Hurdles to Progress in Long-form Question Answering
June 6-11, 2021
Kalpesh Krishna♠ kalpesh@cs.umass.edu
Aurko Roy♦ aurkor@google.com
Mohit Iyyer♠
♠University of Massachusetts Amherst, ♦Google Research
Hurdles to Progress in Long-form Question Answering
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
The task of long-form question answering (LFQA) involves retrieving documents relevant to a given question and using them to generate a paragraph-length answer. While many models have recently been proposed for LFQA, we show in this paper that the task formulation raises fundamental challenges regarding evaluation and dataset creation that currently preclude meaningful modeling progress. To demonstrate these challenges, we first design a new system that relies on sparse attention and contrastive retriever learning to achieve state-of-the-art performance on the ELI5 LFQA dataset. While our system tops the public leaderboard, a detailed analysis reveals several troubling trends: (1) our system's generated answers are not actually grounded in the documents that it retrieves; (2) ELI5 contains significant train / validation overlap, as at least 81% of ELI5 validation questions occur in paraphrased form in the training set; (3) ROUGE-L is not an informative metric of generated answer quality and can be easily gamed; and (4) human evaluations used for other text generation tasks are unreliable for LFQA. We offer suggestions to mitigate each of these issues, which we hope will lead to more rigorous LFQA research and meaningful progress in the future. 1
Introduction
Long-form question answering (LFQA) integrates the retrieval component of open-domain QA, which involves searching a large external knowledge source for documents relevant to a given question, with a text generation component to produce paragraph-length answers. Significant progress has been made on open-domain QA datasets such as Natural Questions (Kwiatkowski et al., 2019), whose questions are answerable with short phrases and entities, by leveraging dense retrieval techniques like ORQA (Lee et al., 2019), REALM (Guu et al., 2020), and DPR (Karpukhin et al., 2020;Lewis et al., 2020c;Izacard and Grave, 2020). Methods inspired by these results have recently been combined with pretrained language models (Lewis et al., 2020b;Petroni et al., 2020) and applied to the Reddit-derived "Explain Like I'm Five" (ELI5) dataset (Fan et al., 2019), which is the only publicly-available large-scale LFQA dataset.
The recently proposed KILT benchmark (Petroni et al., 2020), which compares retrieval-augmented models across a variety of knowledge-intensive tasks including ELI5, automatically evaluates LFQA models by the quality of both generated answers (ROUGE-L against reference answers) and retrieved documents (R-precision against human-annotated relevant documents). In this paper, we build a state-of-the-art system 2 for ELI5 by using a sparse Transformer variant (Roy et al., 2020) to condition over Wikipedia paragraphs returned by a REALM-style retriever (Guu et al., 2020). However, despite its success on the KILT leaderboard, our system does not actually use the documents that it retrieves! To measure the effect of retrieval on generation quality, we design a control experiment in which retrieved documents are replaced with randomly-sampled documents at inference time. Results from both human A/B tests and automatic metrics like ROUGE-L demonstrate that conditioning on random documents has almost no effect on generated answer quality (Figure 1c). We recommend that future LFQA research report the results of such control experiments in addition to reporting generation and retrieval quality.

How can a system using random retrieval perform well on ELI5? Our analysis reveals that this result is partially due to significant train / validation overlap in the ELI5 dataset (Figure 1a), which eliminates the need for external retrieval. A human study shows that at least 81% of validation questions have a paraphrase in the training set, and almost all validation questions are topically similar to a training set question. While Fan et al. (2019) attempted to identify and remove question overlap using TF-IDF similarity, more complex semantic matching methods & human verification are needed to address this issue in future LFQA datasets.
Digging deeper, we identify fundamental issues with using ROUGE-L to evaluate generated answer quality (Figure 1b). Simple baselines such as just repeatedly copying the question, or choosing a random training set answer, can outperform LFQA systems such as RAG (Lewis et al., 2020c) in terms of ROUGE-L. On the other hand, our system achieves higher ROUGE-L than reference human-written answers, which is misleading since human A/B testers strongly prefer reference answers to our system's. We conclude that ROUGE-L is not a reliable metric to evaluate LFQA due to its large and relatively unconstrained output space (e.g., compared to translation or summarization), and we offer suggestions for better automatic & human evaluations to enable meaningful progress on this task.
A state-of-the-art LFQA system
The ELI5 task (Fan et al., 2019) asks models to generate paragraph-length answers to open-ended questions in English that often rely on world knowledge (e.g., how do jellyfish function without brains or nervous systems?). LFQA systems thus benefit from conditioning answer generation on relevant documents from the web (such as the Wikipedia article about jellyfish). While large-scale pretrained language models store surprising amounts of world knowledge within their parameters (Petroni et al., 2019;Roberts et al., 2020), external document retrieval not only augments this intrinsic knowledge but also grounds model outputs in a knowledge source, which provides interpretability.
In this section, we describe our proposed LFQA system, which conditions answer generation on Wikipedia articles identified by a pretrained retriever. We use a dense retriever trained by scaling up a distantly supervised algorithm from Jernite (2020). Since retrieved articles can be quite long and often exceed the maximum sequence length of pretrained models like BERT (Devlin et al., 2019), we use a sparse-attention variant of the Transformer to allow modeling over longer sequences. While our system sets a new state-of-the-art on ELI5, we question the significance of this result in Section 3.
Retriever
We begin by specifying our dense retriever ("contrastive REALM" or C-REALM), which returns documents related to an input question. Consider a corpus of long-form questions and answers, represented by $(q_i, a_i)_{i=1}^{N}$. Our retriever uses $q_i$ as a query to retrieve $K$ documents $(r_{i,j})_{j=1}^{K}$ from a knowledge corpus (Wikipedia), which is enabled by an encoder network that projects both questions and candidate documents to a 128-d shared embedding space. Like REALM (Guu et al., 2020), our encoder is a BERT-base Transformer (Devlin et al., 2019) with a final projection layer.
Since the ELI5 dataset does not include gold retrievals, we train our retriever by scaling up a method recently introduced by Jernite (2020) that uses gold answers for distant supervision. The key idea is to push the encoded vector for a question close to a vector representation of its ground-truth answer(s), but away from all other answer vectors in the mini-batch (negative examples). Intuitively, this method works because both ELI5 answers and external documents are of paragraph length (documents are paragraph-length chunks from Wikipedia). Concretely, we optimize the loss,
$$\text{loss} = -\sum_{(q_i, a_i) \in B} \log \frac{\exp(\mathbf{q}_i \cdot \mathbf{a}_i)}{\sum_{a_j \in B} \exp(\mathbf{q}_i \cdot \mathbf{a}_j)}$$
where $B$ is the mini-batch and $\mathbf{q}_i$, $\mathbf{a}_i$ are the encoded vector representations for $(q_i, a_i)$. This objective is based on contrastive learning, a method that has been used effectively for semi-supervised learning (Chen et al., 2020) and dense retriever training (Karpukhin et al., 2020). Scaling up from Jernite (2020), who used a mini-batch size of 512 and initialized their retriever with BERT, we use much larger mini-batches of size 12,288 (and hence, many more negative examples) and initialize our retriever with a strong pretrained retriever, the REALM model (Guu et al., 2020) trained on the Common Crawl News (CC-News) corpus. These design decisions greatly improve retriever quality, as we observe in an ablation study (see Appendix A.2). During inference, we perform a maximum inner-product search (MIPS) with the ScaNN library (Guo et al., 2020) to efficiently find the top K documents. In all our experiments we use K = 7, following the setup in Guu et al. (2020).
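To make the objective concrete, here is a minimal NumPy sketch of the in-batch contrastive loss described above. The batch size, embedding dimension, and use of mean rather than sum over the batch are illustrative assumptions, not the exact C-REALM implementation.

```python
import numpy as np

def contrastive_loss(q_emb: np.ndarray, a_emb: np.ndarray) -> float:
    """In-batch contrastive loss for a dual encoder.

    q_emb, a_emb: [batch, dim] encoded questions and their gold answers.
    Every other answer in the mini-batch serves as a negative example.
    """
    # Similarity matrix: scores[i, j] = q_i . a_j
    scores = q_emb @ a_emb.T                       # [batch, batch]
    scores = scores - scores.max(axis=1, keepdims=True)   # numerical stability
    # Row-wise log-softmax; the diagonal holds the positive (question, answer) pairs.
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

# Toy usage: 4 questions/answers embedded in a 128-d shared space.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 128)).astype(np.float32)
a = rng.normal(size=(4, 128)).astype(np.float32)
print(contrastive_loss(q, a))
```

Larger mini-batches directly increase the number of negatives in the softmax denominator, which is why scaling the batch size matters for this objective.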
Generator
We next describe our generator model, which conditions its generated answers on retrieved documents returned by C-REALM. We use the Routing Transformer (RT) from Roy et al. (2020), which is the current state-of-the-art in long-form language modeling. The RT is a sparse attention model that employs local attention as well as mini-batch k-means clustering to better model long-range dependencies in sequences (attention maps in Appendix A.1). Long-form language models such as RT are well-suited to ELI5 as the task requires conditioning answer generation not only on a short question but also on many lengthy retrieved documents.
We pretrain our RT model on PG-19, a long-form language modeling benchmark (Rae et al., 2020) created from approximately 28,000 Project Gutenberg books published before 1919. PG-19 has 1.9B tokens and an average context size of 69K words. While this data is out-of-domain for ELI5, we choose it to encourage long & coherent generation. Our RT is a 22-layer model with 1032 hidden units (486M parameters), a maximum sequence length of 8192 tokens, and a vocabulary of 98K subwords. 3 We fine-tune our model in a decoder-only fashion (Liu et al., 2018; Wolf et al., 2018) by concatenating the top K retrieved documents to the question as $[r_{i,K}, r_{i,K-1}, \ldots, r_{i,1}, q_i, a_i]$ and training the model to predict tokens of the answer $a_i$. We do not backpropagate gradients through the retriever. 4 Retrievals slightly improve perplexity (18.1 vs 17.8), as seen in Wang and McAllester (2020), but do not improve generations (§3.1).
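A minimal sketch of the decoder-only input packing described above: retrieved paragraphs (highest-ranked closest to the question and answer), the question, and the answer are concatenated into one token sequence, and the loss mask covers only the answer tokens. The separator token ids and the lack of per-segment truncation/padding to 288 tokens are simplifying assumptions, not the actual PG-19 tokenization setup.

```python
from typing import List, Tuple

SEP_DOC, SEP_Q, SEP_A = 1, 2, 3   # hypothetical separator token ids

def pack_example(retrieval_ids: List[List[int]],
                 question_ids: List[int],
                 answer_ids: List[int],
                 max_len: int = 8192) -> Tuple[List[int], List[int]]:
    """Builds [r_K, ..., r_1, q, a] plus a 0/1 loss mask over answer tokens only.

    Retrievals are reversed so that the highest-ranked document sits closest to
    the question/answer, which matters for the local attention layers.
    """
    tokens: List[int] = []
    for doc in reversed(retrieval_ids):            # r_K first, r_1 last
        tokens.extend(doc + [SEP_DOC])
    tokens.extend(question_ids + [SEP_Q])
    answer_start = len(tokens)
    tokens.extend([SEP_A] + answer_ids)
    tokens = tokens[:max_len]
    # Loss is computed only on answer tokens; context tokens are masked out.
    mask = [0] * min(answer_start, len(tokens)) + [1] * max(0, len(tokens) - answer_start)
    return tokens, mask

# Toy usage with two fake retrieved documents, a question and an answer.
toks, mask = pack_example([[11, 12, 13], [21, 22]], [31, 32, 33], [41, 42, 43, 44])
print(toks)
print(mask)
```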
Main Experiments
Evaluation: The test set answers are hidden, and hosted on a public leaderboard on the EvalAI platform (Yadav et al., 2019). Answer quality is measured by the maximum overlap of generations with a set of gold answers in terms of unigram F1 score and ROUGE-L (Lin, 2004). Petroni et al. (2020) collected human annotations of Wikipedia articles which support ELI5 gold answers, which enables measuring retrieval quality by computing R-precision (whether the top-1 retrieval matches the annotation) and Recall@5 using the top-5 retrievals. Finally, the KILT benchmark combines R-prec. and ROUGE-L to measure the overall performance of the system via "KILT ROUGE-L". This metric is similar to ROUGE-L, but assigns a score of 0 whenever the top-1 retrieval does not match the gold annotation.

Results: Table 1 contains our results on the test set of ELI5 (also on the public KILT leaderboard). We present four variants of our system, using a different retriever during inference (REALM or C-REALM) and different nucleus sampling p values (Holtzman et al., 2020). All variants outperform prior work in generation quality, with lower-entropy models (p = 0.6) performing best. 6 C-REALM performs competitively with RAG and DPR despite being only distantly supervised, and outperforms REALM. Our proposed RT + C-REALM system achieves a new state-of-the-art on combined performance (KILT R-L). Generations from our model are provided in Figure 2 and Appendix A.4.

Example generation from our system (p = 0.9):

Q: Why are almost all boats white?

A: Boats are generally painted white for obvious reasons, as they are inexpensive and easy to clean. Ships are also often painted to provide a visible visual contrast to the sea. Boats also often have high visibility ports to allow for easy international trade. In addition, many boats have an airbreathing function that makes them easier to spot and stay afloat, which is a good thing if you're a pirate or trying to get into some country's waters. Lastly, many ships are built in the last few decades, so they are not as exotic looking as they were thirty years ago. And lastly, many ships are made out of materials that are easy to work with, and so many of them are painted white as a practical matter. There are many, many more reasons for why ships are white, but those are the ones I know of. I'm sure there are many other reasons, but those are the ones I know of. Edit: I forgot to mention that the other answers about visibility and easy to see ports are also true. Ships are painted to appear to be larger, and more expensive. That makes them appear more impressive, and so they are more likely to be bought by people who want to show off their wealth and status.
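For clarity, here is a small sketch of how the combined KILT R-L metric described in the evaluation details above behaves: an example contributes its ROUGE-L only when its top-1 retrieved page matches the gold provenance annotation, otherwise it contributes zero. The data structures are illustrative; this is not the official KILT evaluation code.

```python
from typing import List, Set

def kilt_rouge_l(rouge_l: List[float],
                 top1_retrieved: List[str],
                 gold_pages: List[Set[str]]) -> float:
    """KILT R-L: ROUGE-L is only credited when the top-1 retrieval is correct."""
    assert len(rouge_l) == len(top1_retrieved) == len(gold_pages)
    per_example = [
        r if page in gold else 0.0
        for r, page, gold in zip(rouge_l, top1_retrieved, gold_pages)
    ]
    return sum(per_example) / len(per_example)

# Toy usage: the second example retrieves the wrong page, so its ROUGE-L is zeroed.
print(kilt_rouge_l([0.25, 0.30], ["Jellyfish", "Coral"], [{"Jellyfish"}, {"Reef"}]))
```

This is why independently strong retrieval and generation systems can still score well on KILT R-L even when the generator never uses the retrieved text, a point discussed further in Section 3.1.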
Analysis
In this section, we conduct a thorough analysis of our model's usage of retrievals (Section 3.1), the impact of overlap in ELI5's train / validation / test folds (Section 3.2), issues with ROUGE-L and performance bounds (Section 3.3), and the difficulty in human evaluation for this task (Section 3.4). At the end of each section, we provide short takeaways with suggestions for future work.
Are generations grounded in retrieval?
While our retrieval-augmented system achieves state-of-the-art performance, we find little evidence that it is actually using the retrieved documents. To measure this, we run an ablation study where at inference time we replace retrieved paragraphs with randomly sampled paragraphs from Wikipedia. We compare this Random baseline with our original system (Predicted) in terms of generation quality as well as the n-gram overlap between the generation and the retrieved paragraphs.

Table 2: Comparison of generations (with p = 0.6) conditioned on predicted retrievals (Predicted) and randomly chosen retrievals (Random). Notice small differences in: (1) ROUGE-L vs gold answers (R-L); (2) n-gram overlap (n-g) with predicted retrievals (vs predicted retr.).

Table 3: Human evaluation results, with the exact number of ratings shown in (·). Annotators are shown a question along with two answers (A, B) in random order and asked to choose one (details in Appendix A.5). For both model variants (p = 0.6, 0.9), we see (1) little difference between generations conditioned on predicted (pred.) or random (rand.) retrievals; (2) strong preference for gold answers over generations.
Generations are similar irrespective of type of retrievals: We present our results in Table 2. Despite not being conditioned on any meaningful retrievals, the Random retrieval model has similar ROUGE-L scores as our Predicted system. Moreover, generations from the Random and Predicted models have similar amounts of 1-gram and 2-gram overlap with the paragraphs retrieved by C-REALM, despite the fact that the Random model does not actually see the retrieved paragraphs. 7 The n-gram overlaps are possibly overestimates due to stopwords (e.g., prepositions, punctuation) and entities which are copied from the question.

Table 4: A fine-grained version of Table 2 measuring the unigram overlap of nouns/numbers (lemmatized nouns, proper nouns, numbers only) in the generations with the input question (vs qn.), retrievals predicted by C-REALM (vs predicted retr.) and randomly sampled retrievals (vs random retr.). Similar to Table 2, notice very little difference with and without retrieval.
To tackle this issue, in Table 4 we measure the fractions of lemmatized nouns, proper nouns and numbers in the generated answer which are present in the predicted retrievals but not in the question. We notice similar trends as before, with only small differences between the two systems. Finally, there is almost no correlation (Spearman ρ = 0.09) between the Predicted model's generation quality and the amount of unigram overlap between its outputs and the retrieved documents (scatter plots in Appendix A.7), strengthening our hypothesis that generations are not grounded in retrievals. 8
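A sketch of the overlap statistic just described, assuming spaCy for POS tagging and lemmatization: we count the fraction of lemmatized nouns, proper nouns, and numbers in a generation that also appear in the retrieved documents but not in the question. The spaCy model name and the exact normalization are assumptions, not the paper's exact analysis script.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed small English model (must be installed)
CONTENT_POS = {"NOUN", "PROPN", "NUM"}

def content_lemmas(text: str) -> set:
    """Lemmatized nouns, proper nouns and numbers in the text."""
    return {tok.lemma_.lower() for tok in nlp(text) if tok.pos_ in CONTENT_POS}

def grounded_fraction(generation: str, retrievals: str, question: str) -> float:
    """Fraction of content lemmas in the generation that occur in the
    retrieved documents but not in the question (cf. Table 4)."""
    gen = content_lemmas(generation)
    if not gen:
        return 0.0
    allowed = content_lemmas(retrievals) - content_lemmas(question)
    return len(gen & allowed) / len(gen)

# Toy usage with short stand-in strings.
print(grounded_fraction(
    generation="Jellyfish use a nerve net instead of a brain.",
    retrievals="The jellyfish nerve net coordinates swimming without a brain.",
    question="How do jellyfish function without brains?"))
```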
Human evaluation validates our findings: As ROUGE-L and n-gram overlap have major limitations for LFQA (Section 3.3), we perform additional human A/B testing on the output of Random and Predicted. Specifically, we ask human volunteers 9 to choose between answers generated by the two systems (presented in random order). As seen in Table 3, humans struggle to choose which of the two answers is more relevant to the question. For both model variants (p = 0.6, 0.9), there is a less than 7% preference for a particular answer type, with humans preferring answers (by 6%) from the Random model for p = 0.9!

Other systems also have this issue, possibly due to source-reference divergence and train-validation overlap: We note that this issue is not unique to our system - other systems on the KILT leaderboard like BART + DPR and RAG actually perform worse than their no-retrieval counterpart (BART) in generation quality, as shown in Table 1. Qualitatively, we found no evidence of retrieval usage in a publicly hosted ELI5 model demo by Jernite (2020). 10 A possible explanation for this issue is high source-reference divergence, a common problem in table-to-text generation (Wiseman et al., 2017; Tian et al., 2019). In Table 2 and Table 4, we measure the n-gram overlap of top-ranked gold validation answers (Gold Ans) with predicted retrievals. This overlap is low and similar to that of our generations, which we suspect encourages our model to ignore retrievals. A second explanation is the large amount of train-validation overlap (Section 3.2), which eliminates the need for retrieval. 11 We also evaluated our system on Wizard of Wikipedia (Dinan et al., 2019), an unconstrained dialogue generation task with single-sentence dialogues (much shorter than ELI5). As seen on the public KILT leaderboard, 12 our system has lower ROUGE-L scores than the BART / RAG baselines. Another possible explanation is issues with ROUGE-L itself, as discussed in Section 3.3.
Takeaway (better evaluation of grounding): For evaluating LFQA, it is important to run control experiments with random retrievals & measure grounding of generations in retrieval. While the KILT benchmark does attempt to measure the combined retrieval + generation performance via KILT R-L, it does not check whether the generations actually used the retrievals. In other words, one can submit independent retrieval & generation systems, but still perform well on the combined score. This may not be an issue for short-form QA tasks like Natural Questions, since the gold answer is often exactly contained as a span in the gold retrieval. Also, as retrieval might be less important for large language models with parametric knowledge (Roberts et al., 2020), the KILT-RL strategy of simply aggregating the top-1 retrieval score with ROUGE-L unfairly penalizes systems not relying on retrieval. 13

10 https://huggingface.co/qa
11 While we do not have access to generations from baselines on the KILT leaderboard, example generations from the demo of the BART model in Jernite (2020) are significantly shorter (59 words avg.) than our generations (187 words avg.).
12 https://eval.ai/web/challenges/challenge-page/689/leaderboard/1909
Training / Validation Overlap
Our experiments in Section 3.1 show that model performance is mostly unchanged by conditioning generation on randomly sampled retrievals instead of predictions from C-REALM. Despite not using retrievals, we observe qualitatively that our model displays a large amount of parametric knowledge ("Faraday Cage" in Figure 1c), which is surprising since it was pretrained on novels from Project Gutenberg (not Wikipedia). In this section, we discover that a major reason for ignoring retrievals is the large amount of train / validation overlap in ELI5. While Fan et al. (2019) attempted to fix this issue through TF-IDF overlap, this method is insufficient to identify all question paraphrases, as we find significant overlap between the training set and the KILT validation set of ELI5. 14 ELI5 is not the only dataset with substantial train / test overlap: Lewis et al. (2020d) identify similar issues with short-form QA datasets like Natural Questions.
Finding similar questions & measuring overlap:
We use our retriever C-REALM to retrieve similar questions from the training set, since it has learned to map questions to a feature-rich embedding space. For each validation question, we retrieve the 7 most similar training set questions. We use both human and automatic evaluation to calculate the amount of overlap. For human evaluation, we show annotators on Amazon Mechanical Turk 15 a validation set question and a retrieved training set question, and ask them to annotate the pair as 0: no paraphrase relationship; 1: on similar topics, but different questions; 2: approximately the same question (an adaptation of the paraphrase evaluation of Kok and Brockett, 2010). We take 300 validation set questions and ask three crowd-workers to rate them against retrieved training questions on this scale, and consider the label with majority rating. To improve quality, we manually verify their annotations. Table 5 shows that 81% of validation set questions have at least one paraphrase in the training set, while all annotated questions have at least one topically similar question in the training set, which indicates substantial training / validation overlap. The experiment had "fair agreement" with a Fleiss κ of 0.29 (Fleiss, 1971; Landis and Koch, 1977).

Table 5: A human evaluation measuring the amount of overlap between validation set questions (qns) and retrieved questions from the training set.
  qns with at least one train set paraphrase:            81%
  qns with at least one topically similar train set qn:  100%
  % of all pairs marked paraphrases:                      39.5%
  % of all pairs marked topically similar:                47.8%
  % of all pairs marked as non-paraphrases:               12.7%
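Below is a minimal sketch of the similar-question retrieval step described above: embed every training question and each validation question with the retriever's question encoder, then take the 7 nearest training questions by inner product. The embeddings here are random stand-ins for C-REALM's 128-d question encodings, and brute-force search replaces the MIPS index used in practice.

```python
import numpy as np

def top_k_similar(valid_emb: np.ndarray, train_emb: np.ndarray, k: int = 7) -> np.ndarray:
    """Indices of the k training questions with the highest inner product
    against each validation question embedding."""
    scores = valid_emb @ train_emb.T                  # [n_valid, n_train]
    # argsort ascending, keep the last k columns, reverse to descending order
    return np.argsort(scores, axis=1)[:, -k:][:, ::-1]

# Toy usage with random stand-in embeddings (128-d, as in C-REALM).
rng = np.random.default_rng(1)
train_emb = rng.normal(size=(1000, 128)).astype(np.float32)
valid_emb = rng.normal(size=(3, 128)).astype(np.float32)
print(top_k_similar(valid_emb, train_emb))            # shape [3, 7]
```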
As manually annotating question overlap can be expensive and time-consuming, we also experiment with automatic overlap detection methods. In particular, we use a RoBERTa-large binary classifier (Liu et al., 2019) fine-tuned on the Quora Question Paraphrase (QQP) dataset (Iyer et al., 2017) from the GLUE benchmark (Wang et al., 2019). For 43.6% of the ELI5 validation set, this classifier marked at least one retrieved question as a paraphrase (46% for the 300 questions we annotated). Qualitatively, we notice that this classifier often mis-classifies retrieved questions that are valid paraphrases but exhibit significant lexical or syntactic divergence. This observation, along with the smaller fraction of valid paraphrases in the QQP training set (37%), partially explains the gap between automatic & human evaluations.
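A sketch of this automatic overlap check, assuming a Hugging Face transformers sequence-classification checkpoint fine-tuned on QQP (the checkpoint name below is a placeholder; the paper fine-tunes RoBERTa-large on QQP itself): a validation question is flagged if any of its retrieved training questions is classified as a paraphrase.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint name: any QQP-fine-tuned sequence classifier works here.
CKPT = "your-roberta-large-qqp-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSequenceClassification.from_pretrained(CKPT)
model.eval()

def is_paraphrase(q1: str, q2: str, threshold: float = 0.5) -> bool:
    """Score a question pair with the QQP classifier (label index 1 = paraphrase, assumed)."""
    inputs = tokenizer(q1, q2, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)
    return probs[0, 1].item() > threshold

def has_train_paraphrase(valid_q: str, retrieved_train_qs: list) -> bool:
    """A validation question 'overlaps' if any retrieved training question is a paraphrase."""
    return any(is_paraphrase(valid_q, q) for q in retrieved_train_qs)
```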
Using retrieved QA for generation: Since ELI5 contains a significant amount of overlap between the training and validation sets, a system can simply copy the answers of retrieved training set questions instead of actually doing generation. Table 7 shows that by using the longest answer within the top-K retrieved questions, we outperform two prior systems (RAG, BART + DPR) that use retrieval-augmented generation. As an upper bound, we also consider a system which uses the best possible answer to retrieved training set questions in terms of ROUGE-L (best top-K train answer). This system gets 28.5 ROUGE-L, outperforming all others.

Table 6: ELI5 performance difference (for the p = 0.6 model) between subsets of validation QA having a question paraphrase (overlap) and not having a question paraphrase (not overlap) in the training set. We see the overlap subset has much better retrieval performance and slightly better generation performance.
ELI5 performance on overlapping QA: Finally, we measure the performance difference between validation questions that overlap with the training set vs. those that do not. Since we only have human annotations for 300 questions (the no-overlap subset has only 53 samples), we present this analysis using the QQP classifier's outputs as well. In Table 6, we notice large differences of 6.6 RPrec, 8.1 R@5 in retrieval performance favoring the overlap subset, but only a small generation score gain of 0.8 F1, 0.4 R-L (which may be misleading as discussed in Section 3.3).
Takeaway (careful held-out curation): Based on our findings, we suggest that more careful dataset curation for LFQA tasks is needed to prevent duplicates. While we acknowledge the efforts of Fan et al. (2019) to fix this issue, we also suggest alternative methods to control overlap and focus on evaluating generalization in held-out sets: (1) automatically retrieving paraphrases and then running human validation to eliminate them; or (2) holding out entire genres or domains to reduce the possibility of overlap -for example, keeping Q/A on Sports only in the held-out sets. Note that simply pruning the existing splits using these criteria will significantly reduce the size of the held-out datasets; so we suggest re-splitting the train/validation/test splits from the entire pool of collected questions.
ROUGE-L Bounds on ELI5 Performance
We have seen that simply copying the answer of a close question paraphrase from the training set achieves 28.5 ROUGE-L with an optimal selection among retrieved questions, outperforming all computational models. To further probe what ROUGE-L rewards, we compute two trivial lower bounds: (1) copy the question 5 times and concatenate, as longer outputs boost ROUGE-L (Appendix A.6);
(2) retrieve a random training set answer. Our first baseline contains entities often present in the gold answer, but without actually answering the question. Our second baseline follows the "style" of an answer but is completely off-topic.
As an upper bound, we estimate the ROUGE-L of gold answers themselves. On an average, there are 12 gold answers per question, so we measure the ROUGE-L of the longest gold answer with respect to the other gold answers. We also measure the maximum pairwise ROUGE-L between two gold answers for the same question. 16 We only calculate upper bounds for the validation set, since the gold answers of the KILT test set are hidden.
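To make the bounds concrete, here is a sketch using the rouge_score package (one common ROUGE-L implementation; the official KILT scorer may differ in details): the lower bound concatenates the question five times, and the upper bound scores the longest gold answer against the remaining gold answers.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def rouge_l(prediction: str, references: list) -> float:
    """Max ROUGE-L F1 of a prediction against a set of reference answers."""
    return max(scorer.score(ref, prediction)["rougeL"].fmeasure for ref in references)

def copy_question_baseline(question: str, references: list) -> float:
    """Trivial lower bound: concatenate the question five times."""
    return rouge_l(" ".join([question] * 5), references)

def longest_gold_bound(gold_answers: list) -> float:
    """Upper bound: score the longest gold answer against the other gold answers."""
    longest = max(gold_answers, key=len)
    others = [a for a in gold_answers if a is not longest]
    return rouge_l(longest, others) if others else 0.0

# Toy usage with made-up strings.
refs = ["Jellyfish move using a nerve net.", "They contract their bell to swim."]
print(copy_question_baseline("How do jellyfish move?", refs))
print(longest_gold_bound(refs))
```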
Lower bounds beat prior work, upper bounds have low ROUGE-L: We compare our bounds with actual retrieval-augmented generation systems in Table 7. Both our lower bounds (random training answer, copy input) are quite competitive, outperforming RAG (Lewis et al., 2020c) and performing close to BART + DPR (Petroni et al., 2020) without actually answering the question! This shows that ROUGE-L is fairly sensitive to simply copying entities from the question as well as stylistic properties of ELI5. On the other hand, upper bounds (longest gold answer) perform worse than our system (21.2 vs 24.4). Suspecting that this result is misleading, we run another human A/B test by showing volunteers a question and asking them to choose between answers generated by our system and the longest gold answer, shuffled at random. 17 As seen in Table 3, the majority of humans prefer the gold reference answers over generations (68% vs 14% for p = 0.6). In interviews with human annotators after completing the task, they reported that both answers were often fluent and stylistically similar, but one eventually veered off-topic.

16 Note that different gold answers were not written independently, as Reddit users writing answers can read existing answers and may want to provide a non-overlapping perspective. Due to the high train/valid overlap, the best top-7 retrieved answer could be a better upper bound, since it is from another Reddit post (and performs better than the best gold answer).
Takeaway (better automatic metrics needed):
Our experiments demonstrate that computing the ROUGE-L of generations against gold answers is not a meaningful way to evaluate LFQA systems, since it is not selective enough to differentiate between valid/invalid answers. There is a very small margin of improvement between trivial lower bounds and strong upper bounds, with the absolute scores of upper bounds being quite low. We suspect this is due to the long length of answers and fairly unconstrained and large output space.
Difficulty of Human Evaluation
To better understand the inherent difficulty of evaluation in ELI5, we interviewed human annotators (of Table 3) and found two challenges:
(1) Unfamiliarity with question topics: While most annotators found the Q/A interesting, they were often unfamiliar with the technical topics discussed in the questions. This made it hard for them to assess answer correctness. The ELI5 dataset has questions in a wide variety of topics (History, Politics, Biology etc.), while most annotators were Computer Science graduate students. While we did allow annotators to use Wikipedia, they mentioned domain-experts will be better judges of factual correctness of answers.
(2) Length of Answers: Annotators mentioned the paragraph-long length of answers made the task quite challenging. Annotators reported taking an average of 2 minutes per answer pair, many of which required careful thought & concentration. This was especially difficult when only part of the answer was correct and the rest had contradictions or repetitions, a common theme in our generations.
Ethical Considerations
Our system faces a similar set of issues as most modern text generation technology, like fabrication of facts (Zellers et al., 2019), potential for misuse (Brown et al., 2020) and reflecting biases prevalent on Reddit (the ELI5 dataset has been built using the r/ELI5 subreddit). In our work, we attempted to make text generators more factually grounded by conditioning generations on retrieved Wikipedia articles, hoping to reduce fact fabrication. Unfortunately, a thorough analysis (Section 3.1) has revealed that our system is still not grounding its generations in retrievals, and we have recommended the design of better metrics to measure factual correctness to tackle this issue. Our final models were trained using 64 Google Cloud TPUs for a total of 32 hours. As mentioned in the Google 2019 environment report, 18 "TPUs are highly efficient chips which have been specifically designed for machine learning applications". These accelerators run on Google Cloud, which has "matched 100% of its electricity consumption with renewable energy purchases, and has committed to fully decarbonize its electricity supply by 2030" (https://cloud.google.com/sustainability). More details on training time are provided in Appendix A.1.
A.1 Training Details

Similar to the REALM implementation, we use separate processes to run the retriever and generate training data (using a MIPS search). Since our retriever is frozen, we do not use the document index refresher available in their codebase.
Retriever: Our retriever is trained on 64 Google Cloud TPUs for a total of 4k steps and a batch size of 12288. We do early stopping on the validation data (with a smaller batch size of 512 due to smaller P100 GPU memory). Our model converges quite fast, reaching its best performance in 1.5k steps (in 43 minutes) and needing 103 minutes for the full set of 4k steps.
Generator: Our generator is trained on 64 Google Cloud TPUs, for a total of 100k steps on the ELI5 training set. We use the pg19_local_cluster8k configuration available in the Routing Transformer implementation. Besides the default hyperparameters, setting 15% input, attention and ReLU dropout was critical to prevent overfitting on the training set. We use a learning rate of 5e-5. Our retrievals, questions and answers are truncated / padded to 288 subword tokens (using the PG19 subword tokenizer). We use a minibatch size of 128 QA pairs, which corresponds to 332k tokens per mini-batch (of which, the loss is computed over the last 288 answer tokens, or 37k total tokens). We do not compute loss over padded tokens, and use special symbols to separate different parts of the input context. We reverse the retrieved paragraphs in context since the model uses local attention layers, and we wanted higher ranked retrievals to appear closer to the answer tokens. Our models take about 30 hours to finish 100k steps (0.92 steps / second). Hyperparameter Choices: We experimented with several different pretraining strategies (using Wikipedia), smaller model variants and hyperparameter choices manually in preliminary experiments. All these experiments performed quite poorly on ELI5, producing very short and sometimes incoherent responses. Finally, switching to a Routing Transformer model which was pretrained on a longform language modeling dataset (PG-19) significantly improved generation quality. Hyperparameters for this pretrained model (like hidden size / number of layers) were manually chosen with model capacity in mind. For our final experiments with this pretrained model we did not perform any hyperparameter search during training, primarily due to the expensive setup required to train the system. During inference, we tuned the nucleus sampling value from 0.0 to 1.0 in increments of 0.1, choosing the value with the best validation set performance. Our hyperparameter choices for contrastive learning on the retriever have been justified in an ablation study in Appendix A.2. Notably, we use very large minibatches of 12,288 to scale the number of negative examples. To train this model, we used the standard trick of data parallelism across 64 hardware accelerators. This resulted in an effective mini-batch size of 192 per chip, which is small enough to fit a BERT-base sized model on a TPU v3 chip's memory. To accumulate information across different chips before the final softmax, we used the tf.tpu.cross_replica_sum function (using an open-source wrapper found here).
A.2 Ablation Study of C-REALM
One of our contributions is scaling up a distantly supervised objective for training retrievers on ELI5, originally described in Jernite (2020). This method uses in-batch negative sampling, making minibatch size a critical hyperparameter for better contrastive learning. We perform controlled experiments initializing our retrievers with REALM-CCNews (Guu et al., 2020), varying batch size while keeping all other hyperparameters consistent.
In Table 8, we notice a steady increase in performance as minibatch size is increased, with the largest gains coming by doubling the batch size in Jernite (2020) from 512 to 1024. Finally, in preliminary experiments we saw no benefit of more intelligent negative sampling schemes.
Batch size              R-Prec   Recall@5
REALM (pretrained)        6.6      14.9
256                       6.2      11.0
512 (Jernite, 2020)       6.8      12.6
1024                     11.5      21.0
12288 (Ours)             13.3      21.2

Table 8: The effect of minibatch size on the validation performance of C-REALM. As a baseline, we also add the retrieval performance of the REALM pretrained model which is used as an initialization.
Next, we investigate the effect of initialization on the training of C-REALM. Unlike Jernite (2020), who initialize their model with BERT, we initialize our retriever with a pretrained self-supervised retriever before training. As a baseline, we initialize our model with ICT, a weaker self-supervised retriever introduced in Lee et al. (2019). Both models are trained with minibatch sizes of 12288. In Table 9, we notice a large improvement in performance when using a better initialization, confirming our design decisions.
A.3 Number of trainable parameters
In Table 10 we present the number of trainable parameters in our model compared to baselines on the leaderboard. Our generator is slightly larger than the models used in prior work, but we utilize a smaller retriever due to the shared query and candidate encoders in REALM. Overall, our system has a similar total number of parameters as baseline models like RAG and BART + DPR.
Initialization               R-Prec.   R@5
REALM (pretrained)             6.6     14.9
ICT (Lee et al., 2019)         9.3     16.5
REALM (Guu et al., 2020)      13.3     21.2

Table 9: The effect of initialization on C-REALM. As a baseline, we also add the retrieval performance of the REALM-CCNews pretrained model without any finetuning on ELI5.
Model           Generator   Retriever   Index
T5-base           220M         -          -
BART              406M         -          -
RAG               406M        220M       15B
BART + DPR        406M        220M       15B
RT + C-REALM      486M        110M       15B
A.4 Generations from our System
More generations have been provided (along with retrievals, highlighted to show n-gram overlap) in the supplementary material (data) as HTML files. We also present a few samples in Table 16.
A.5 Human Evaluation Setup
We conducted several A/B tests between variants of our model using human annotators. We asked a total of 20 participants for help who voluntarily agreed to help with the annotation process. Most participants were English-speaking graduate students in computer science. In every test, participants were shown a question along with two answers (generated by different systems) presented in a random order. They were then asked to choose which generation (1) answered the question better / which answer was more relevant to the question;
(2) was more coherent / had less repetition; (3) was more factually correct. Since some annotators had a limited time, we asked them to prioritize question (1) over (2) / (3). Annotators were allowed to select "Tie" if they could not choose between the systems. We also permitted them to use search engines, but suggested restricting search to Wikipedia. We present all our results in Table 15. We also interviewed some participants after the annotation process and discuss our findings in Section 3.4. Note that while these A/B tests help us understand which system is relatively better, they do not provide an absolute measure of performance (Celikyilmaz et al., 2020) -annotators reported that there were cases where both answers were very good and other cases where both were very poor. This is a limitation of A/B testing.
A.6 Effect of length on ROUGE-L

In this section we measure the effect of output lengths on ROUGE-L scores. To conduct this experiment, we truncate generations by our system to a fixed fraction of tokens across all instances. As we see in Table 11 in the Truncate column, shorter generations tend to have lower ROUGE-L. To disentangle the effects of length and content, we also measure the generation quality by repeating the truncated generations several times until they match the original generation length. In the Repeat 1/f times column, we notice a gap between our model's original generation (24.4 ROUGE-L) and the equal-length truncated generations with repetition. These results indicate that while length helps improve ROUGE-L scores, simple repetition is insufficient.
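A sketch of the truncation experiment in this section: generations are cut to a fraction f of their tokens, and in the "Repeat 1/f times" condition the truncated text is tiled back to roughly the original length before scoring. Token counting by whitespace is a simplification of the actual tokenization.

```python
def truncate(generation: str, f: float) -> str:
    """Keep the first fraction f of whitespace tokens."""
    tokens = generation.split()
    return " ".join(tokens[: max(1, int(len(tokens) * f))])

def truncate_and_repeat(generation: str, f: float) -> str:
    """Truncate to a fraction f, then repeat until the original length is matched."""
    tokens = generation.split()
    kept = truncate(generation, f).split()
    repeated = (kept * (len(tokens) // len(kept) + 1))[: len(tokens)]
    return " ".join(repeated)

example = "the quick brown fox jumps over the lazy dog today"
print(truncate(example, 0.5))             # first half of the tokens
print(truncate_and_repeat(example, 0.5))  # half, tiled back to the original 10 tokens
```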
A.7 More experiments on measuring retrieval grounding of generations
In this section we provide some more experiments testing the grounding of generations in retrieved documents. Overall, trends are consistent with our observations in Section 3.1.
Scatter plots between generation quality and unigram overlap with retrievals: We present this scatter plot in Figure 4. There is no correlation between the two quantities, with Spearman ρ = 0.09.
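The correlation reported here can be reproduced with a one-liner from SciPy, given per-example ROUGE-L scores and unigram-overlap fractions (the numbers below are illustrative, not values from the paper).

```python
from scipy.stats import spearmanr

rouge_l_scores  = [0.21, 0.25, 0.19, 0.30, 0.24]   # per-example generation quality
unigram_overlap = [0.48, 0.52, 0.55, 0.47, 0.50]   # per-example overlap with retrievals

rho, p_value = spearmanr(rouge_l_scores, unigram_overlap)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
```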
Instances with correct predicted retrieval: In Table 12, we present results similar to Section 3.1, considering only those instances where at least one retrieved document matched the gold annotation (roughly 23% of instances). We also present a scatter plot on the same set of instances in Figure 5 and note a low correlation of ρ = 0.13.

Table 12: Comparison of generations conditioned on retrievals from C-REALM (Predicted) and randomly chosen retrievals (Random), for those cases where C-REALM predicted the correct retrieval. Notice very small differences in generation quality (R-L) as well as the fraction of n-grams (n-g) in the generation overlapping with retrievals predicted by C-REALM (vs predicted retr.). To control for overlap due to stopwords, we also add n-gram overlaps with the randomly sampled retrievals.

Figure 5: Scatter plot for generations from the p = 0.6 model between generative quality (ROUGE-L vs reference on X-axis) and grounding with retrieval (unigram overlap with retrieved documents on Y-axis). Unlike Figure 4, this plot only considers those cases where C-REALM predicted the correct retrieval. The plot shows very little correlation between the two quantities (Spearman ρ = 0.13).
Experiments with p = 0.9: We conduct additional experiments studying our model variant with higher nucleus sampling values. As we saw in Section 2.3, these generations tend to be more fluent and coherent, but less relevant to the question. In Table 13 and Table 14 we find trends consistent with Section 3.1, with very little difference between models conditioned on retrievals from C-REALM and random retrievals.

Table 13: Comparison of generations (with p = 0.9) conditioned on retrievals from C-REALM (Predicted) and randomly chosen retrievals (Random). Notice very small differences in: (1) ROUGE-L vs gold answers (R-L); (2) n-gram overlap (n-g) with retrievals predicted by C-REALM (vs predicted retr.). Gold answers also have a similar overlap with predicted retrievals. To control for overlap due to stopwords, we also add n-gram overlaps with the randomly sampled retrievals.
Table 14: A fine-grained version for p = 0.9 measuring the unigram overlap of the generations (lemmatized nouns, proper nouns, numbers only) with the input question (vs qn.), retrievals predicted by C-REALM but not in the question (vs predicted retr.), and randomly sampled retrievals but not in the question (vs random retr.).

A: The same thing that causes the clouds you see at the start of a storm to form in the first place. The air pressure changes as the jet moves through the air, causing it to cool down and condense into water droplets. These droplets are in fact a lot bigger than the water droplets in the clouds, so when the plane passes through them, they can cause the air pressure to change, causing the water droplets to condense into clouds. This is what you see when you look at a cloud formation from the ground. In addition to this, when the plane passes through the clouds, it also moves through the air, causing it to change its temperature, causing the water droplets to condense and form clouds. These clouds are often quite large, and can block out the sun. This is what you see when you look at a contrail. Finally, the water vapor that forms at high altitudes is cooled by the air above, and as it cools, it condenses into clouds. This is what you see when you look at a cumulonimbus cloud. These clouds are even larger than the water droplets in the clouds, and they block out the sun.
Q: How do shampoo and conditioner work?
A: The key ingredients in shampoo and conditioner are usually surfactants, they tend to attract oil from your skin. Shampoo will usually contain an anti-dandruff agent (dandruff is caused by fungi in your skin which cause dandruff). They also contain some moisturizing agents (moisturizers are things that stay in your skin longer and protect your scalp from cleaning), and a lot of anti-dandruff (dandruff is a side effect of high doses of both surfactants and moisturizers) Conditioner will contain moisturizing agents, like a skin moisturizer. They tend to cause oils to be separated from your hair (e.g. wash your hair with conditioner instead of shampoo). They both have a foaming agent that makes them foam up (usually Hydrogen Peroxide). They also have a foaming agent that stops dirt from attaching to your hair. Some conditioner will also suppress the action of the shampoo (to help prevent dirt from attaching to your hair). So what's the difference between shampoo and conditioner? Well, if you combine both the shampoo and conditioner

Q: When glancing at a clock, why does the first second after glancing at it sometimes feel longer than the rest?
A: This is known as chronostasis. Your brain gets used to seeing the second hand move after the first, so it ignores the first one. If you were to look at the clock a lot, you'd see the second hand move a lot more than the first, and you'd be able to tell the difference between the second and the first. The second hand will appear to move much slower than the first hand because your brain is used to seeing the second hand move a lot more than the first. If you were to go on a long trip, you'd get used to the second hand moving a lot more than the first, and you'd have no problem telling the difference between the second and the first hand. It's kind of like how you don't notice your nose when you're driving, but if you look at it for a while, you start to notice it. Your brain is just getting used to seeing the nose move a lot more than the other, and it can't tell the difference between the two. It's kind of like that with your eyes.
Q: Why do Britain and other English empire countries still bow to monarchs? What real purpose does the queen serve?
A: Depends what you define as "purpose". There is no specific purpose for the British monarch -rather, they are figureheads that serve a role. The role is to have a high degree of influence over a country, though the rules of the monarchy can somewhat vary from country to country, and the ceremony of "kissing the royal corpse" does not always involve kissing the monarch. Whether that's the only reason for the custom, or if it was the only reason, I can't say, but that's the reasoning. When the U.K. was in full power, the monarch was a puppet of Parliament, and the powers of the monarch were transferred to the Prime Minister and thus to the Prime Minister's deputy, who then became the Prime Minister. Since then, Parliament has been able to vote on legislation that goes through the monarch, although they may still act as the monarch's representative in negotiating treaties, which can have very very deep consequences. The Queen's role, as a representative of Britain, doesn't necessarily involve her formal approval of any of the laws or legislation that goes through Parliament, though. All that's left is for the legislature to pass laws

Q: What exactly is fire, in detail? How can light and heat come from something we can't really touch?
A: Fire is the chemical reaction of fuel (oxygen) with (what we call) impurities (ash, soot, oil etc). These impurities are created due to a number of things. If you were to study the chemical composition of a log (or a candle) and analyse it you would discover the chemical formula (A, C, and G) which is a simplified version of what actually happens. There is always fuel, and there is always the ash/soot. Wood is made up of mostly carbon and hydrogen and these are the fuels that create the fire. Hydrogen and oxygen are reactive, and they form water (H2O) and CO2 (CO2). If we split a log into smaller pieces, we start to see the process of combustion as it occurs. The carbon is burned, and with the explosion of CO, H2O and H2CO2, we get water and carbon dioxide (mostly). If we smash a log together, we start to see what happens when we also hit another log with another log...imagine smashing two sand castles together. We see how the sand castles and brick castles form and collapse at different rates...then it starts to burn. The smoke, ash and flames are the actual combustion of fuel.
Q: What goes on in those tall tower buildings owned by major banks?
A: The actual buildings are not there for profit but for show. There are many real world historical buildings that have little to no effect other than being a symbol of an economic entity or symbol of an era or location, and nothing more. For example look at Sears, Sears didn't care what went on inside, it was all about the _appearance_ of its location, the prestige of the location, the facilities and so on. It didn't care about how long it took it to operate, it was about how much people would pay to go see it. Sears was a landmark as a cultural movement and other big companies followed suit, so if you want to see a building you've never seen before, you have to go see Sears, just like you have to see a Toyota Camry for Toyota Camry. They used to be all about building new factories, some of them if I recall, but now that they're bigger, that means that more factory jobs are coming to them. You've probably seen them in stores as stores where people buy and sell stuff, so there aren't that many places for them to come from. Instead, it's just for show, a symbol of rich people.
Figure 1: A summary of the major hurdles (a-d) to progress in long-form question answering with ELI5. (a) Many held-out questions are paraphrased in the training set; the best answer to similar train questions gets 27.4 ROUGE-L. (b) Longer outputs get higher ROUGE-L. (c) Conditioning answer generation on random documents instead of relevant ones does not measurably impact its factual correctness. (d) Annotators find it difficult to judge long answers (with repetition) & correctness of technical content.
Baselines: We compare our model with the other entries on the ELI5 KILT leaderboard which are either generation-only, like T5-base (Raffel et al., 2020) and BART (Lewis et al., 2020b), or variants of BART using retrieval such as RAG (Lewis et al., 2020c) and BART + DPR (Petroni et al., 2020). These systems are based on massive pretrained language models, with similar number of parameters as our model (details in Appendix A.3).
Figure 2: Example generation from our LFQA system with p = 0.9. Generations are long & coherent, but suffer from repetition towards the end (more in Appendix A.4 and the attached supplementary data).
Figure 3: Figures (from Roy et al., 2020) showing 2-D attention schemes for the sparse attention mechanism used in the Routing Transformer. Lower layers pool local information via sliding-window local attention (Sub-figure 3a) while upper layers gather global information for every token via clustering (Sub-figure 3b).
Figure 4: Scatter plot for generations from the p = 0.6 model between generative quality (ROUGE-L vs reference on X-axis) and grounding with retrieval (unigram overlap with retrieved documents on Y-axis). The plot shows no correlation between the two quantities.
Dataset & Evaluation details: We evaluate our model on the KILT validation & test subsets of ELI5 (Petroni et al., 2020), since the original ELI5 dataset does not have human annotations to measure retriever performance. We downloaded the ELI5 dataset (Fan et al., 2019) from the KILT Github repository. 5 This version of the dataset has 272,634 training examples, 1,507 validation examples and 600 test examples.

3 Our hyperparameters have been chosen manually with minimal tuning. See Appendix A.1 for details.
4 We tried training the retriever jointly with RT using the attention bias scheme proposed in MARGE (Lewis et al., 2020a). This improved perplexity only in autoencoding settings where the gold answer itself is used as a retrieval query (like the setup in Lewis et al., 2020a), which is not valid in LFQA.
5 github.com/facebookresearch/KILT
Model            RPr.   R@5    F1    R-L   KRL
T5-base           0.0    0.0  16.1  19.1   0.0
BART              0.0    0.0  19.2  20.6   0.0
RAG              11.0   22.9  14.5  14.1   1.7
BART + DPR       10.7   26.9  17.9  17.4   1.9
p = 0.9:
  RT + REALM      6.7   15.5  25.1  21.5   1.4
  RT + C-REALM   10.2   24.4  25.4  21.5   2.1
p = 0.6:
  RT + REALM      6.7   15.7  23.1  23.4   1.5
  RT + C-REALM   10.7   24.6  22.9  23.2   2.4

Table 1: Results on the KILT test set for ELI5 for (1) retrieval performance, using R-precision and Recall@5 (RPrec, R@5), and (2) generation quality, using ROUGE-L (R-L). These scores are combined to produce the final metric KILT R-L (KRL). We outperform prior work on both generation & combined scores.
Table 7: Upper (↑) and lower (↓) bounds to performance on ELI5. Lower bounds have been submitted to the public KILT leaderboard, as "Metrics Test".
Takeaway: Human evaluation is challenging but necessary for evaluating LFQA. Crowd-workers are unlikely to spend time reading & analyzing long text (Akoury et al., 2020). Hence, it is imperative to design simpler evaluations. One effort in this direction is Dugan et al. (2020), who reveal one generated sentence at a time and estimate system quality based on the number of sentences which fooled humans. Another promising direction is extrinsic evaluation (Celikyilmaz et al., 2020), where humans actually interact with systems in real-world scenarios such as the Alexa Prize (Ram et al., 2018) or STORIUM (Akoury et al., 2020).

Acknowledgments

We thank … Chang and Zora Tung for help with their codebase and several useful discussions which helped us improve our experiments. We are grateful to Tu Vu for help with the QQP classifier. We thank Jules Gagnon-Marchand and Sewon Min for suggesting useful experiments on checking ROUGE-L bounds. Finally, we thank Shufan Wang, Andrew Drozdov, Nader Akoury, Andrew McCallum, Rajarshi Das, and the rest of the UMass NLP group for helpful discussions and suggestions at various stages in the project. This work was primarily done during KK's internship at Google Brain, mentored by AR. MI and KK are supported by award IIS-1955567 from the National Science Foundation (NSF).
References

Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265-283.

Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, and Mohit Iyyer. 2020. STORIUM: A dataset and evaluation platform for machine-in-the-loop story generation. In Proceedings of Empirical Methods in Natural Language Processing.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems.

Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. arXiv preprint arXiv:2006.14799.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference of Machine Learning.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Conference of the North American Chapter of the Association for Computational Linguistics.

Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of Wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations.

Liam Dugan, Daphne Ippolito, Arun Kirubarajan, and Chris Callison-Burch. 2020. RoFT: A tool for evaluating human detection of machine-generated text. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Association for Computational Linguistics.

Esin Durmus, He He, and Mona Diab. 2020. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. In Proceedings of the Association for Computational Linguistics.

Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: Long form question answering. In Proceedings of the Association for Computational Linguistics.

Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378.

Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. 2020. Accelerating large-scale inference with anisotropic vector quantization. In Proceedings of the International Conference of Machine Learning.

Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Conference of the North American Chapter of the Association for Computational Linguistics.

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-augmented language model pre-training. In Proceedings of the International Conference of Machine Learning.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations.

Shankar Iyer, Nikhil Dandekar, and Kornél Csernai. 2017. First Quora dataset release: Question pairs.

Gautier Izacard and Edouard Grave. 2020. Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282.

Yacine Jernite. 2020. Explain anything like I'm five: A model for open domain long form question answering. https://yjernite.github.io/lfqa.html.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of Empirical Methods in Natural Language Processing.

Divyansh Kaushik and Zachary C. Lipton. 2018. How much reading does reading comprehension require? A critical investigation of popular benchmarks. In Proceedings of Empirical Methods in Natural Language Processing.

Stanley Kok and Chris Brockett. 2010. Hitting the right paraphrases in good time. In Conference of the North American Chapter of the Association for Computational Linguistics.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the International Conference on Learning Representations.

Hai Wang and David McAllester. 2020. On-the-fly information retrieval augmentation for language modeling.

Yadav et al. 2019. EvalAI: Towards better evaluation systems for AI agents. arXiv preprint arXiv:1902.03570.
els. In Proceedings of the First Joint Workshop
on Narrative Understanding, Storylines, and Events,
pages 114-119.
Sam Wiseman, Stuart M Shieber, and Alexander M
Rush. 2017. Challenges in data-to-document gener-
ation. In Proceedings of Empirical Methods in Nat-
ural Language Processing.
Thomas Wolf, Victor Sanh, Julien Chaumond, and
Clement Delangue. 2018. Transfertransfo: A trans-
fer learning approach for neural network based con-
versational agents. In NeurIPS CAI Workshop.
Deshraj Yadav, Rishabh Jain, Harsh Agrawal, Prithvijit Chattopadhyay, Taranjeet Singh, Akash Jain, Shiv Baran Singh, Stefan Lee, and Dhruv Batra. 2019. EvalAI: Towards better evaluation systems for AI agents. arXiv preprint arXiv:1902.03570.

Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In Advances in Neural Information Processing Systems, pages 9054-9065.
Yuhao Zhang, Derek Merck, Emily Bao Tsai, Christo-
pher D Manning, and Curtis P Langlotz. 2020. Op-
timizing the factual correctness of a summary: A
study of summarizing radiology reports. In Proceed-
ings of the Association for Computational Linguis-
tics.
Chunting Zhou, Jiatao Gu, Mona Diab, Paco Guz-
man, Luke Zettlemoyer, and Marjan Ghazvinine-
jad. 2020. Detecting hallucinated content in condi-
tional neural sequence generation. arXiv preprint
arXiv:2011.02593.
Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo,
Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texy-
gen: A benchmarking platform for text generation
models. In The 41st International ACM SIGIR Con-
ference on Research & Development in Information
Retrieval.
A Appendices for "Hurdles to Progress
in Long-form Question Answering"
A.1 Training & Model Details
All our models are developed and trained us-
ing TensorFlow 1.15 (Abadi et al., 2016) and
Tensor2Tensor (Vaswani et al., 2018). Our imple-
mentations are based on the open-source codebases
of REALM 19 and the Routing Transformer. 20
Table 10: The number of parameters used by our model and baselines. Our generator is slightly bigger than other submissions on the leaderboard, but we use a smaller retriever with a similar sized index.
Table 11: Effect of truncating generations (Truncate) from the p = 0.6 model to keep the first f fraction of tokens, and then repeating the truncated generations 1/f times to match the original length (Repeat ...). Notice a consistent increase in ROUGE-L with longer outputs, but a gap between the original generations (24.4) and equal-length generations formed by repeating truncations (Repeat 1/f times column).
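To make the Truncate / Repeat construction in Table 11 concrete, the sketch below shows one way to build such outputs and score them with ROUGE-L. The rouge_score package and whitespace tokenization are assumptions made for illustration, not necessarily the exact evaluation setup behind the reported numbers.

```python
# Minimal sketch of the Table 11 baseline: keep the first `f` fraction of
# tokens of a generation, then repeat that prefix 1/f times so the output
# length roughly matches the original generation.
from rouge_score import rouge_scorer  # assumed ROUGE implementation

def truncate_and_repeat(generation: str, f: float) -> str:
    tokens = generation.split()                      # simple whitespace tokens
    prefix = tokens[: max(1, int(len(tokens) * f))]  # first f fraction
    repeats = max(1, round(1.0 / f))                 # repeat 1/f times
    return " ".join(prefix * repeats)

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def rouge_l(reference: str, prediction: str) -> float:
    return scorer.score(reference, prediction)["rougeL"].fmeasure

# Example: compare a gold answer against repeated truncations of a generation.
gold = "contrails form when hot humid exhaust from jet engines mixes with cold air"
gen = "the trail behind jets is condensed water vapor that freezes at high altitude"
for f in (0.25, 0.5, 0.75, 1.0):
    print(f, rouge_l(gold, truncate_and_repeat(gen, f)))
```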
Table 14: A fine-grained version of Table 13, measuring the unigram overlap of nouns/numbers in the generations with the input question (vs qn.), retrievals predicted by C-REALM (vs predicted retr.) and randomly sampled retrievals (vs random retr.). Similar to Table 13, notice very little difference with and without retrieval.

Experiment 1: A comparison between nucleus sampling p values (0.6, 0.9), conditioning on predicted retrievals (pred.). Result: lower entropy is more relevant to the question, but higher entropy is more coherent and has less repetition.

Experiment 2: A comparison between generations conditioned on predicted (pred.) and random retrievals (rand.). Result: little difference in generation quality / coherence / relevance to the question, high amounts of tie.

Experiment 3: A comparison between generations conditioned on predicted retrievals (pred.) and the longest gold answer. Result: strong preference for gold answers over generations.

A              | B              | Question                                            | Prefer A  | Prefer B  | Tie
p = 0.6, pred. | p = 0.9, pred. | Which generation answers the question better?       | 41% (65)  | 30% (48)  | 29% (46)
               |                | Which answer is more coherent?                      | 27% (42)  | 50% (79)  | 23% (37)
               |                | Which ans. is more factually correct + sensical?    | 30% (47)  | 37% (58)  | 33% (52)
p = 0.6, pred. | p = 0.6, rand. | Which generation answers the question better?       | 40% (78)  | 33% (64)  | 27% (51)
               |                | Which answer is more coherent?**                    | 55% (12)  | 27% ( 6)  | 18% ( 4)
               |                | Which ans. is more factually correct + sensical?**  | 48% (10)  |  9% ( 2)  | 43% ( 9)
p = 0.9, pred. | p = 0.9, rand. | Which generation answers the question better?       | 31% (52)  | 37% (63)  | 32% (54)
               |                | Which answer is more coherent?                      | 32% (26)  | 36% (30)  | 32% (26)
               |                | Which ans. is more factually correct + sensical?    | 28% (23)  | 35% (29)  | 37% (30)
p = 0.6, pred. | gold answer    | Which generation answers the question better?       | 14% (29)  | 68% (138) | 18% (36)
               |                | Which answer is more coherent?                      |  7% ( 8)  | 71% ( 77) | 21% (23)
               |                | Which ans. is more factually correct + sensical?    |  2% ( 2)  | 76% ( 65) | 22% (19)
p = 0.9, pred. | gold answer    | Which generation answers the question better?       | 17% (49)  | 72% (203) | 11% (31)
               |                | Which answer is more coherent?                      | 13% (14)  | 61% ( 65) | 25% (27)
               |                | Which ans. is more factually correct + sensical?    |  6% ( 6)  | 72% ( 78) | 22% (24)

Table 15: Human evaluation experiments with the exact number of ratings shown in (·). Differences greater than 10% with more than 50 total samples have been bold marked. The experiments marked with ** have fewer than 50 samples, so it is difficult to draw meaningful conclusions.

Q: What causes the trail behind jets at high altitude?

Table 16: Example generations from our LFQA system with p = 0.9.
State-of-the-art as of April 3, 2021 - the "Google Research & UMass Amherst" team entry on https://evalai.cloudcv.org/web/challenges/challenge-page/689/leaderboard/1908
As in Holtzman et al. (2020), a human study reveals that higher entropy (p = 0.9) answers are slightly more coherent and sensible, but lower entropy answers (p = 0.6) are more relevant to the question (details in Appendix A.5).
Corresponding experiments with the p = 0.9 variant of our model are presented in Appendix A.7.
All these trends persist even on questions for which our retriever predicts the ground-truth document (Appendix A.7).

Details of our experimental setup in Appendix A.5.
Another issue of KILT-RL is ignoring non top-1 retrievals, penalizing models using multiple retrievals together in context.

The ELI5 demo from Jernite (2020) also retrieves the top-1 similar training set question. Qualitatively, we found many validation examples had near-identical train paraphrases.

We pay workers 4 cents per question pair ($8-12 / hr). We only hire workers from USA, UK and Australia with a 95% or higher approval rating and at least 1000 approved HITs.
Human A/B testing details in Appendix A.5.
Conclusion

We present a "retrieval augmented" generation system that achieves state-of-the-art performance on the ELI5 long-form question answering dataset. However, an in-depth analysis reveals several issues not only with our model, but also with the ELI5 dataset & evaluation metrics. We hope that the community works towards solving these issues so that we can climb the right hills and make meaningful progress on this important task.
https://www.gstatic.com/gumdrop/sustainability/google-2019-environmental-report.pdf
https://github.com/google-research/language/tree/master/language/realm
https://github.com/google-research/google-research/tree/master/routing_transformer

Attention Maps: We show the 2D plots of our generator's attention maps in Figure 3. (a) Local attention; (b) Routing attention.
Acknowledgements

First and foremost, we thank the twenty people who volunteered to help out with the human annotation experiments. We are very grateful to Vidhisha Balachandran, Niki Parmar, and Ashish Vaswani for weekly meetings discussing progress and the REALM team (Kenton Lee, Kelvin Guu, Ming-Wei Chang and Zora Tung) for help with their codebase and several useful discussions which helped us improve our experiments. We are grateful to Tu Vu for help with the QQP classifier. We thank Jules Gagnon-Marchand and Sewon Min for suggesting useful experiments on checking ROUGE-L bounds. Finally, we thank Shufan Wang, Andrew Drozdov, Nader Akoury, Andrew McCallum, Rajarshi Das, and the rest of the UMass NLP group for helpful discussions and suggestions at various stages in the project. This work was primarily done during KK's internship at Google Brain, mentored by AR. MI and KK are supported by award IIS-1955567 from the National Science Foundation (NSF).
"https://github.com/google-research/",
"https://github.com/google-research/"
] |
[
"STYLEPTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer",
"STYLEPTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer"
] | [
"Yiwei Lyu \nMachine Learning Department\nCarnegie Mellon University ♠ Language Technologies Institute\nCarnegie Mellon University\n\n",
"Paul Pu Liang pliang@cs.cmu.edu \nMachine Learning Department\nCarnegie Mellon University ♠ Language Technologies Institute\nCarnegie Mellon University\n\n",
"Hai Pham htpham@cs.cmu.edu \nMachine Learning Department\nCarnegie Mellon University ♠ Language Technologies Institute\nCarnegie Mellon University\n\n",
"Eduard Hovy \nMachine Learning Department\nCarnegie Mellon University ♠ Language Technologies Institute\nCarnegie Mellon University\n\n",
"Barnabás Póczos \nMachine Learning Department\nCarnegie Mellon University ♠ Language Technologies Institute\nCarnegie Mellon University\n\n",
"Ruslan Salakhutdinov \nMachine Learning Department\nCarnegie Mellon University ♠ Language Technologies Institute\nCarnegie Mellon University\n\n",
"Louis-Philippe Morency \nMachine Learning Department\nCarnegie Mellon University ♠ Language Technologies Institute\nCarnegie Mellon University\n\n"
] | [
"Machine Learning Department\nCarnegie Mellon University ♠ Language Technologies Institute\nCarnegie Mellon University\n",
"Machine Learning Department\nCarnegie Mellon University ♠ Language Technologies Institute\nCarnegie Mellon University\n",
"Machine Learning Department\nCarnegie Mellon University ♠ Language Technologies Institute\nCarnegie Mellon University\n",
"Machine Learning Department\nCarnegie Mellon University ♠ Language Technologies Institute\nCarnegie Mellon University\n",
"Machine Learning Department\nCarnegie Mellon University ♠ Language Technologies Institute\nCarnegie Mellon University\n",
"Machine Learning Department\nCarnegie Mellon University ♠ Language Technologies Institute\nCarnegie Mellon University\n",
"Machine Learning Department\nCarnegie Mellon University ♠ Language Technologies Institute\nCarnegie Mellon University\n"
] | [
"Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies"
] | Text style transfer aims to controllably generate text with targeted stylistic changes while maintaining core meaning from the source sentence constant. Many of the existing style transfer benchmarks primarily focus on individual high-level semantic changes (e.g. positive to negative), which enable controllability at a high level but do not offer fine-grained control involving sentence structure, emphasis, and content of the sentence. In this paper, we introduce a large-scale benchmark, STYLEPTB, with (1) paired sentences undergoing 21 fine-grained stylistic changes spanning atomic lexical, syntactic, semantic, and thematic transfers of text, as well as (2) compositions of multiple transfers which allow modeling of fine-grained stylistic changes as building blocks for more complex, high-level transfers. By benchmarking existing methods on STYLEPTB, we find that they struggle to model fine-grained changes and have an even more difficult time composing multiple styles. As a result, STYLEPTB brings novel challenges that we hope will encourage future research in controllable text style transfer, compositional models, and learning disentangled representations. Solving these challenges would present important steps towards controllable text generation. | 10.18653/v1/2021.naacl-main.171 | [
"https://www.aclweb.org/anthology/2021.naacl-main.171.pdf"
] | 233,210,062 | 2104.05196 | 72905000002f89941ec2b2190ff5007ce4396f70 |
STYLEPTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer
June 6-11, 2021
Yiwei Lyu
Machine Learning Department
Carnegie Mellon University ♠ Language Technologies Institute
Carnegie Mellon University
Paul Pu Liang pliang@cs.cmu.edu
Machine Learning Department
Carnegie Mellon University ♠ Language Technologies Institute
Carnegie Mellon University
Hai Pham htpham@cs.cmu.edu
Machine Learning Department
Carnegie Mellon University ♠ Language Technologies Institute
Carnegie Mellon University
Eduard Hovy
Machine Learning Department
Carnegie Mellon University ♠ Language Technologies Institute
Carnegie Mellon University
Barnabás Póczos
Machine Learning Department
Carnegie Mellon University ♠ Language Technologies Institute
Carnegie Mellon University
Ruslan Salakhutdinov
Machine Learning Department
Carnegie Mellon University ♠ Language Technologies Institute
Carnegie Mellon University
Louis-Philippe Morency
Machine Learning Department
Carnegie Mellon University ♠ Language Technologies Institute
Carnegie Mellon University
STYLEPTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesJune 6-11, 20212116
Text style transfer aims to controllably generate text with targeted stylistic changes while maintaining core meaning from the source sentence constant. Many of the existing style transfer benchmarks primarily focus on individual high-level semantic changes (e.g. positive to negative), which enable controllability at a high level but do not offer fine-grained control involving sentence structure, emphasis, and content of the sentence. In this paper, we introduce a large-scale benchmark, STYLEPTB, with (1) paired sentences undergoing 21 fine-grained stylistic changes spanning atomic lexical, syntactic, semantic, and thematic transfers of text, as well as (2) compositions of multiple transfers which allow modeling of fine-grained stylistic changes as building blocks for more complex, high-level transfers. By benchmarking existing methods on STYLEPTB, we find that they struggle to model fine-grained changes and have an even more difficult time composing multiple styles. As a result, STYLEPTB brings novel challenges that we hope will encourage future research in controllable text style transfer, compositional models, and learning disentangled representations. Solving these challenges would present important steps towards controllable text generation.
Introduction
At the heart of interactive AI systems lies the element of communication as a channel to convey intentions using different stylistic attributes. Research in human-AI interaction has focused on building dialog systems (Celikyilmaz et al., 2018), virtual assistants (Cooper et al., 2004), and intelligent agents (Kim et al., 2013; Liang et al., 2020a; Pittermann et al., 2010) that can communicate their intentions with specific styles for different situations, target audiences, and environments (Lample et al., 2019; Li et al., 2018). For example, expressing the same facts using either formal or informal styles can be more suitable for certain target audiences (Rao and Tetreault, 2018). What is a style in natural languages? Existing style transfer benchmarks primarily focus on individual high-level stylistic changes across sentiment (Shen et al., 2017), formality (Rao and Tetreault, 2018), politeness (Madaan et al., 2020), and writing styles (Jhamtani et al., 2017). Figure 1 provides some motivating examples to show that the high-level style transfers as commonly studied in existing benchmarks (e.g. Yelp for sentiment (Shen et al., 2017) and GYAFC for formality (Rao and Tetreault, 2018)) can in fact be seen as composed from a dictionary of fine-grained style constructs. This alternative way of studying styles brings additional flexibility that enables fine-grained control with the possibility to compose a broader space of styles spanning tense, sentence structure, phrase emphasis, and information contained in the sentence. However, the missing link is a benchmark dataset that offers this type of fine-grained style constructs, with the controllability to compose these stylistic transfers.

* authors contributed equally

Figure 1 examples:
The bad service of the waitresses make me dread going sometimes.
The good service of the waitresses makes me dread going sometimes.
The good service of the waitresses makes me enjoy going sometimes.
I left three messages without a call back.
I left three messages.
I left three thankful messages.
After 3 months they can't be too new now.
After 3 months they can't be too new.

Figure 1: STYLEPTB provides a large-scale resource to study fine-grained compositional style transfer. The styles provided in STYLEPTB (in green) span lexical, syntax, semantic, and thematic aspects (DiMarco and Hirst, 1993) which can be composed to form high-level style transfers as commonly studied in existing benchmarks (e.g. Yelp for sentiment (Shen et al., 2017) and GYAFC for formality (Rao and Tetreault, 2018)).
To fill this gap, we leverage research in linguistics to study formulations of styles across 4 representational categories: lexical, syntax, semantics, and thematics, that span the fundamental atomic transfers that text can undergo (McDonald and Pustejovsky, 1985; DiMarco and Hirst, 1993). Using these insights, we introduce a large-scale benchmark with (1) paired sentences undergoing 21 fine-grained stylistic changes spanning the most atomic lexical, syntactic, semantic, and thematic style constructs, as well as (2) compositions of multiple transfers which model how fine-grained style constructs compose to form more complex, high-level transfers. Our dataset, called STYLEPTB, builds upon Penn Treebank (Marcus et al., 1993) by annotating each sentence undergoing these fine-grained style constructs, resulting in a large-scale resource spanning 59,767 sentence pairs across 21 individual styles and an additional 35,887 sentence pairs across 32 compositions of multiple styles.
STYLEPTB allows us to study the performance of state-of-the-art style transfer models when faced with the new challenge of fine-grained style transfer. It is interesting to observe that these models, while capable of performing high-level semantic changes, struggle with fine-grained changes, particularly in the syntactic and thematic domains. A second analysis in this paper is to see how these models can handle compositions of multiple style constructs as a step towards controllable high-level style transfer. However, we find that current models have an even more difficult time composing multiple styles. As a step towards this desiderata, we also propose an approach (CS-GPT) based on pre-trained language models (Radford et al., 2019) that achieves compositional style transfer. We believe that STYLEPTB will bring novel challenges that we hope will encourage research in controllable generation, compositionality of styles, and learning disentangled representations (John et al., 2019). From a broader perspective, we conclude with the observation that controllable style transfer models trained on STYLEPTB can help mitigate social biases in pre-trained language models.
Related Work
Several lines of research have aimed to formalize styles in natural languages through computational and linguistic perspectives (DiMarco and Hirst, 1993). The first systematic formulation of styles was by McDonald and Pustejovsky (1985) and later extended by DiMarco and Hirst (1993) to 4 representational categories including lexical, syntax, thematic, and semantic aspects. Following this, there has been some early efforts applying stylistic analysis into dialog generation (Hovy, 1987), machine translation (DiMarco, 1994), and text generation (Gatt and Krahmer, 2018). We take advantage of this prior work when formalizing our new STYLEPTB dataset.
Current benchmarks for style transfer focus on high-level style definitions such as transfer of sentiment (Shen et al., 2017; Lample et al., 2019; Li et al., 2018; Wu et al., 2019), politeness (Madaan et al., 2020), formality (Rao and Tetreault, 2018; Liu et al., 2020; Krishna et al., 2020), writing styles (Jhamtani et al., 2017; Syed et al., 2020; Jin et al., 2020) and some other styles (Kang and Hovy, 2019). However, these focus on only high-level styles, unlike STYLEPTB.
Computational models for style transfer span statistical NLP methods (Hovy, 1987;Xu et al., 2012), neural generative models (Prabhumoye et al., 2018;Lample et al., 2019;He et al., 2020), and Retrieve-and-Edit approaches (Li et al., 2018;Hashimoto et al., 2018;Guu et al., 2018;Sudhakar et al., 2019;Madaan et al., 2020). These approaches work for a predefined set of styles but are unable to generalize to compositions of styles.
Evaluating style transfer is difficult due to the diversity of plausible transferred sentences. In addition to automatic scores such as BLEU, perplexity, or binary classification accuracy of style transfer (Hu et al., 2017;Lample et al., 2019;He et al., 2020), other automatic metrics (Fu et al., 2018;Mir et al., 2019) and human evaluation are also commonly used (Li et al., 2018;Shen et al., 2017).
Fine-Grained Style Constructs
As a step towards enabling fine-grained control with the possibility to compose a broader space of styles, we first define style constructs at fine-grained levels spanning lexical, syntactic, semantic, and thematic aspects. When selecting these style constructs, we have 2 goals in mind: (1) they should be representative of the four aspects (lexical, syntactic, semantic, thematic) following the formal categorizations in DiMarco and Hirst (1993), and (2) the transfers should be consistent (i.e. well-defined such that if multiple annotators are asked to modify the same sentence, the results will be similar). With these goals in mind, we summarize the following 21 chosen fine-grained style constructs spanning 4 categories and also provide detailed examples in Table 1.

LEXICAL
Noun antonym replacement | Investors will develop thicker skins and their confidence will return he says. | Investors will develop thicker skins and their diffidence will return he says.
Verb synonym replacement | The meeting is expected to call for heightened austerity for two years. | The meeting is anticipated to call for heightened austerity for two years.
Verb antonym replacement | He noted that higher gasoline price will help buoy the October totals. | He ignored that higher gasoline prices will help buoy the October totals.
ADJ synonym replacement | Most other states have enacted similar bans. | Most other states have enacted alike bans.
ADJ antonym replacement | It is also planning another night of original series. | It is also planning another night of unoriginal series.
Most frequent synonym replacement | Republicans countered that long-range revenue estimates were unreliable. | Republicans countered that long-range revenue judges were unreliable.
Least frequent synonym replacement | Merrill Lynch Capital Markets Inc. is the sole underwriter for the offering. | Merrill Lynch Capital Markets Inc. is the sole investment-banker for the oblation.

SYNTAX
To future tense | It is also planning another night of original series. | It will be also planning another night of original series.
To present tense | Sen. Mitchell urged them to desist. | Sen. Mitchell urges them to desist.
To past tense | It is also planning another night of original series. | It was also planning another night of original series.
Active to passive | He also received 20-year sentences for each of the 24 passengers injured. | 20-year sentences also were received by him for each of the 24 passengers injured.
Passive to active | Most bills are drafted by bureaucrats not politicians. | Bureaucrats not politicians draft most bills.
PP front to back | In Indianapolis Lilly declined comment. | Lilly declined comment in Indianapolis.
PP back to front | The dollar has been strong unlike 1987. | Unlike 1987 the dollar has been strong.

SEMANTICS
ADJ or ADV removal | The controls on cooperatives appeared relatively liberal when first introduced. | The controls on cooperatives appeared liberal when introduced.
PP removal | The controls on cooperatives appeared relatively liberal when first introduced. | The controls appeared relatively liberal when first introduced.
Substatement removal | The controls on cooperatives appeared relatively liberal when first introduced. | The controls on cooperatives appeared relatively liberal.
Information addition | He reports his business is up slightly from customers replacing old stock. + ['customer', 'waiting to buy', 'seafood'] | He reports his business is up slightly from customers waiting to buy seafood and replacing old stock.

THEMATICS
Verb/Action emphasis | He intends to add to the litigation staff. + add | Adding to the litigation staff is what he intends to do.
Adjective emphasis | The comparable year-earlier number was 56 million a spokesman said. + comparable | A spokesman said the year-earlier number of 56 million was comparable.

Table 1: Examples of each of the 21 defined style constructs across lexical, syntactic, semantic, and thematic aspects found in STYLEPTB. The original phrase is in cyan and the corresponding target phrase is in magenta. Note that some thematic and semantic transfers require additional information, highlighted in red.

Lexical transfers are those at fine-grained lexicon levels (i.e. vocabulary or words) that include word constitutions (Heine et al., 2002) and word meaning (Cruse et al., 1986). As a starting point, we selected two types of lexical transfers: synonym/antonym replacements (6 transfers that replace nouns/verbs/adjectives with their synonyms/antonyms), and frequency-based replacements (2 transfers that replace words with their most/least appeared synonyms). The synonym/antonym resources are taken from WordNet (Fellbaum, 2012).
Syntax transfers modify the underlying grammatical rules that govern the structure of sentences (Chomsky, 2002) without affecting the content (Akmajian and Heny, 1980). We selected three simple syntax transfers: tense changes (3 transfers: to past/present/future tense), voice changes (2 transfers: active to/from passive), and proposition position changes (2 transfers: front to/from back).

Semantic transfers are changes to the meaning of sentences (Bagha, 2011) that not only extend beyond lexical (Cruse et al., 1986) and syntax-level (Kratzer and Heim, 1998) changes, but also include modifications using indirect information such as referring (Strawson, 1950), situations (Barwise and Perry, 1981) or intentions and extensions (Allwood et al., 1977). As a starting point, we defined two simple types of semantic transfers: (1) Info removal: 3 transfers on different deletions: word-level (removing adjectives and adverbs), phrase level (removing propositions), and substatement level (removing entire substatements) that represent referring and situations, as well as (2) Info addition: 1 transformation that adds a given piece of information regarding a particular phrase in the current sentence, representing extension.
Thematic transfers concern the placing of emphasis across different parts in a sentence (Stevenson et al., 1994) to highlight different aspects of the same event (DiMarco, 1994). We defined two emphatic transfers across adjectives and verbs (actions). As an example of adjective emphasis, "the hot meat is on the table" emphasizes location, while "the meat on the table is hot" emphasizes the hot temperature. To enforce consistency across annotators, we require adjective emphasis to rewrite the sentence into a be-statement of the emphasized adjective (as in the example above).
Analysis: To evaluate how useful these 21 selected atomic transfers are, we randomly sampled 50 sentence pairs from GYAFC and 50 sentences from Yelp with their reference transfer generated by the Deep Latent Sequence Model (He et al., 2020) and manually tried to complete the transfers by composing one or more of the 21 atomic transfers we have defined, together with capitalization fixes and word-spelling fixes. We found that 72% of transfers from GYAFC and 82% of transfers from Yelp can be done this way. Specifically, in GYAFC, 24% require one atomic transfer and another 48% require composing multiple atomic transfers; in Yelp, 52% require one or fewer atomic transfers and another 30% require composing multiple atomic transfers. The results of this analysis suggest that STYLEPTB's dictionary of atomic styles is already a good start in studying compositional style transfer: STYLEPTB's atomic transfers and their compositions do indeed span a large percentage of current high-level style transfers.
The STYLEPTB Dataset
Using these selected 21 style constructs, we now illustrate the steps towards collecting and annotating parallel sentences across style transfers.
Dataset Preprocessing
We use Penn Treebank (PTB) (Marcus et al., 1993) as our source of sentences. Additionally, the availability of parse trees in PTB allows us to automate the majority of syntactic transfers using rule-based methods. We begin with a total of 43, 948 sentences in the full PTB before removing sentences that are incomplete, too long (over 12 words), or too short (less than 5 words). This leaves 7, 719 sentences (see Figure 2 for statistics and Appendix A.1 for full details).
Generating transferred sentences
We give a brief overview of the data annotation process (see Appendix A.3 for full details).
Automated rule-based transfers: For 18 of the 21 transfers (lexical, syntax, and semantic transfers except Info Addition), we defined rule-based transfers using NLTK (Loper and Bird, 2002), parse trees (syntax, semantics), and WordNet (lexical). After human quality control, the total number of sentences transferred is listed in Table 2 (see Appendix A.2 for more details on automated generation and Appendix A.4 for human evaluation on quality of generated sentences)
Transfers with human annotations: For the remaining 3 transfers, we have human annotators (via Amazon Mechanical Turk) manually rewrite them due to the difficulty of automating the process. See Appendix A.3 for details on the data generation, human annotation and quality assurance process for each of the three transfers. After annotations and quality control, we obtained 696 rewritten sentences for adjective emphasis, 1201 rewritten sentences for verb emphasis, and 2114 valid sentence-information pairs with their transferred sentence with information added.
Relative Difficulty of Transfers
Lexical transfers can be done by replacing individual words and are simple to evaluate. To evaluate the difficulty of the remaining 13 syntax, semantic, and thematic transfers, we calculated the token-level (i.e. word level) Hamming distance between original and transferred sentences. Using this metric, we categorized these 13 transfers into easy, medium and hard categories (see Table 3). We also evaluated semantic measures from BERT embeddings (Devlin et al., 2018) but found them less correlated with human judgment (see Appendix A.5).

Figure 3: Example of generating sentence pairs that compose tense and voice changes. Starting from an original sentence (green box), we sequentially apply parse tree transfers (blue arrows) to obtain multiple transferred sentences (yellow box), yielding multiple parallel pairs (yellow arrows). We use transfer tokens (∆1, ∆2) to track changes (see Section 5 for details). Figure labels: No Voice Change (0), Active To Passive (1), Passive To Active (2).
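For illustration, a word-level Hamming distance of this kind can be computed as in the sketch below; padding the shorter sentence so that length differences count as mismatches is an assumption made here, since the exact handling of unequal lengths is not specified.

```python
# Word-level Hamming distance between an original and a transferred sentence:
# count positions whose tokens differ, padding the shorter sentence so that
# length differences also count as mismatches.
from itertools import zip_longest

def token_hamming_distance(original: str, transferred: str) -> int:
    src, tgt = original.split(), transferred.split()
    return sum(a != b for a, b in zip_longest(src, tgt, fillvalue="<pad>"))

pairs = [
    ("sen. mitchell urged them to desist", "sen. mitchell urges them to desist"),
    ("in indianapolis lilly declined comment", "lilly declined comment in indianapolis"),
]
for src, tgt in pairs:
    print(token_hamming_distance(src, tgt))  # small for tense change, large for PP move
```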
Compositional Transfers
To allow for compositionality, we also generated compositional data that includes parallel pairs of sentences linked by multiple sequential transfers.
To compose automatic transfers, we applied a sequence of rule-based transfers starting with parse trees (see Table 4). To compose transfers that involve human annotations, we apply a sequence of "reverse" changes on the original sentences with parse trees (since human rewritten sentences no longer have parse trees), before chaining the sequence of automatic reverse transfers with the final human-annotated transfer (see Figure 3).
A Model for Compositional Transfer
We extend the pre-trained GPT2 language model (Radford et al., 2019) for parallel style transfer by giving it designated style transfer tokens as input in addition to the source sentence. For example, for each individual binary style s i , we define a style transfer token ∆ i ∈ {0, 1, 2} where ∆ i = 0 represents keeping s i unchanged, ∆ i = 1 represents a change from s i = 0 to s i = 1, and vice versa for ∆ i = 2. We likewise extend the definition of ∆ i for styles taking more than 2 values. Given a parallel (source, target) pair (s, t), we define the appropriate transfer token ∆ ∈ {0, 1, 2} and train using maximum likelihood estimation to predict every word t j , for j = 1, 2, . . . , T , in the target sentence given the source and ∆:
$$\theta^{*} = \arg\max_{\theta} \; \mathbb{E}_{(s,t)\sim D}\left[\sum_{j=1}^{T} \log p_{\theta}(t_j \,;\, s, \Delta)\right] \quad (1)$$
where θ denotes the pre-trained GPT2 parameters and θ * denotes the parameters after fine-tuning on STYLEPTB. Note that we also train the model to reconstruct the same source sentence again when setting ∆ = 0 (no style change), which we found to help bridge the domain shift between data used to pre-train GPT2 and sentences in STYLEPTB.
As a step towards compositionality, we also train with (source, target) pairs that undergo multiple atomic style transfers as provided in STYLEPTB, resulting in multiple style transfer tokens ∆ i being activated at the same time. We call the resulting model CS-GPT (Compositional Style GPT) and show its architecture in Figure 4. Learning separate representations for each ∆ i results in disentangled style variables that can then be composed as desired. Another benefit of using disentangled style variables is the ability of a single model in performing multiple style transfers.
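As a rough illustration of how the transfer tokens in Eq. (1) can condition a pre-trained language model, the sketch below uses the HuggingFace transformers API. The special-token names, the input format (transfer tokens + source + separator + target), and the loss masking are illustrative assumptions rather than the authors' exact implementation.

```python
# Sketch of transfer-token conditioning for CS-GPT-style training (Eq. 1):
# the source sentence is prefixed with one token per style variable (Delta_i),
# and the LM loss is computed only on the target tokens.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
# One special token per (style, value) pair, e.g. tense Delta in {0,1,2,3}.
delta_tokens = [f"<tense_{i}>" for i in range(4)] + [f"<voice_{i}>" for i in range(3)]
tokenizer.add_special_tokens({"additional_special_tokens": delta_tokens + ["<sep>"]})

model = GPT2LMHeadModel.from_pretrained("gpt2")
model.resize_token_embeddings(len(tokenizer))

def make_example(source, target, tense_delta, voice_delta):
    prefix = f"<tense_{tense_delta}> <voice_{voice_delta}> {source} <sep> "
    prefix_ids = tokenizer(prefix)["input_ids"]
    target_ids = tokenizer(target + tokenizer.eos_token)["input_ids"]
    input_ids = torch.tensor([prefix_ids + target_ids])
    # Mask out the prefix so only target tokens contribute to the MLE loss.
    labels = torch.tensor([[-100] * len(prefix_ids) + target_ids])
    return input_ids, labels

input_ids, labels = make_example(
    "most bills are drafted by bureaucrats not politicians",
    "bureaucrats not politicians draft most bills",
    tense_delta=0, voice_delta=2)
loss = model(input_ids=input_ids, labels=labels).loss  # one training example's loss
loss.backward()
```

At inference time one would prepend the desired transfer tokens and source sentence and decode the continuation; composing styles then amounts to setting several ∆ tokens to non-zero values at once.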
Datasets and Metrics
We use STYLEPTB and evaluate on the 13 non-lexical transfers (since lexical changes work best with fixed word substitutions). Please refer to Appendix B.1 for dataset preprocessing details. Automated evaluation metrics consist of BLEU, METEOR, ROUGE-L, and CIDEr scores between generated and ground truth sentences (Sharma et al., 2017). In addition, we conducted human evaluations on random sets of 10 samples generated by each model for each transfer. We followed prior work (He et al., 2020) and had 2 independent annotators each rate transferred sentences on three aspects (clarity/grammar, content preservation, style change) on a 1-5 Likert scale, and took the average.
Baseline Models
We evaluate the following baselines commonly used in style transfer. Since none of these existing models handle compositions of styles, we train separate models on each of the 13 transfers.
1) GPT2: We fine-tune pre-trained GPT2 (Radford et al., 2019) on each transfer with the source as input and predicting the target using MLE, similar to Liu et al. (2020); Syed et al. (2020).
2) SEQ2SEQ: A Seq2Seq model (Sutskever et al., 2014) with attention trained using MLE (Zhou et al., 2020;Jin et al., 2020).
3) RETRIEVEEDIT: Given input x, a retriever is trained to pick a similar training example (x ′ , y ′ ). We treat y ′ as our prototype and use a trained editor to edit it into desired output y (Guu et al., 2018;Madaan et al., 2020).
4) HUMAN:
We also report human performance for each style transfer by having two independent human annotators manually perform the style transfer on 20 sampled sentences.
Results and Observations
We evaluate these 3 baseline models on the style transfers in STYLEPTB and show results in Table 5. We make the following observations:
Baseline comparisons: RETRIEVEEDIT performed equally well compared to GPT2 in some transfers such as To Future Tense and performs significantly better than GPT2 in most transfers. When qualitatively observing the generated sentences, we found that while GPT2 can learn syntactic and semantic transfers, it suffers in reconstructing the rest of the sentence (e.g. making word repetitions). This was not an issue for RETRIEVEEDIT since it works by editing the sentence from the prototype. Both GPT2 and RETRIEVEEDIT significantly outperform SEQ2SEQ models on all 13 non-lexical transfers.

Table 6: Human evaluation of style transfer models trained on the Verb Emphasis task. All approaches fall far short of human performance, which was judged by a separate human as having almost perfect clarity, content, and style metrics. GPT2 gets higher style scores while RETRIEVEEDIT excels at grammar and content preservation.
Difficulties of transfers:
We also compare the relative difficulty of transfers based on the automatic metrics described in Section 4.3. In line with our Hamming distance metric, we found that thematic transfers are especially difficult -all three baselines struggled on this task, which is intuitive because shifting emphasis requires completely different sentence structure changes on sentences and emphasized words. We found that GPT2 and SEQ2SEQ tend to struggle with grammar and word repetitions, while RETRIEVEEDIT sometimes follows the structural edits in the chosen (and often completely unfitting) examples, resulting in malformed outputs (see examples in Appendix C.1). All current methods significantly fall short of human performance especially on hard transfers. Therefore, we believe that STYLEPTB brings novel challenges that will spark future research in modeling fine-grained style changes.
Human evaluation: We sampled 10 transferred sentences from each automatic generations models for each transfer and asked 2 independent annotators to rate them. We show average results below for one of the hard transfers (Verb Emphasis). From Table 6, we found that all approaches fall far short of human performance, which was judged by a separate human as having almost perfect clarity, content, and style metrics. Furthermore, GPT2 gets higher style scores while RETRIEVEEDIT excels at grammar and content preservation, which further supports our qualitative observations above. Full results for human evaluations are available in Table 17 in Appendix C.1.
Towards Compositionality of Styles
As a step towards learning compositional transfers, we implemented the following baselines: 1. GPT2: Sequentially applying the GPT2 model trained for single transfers multiple times to perform compositional transfers.
2. CS-GPT: Our proposed CS-GPT model (detailed in Section 5) trained on compositional transfer pairs found in STYLEPTB.
3. CS-GPT-ZERO: An ablation of CS-GPT trained only on individual style changes but tested in a zero-shot setting on compositional transfers.
We evaluated these models on two compositional transfers: Tense+Voice (composing tense changes and active/passive voice changes), and Tense+PP Removal (composing tense changes and PP Removal). We conveniently used the numerical prefixes in the datasets as transfer tokens. The results are shown in Table 7 and we make the following observations:
CS-GPT works best for compositional transfers: CS-GPT significantly outperforms existing methods for compositional style transfer. This is expected, as CS-GPT is trained on the full compositional dataset, while CS-GPT-ZERO is only trained on part of the compositional data and SE-QGPT is trained on single-transfer parallel data. Qualitatively, we observed that CS-GPT is able to perform each required transfer at the same time, producing outputs with relatively low reconstruction error compared to the other two methods. We included a few samples generated by the three models in Table 9 with more examples in Appendix C.2.
Zero-shot compositionality remains challenging: We included CS-GPT-ZERO to explore whether CS-GPT can learn to compose transfers in a zero-shot manner. While CS-GPT outperforms CS-GPT-ZERO and existing models, all still struggle to perform zero-shot compositions. We noticed that CS-GPT-ZERO usually only performs one of the necessary transfers: e.g. in a Tense+Voice task, CS-GPT-ZERO tends to only make the tense change, not the voice change. Quantitatively, in the Tense+PP Removal dataset, CS-GPT-ZERO performs much worse than either CS-GPT or sequentially applying GPT2; in Tense+Voice dataset, CS-GPT-ZERO is similar to GPT2. We believe that sequentially applying GPT2 accumulates errors present in each one.
Training on compositional styles may improve fine-grained styles: We observe that CS-GPT trained on compositional data can achieve similar performance as compared to GPT2 trained specifically for that one single transfer. Sometimes CS-GPT even outperforms GPT2, as shown in Table 8 (see Table 20 in Appendix for full results). Therefore, CS-GPT has leveraged compositional structure and data to perform strongly on multiple single and compositional transfers with just one model.
Broader Impact: Mitigating Biases
Unconditional language models have been shown to perpetuate undesirable stereotypes during generation which disproportionately harm underrepresented social groups (Liang et al., 2020b; Ravfogel et al., 2020; Sheng et al., 2020). As one possible application of fine-grained style transfer (in addition to many others), we hypothesize that more fine-grained control over the generated outputs can help mitigate such biases.

Transfer        | To Future + Passive To Active                              | To Past + PP Removal
Source Sentence | NUM % was risen by sales to NUM billion from NUM billion.  | the bond market was unmoved by the economic statistics.
CS-GPT          | sales will rise NUM % to NUM billion from NUM billion.     | the bond market is unmoved.

Another technique that may be useful is to replace the key nouns with vague pronouns:

GPT2: The Hispanic man said that he was walking his dog when the suspect yelled.
Replace nouns: The hispanic man said that he was walking his dog when they yelled.
2. Semantic changes can also help by removing biased information from generated text using information removal. In the following two examples, the information removal is done by GPT2:
GPT2: The woman walks home from the theatre crying . Info removal: The woman walks home from the theatre. GPT2: The Asian man drove a black BMW to the station to meet with an officer who would be taking him to a jail . Info removal: The Asian man drove a black BMW to meet with an officer.
We can also replace the removed part with uniformly sampled information with Info Addition. Since existing baselines currently struggle, we present a manual example as proof of concept:
GPT2: The woman worked as a medical assistant in a health center. Info removal: The woman worked in a health center. Info addition with "worked as doctor": The woman worked as a doctor in a health center.
Finally, we performed a qualitative evaluation: we sampled 49 sentence completions using prompts from Sheng et al. (2019) that focus on stereotypical associations between gender and occupations (e.g. The woman worked as a babysitter.). We obtained post-processed versions using Info removal followed by Info addition with uniformly sampled new occupations. When presented to two independent human annotators, they judged 22 of the 49 sentences as showing significantly lower bias, with the remaining showing little or no bias change, indicating that fine-grained style transfer presents a new perspective to mitigating social biases in language models (see Appendix D for evaluation details).
Conclusion
In this paper, we propose a large-scale benchmark, STYLEPTB, for fine-grained style transfer spanning atomic lexical, syntactic, semantic, and thematic changes as well as their compositions into high-level transfers. We show that STYLEPTB provides an important step towards training more controllable text generators and removing social biases from generated text. However, existing style transfer models struggle to perform fine-grained changes and have an even more difficult time composing multiple styles. As a result, STYLEPTB brings novel challenges that we hope will inspire future research in controllable text generation, compositional models, and style disentanglement.
Appendix
A Dataset Construction
Here we provide more details on dataset pre-processing, annotation, quality control, post-processing, and statistics.
A.1 Dataset Preprocessing
We use parts of Penn Tree Bank (PTB) that have been used in training neural language models (Kim et al., 2015) as the source of sentences to transfer. The availability of parse trees of these sentences allows us to automate the majority of transfers using rule-based python scripts. We begin with a total of 43, 948 sentences in full PTB before removing sentences that are incomplete, too long (over 12 words), or too short (less than 5 words). This leaves 7, 719 sentences (see Figure 2 for statistics).
Note that the original sentences in this version of the tree bank have all punctuation removed, and have the "n't" shorthand as separate words (for example, "wasn't" is represented as two words "was n't"). The transferred sentence we generated or collected in this new dataset will follow the same format.
A.2 Programmatic Transfers
For 18 of 21 transfers (including all lexical and syntax transfers, as well as all semantic transfers except Info Addition), we wrote Python scripts that utilize the parse trees of the sentences to complete the transfers. For the lexical transfers, synonyms/antonyms are extracted from WordNet (Fellbaum, 2012). For syntax transfers and information deletion transfers, we used NLTK tree editing tools and lemmatizers to manipulate parse trees to transfer sentences. Since not all transfers are applicable to each sentence (for example, synonym replacements cannot be done to a sentence with no synonyms found for any of its words, and Proposition front/back changes do not apply to sentences without propositions in the front or back). The total number of sentences transferred by our scripts is listed in Table 2.
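For illustration, the WordNet lookups behind the lexical transfers could look like the sketch below; the POS handling and the way candidates are collected are simplifying assumptions rather than the exact rules used by the generation scripts.

```python
# Sketch of WordNet-based synonym/antonym lookup for the lexical transfers.
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def synonyms(word, pos=wn.NOUN):
    lemmas = {l.name().replace("_", " ")
              for s in wn.synsets(word, pos=pos) for l in s.lemmas()}
    lemmas.discard(word)
    return sorted(lemmas)

def antonyms(word, pos=wn.ADJ):
    return sorted({a.name().replace("_", " ")
                   for s in wn.synsets(word, pos=pos)
                   for l in s.lemmas() for a in l.antonyms()})

print(synonyms("estimate"))   # candidate noun synonym replacements
print(antonyms("original"))   # candidate ADJ antonym replacements
```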
Although we found that the data collected for two syntax transfers, Passive To Active and Proposition Back To Front are extremely low in quantity, this shouldn't be a problem in training models for these transfers because the reverse transfers of these two are also part of the dataset with much larger quantities, and we can simply swap the original/transferred sentences of the reverse transfers to get as much data for these two transfers as other ones.
A.3 Annotation Details
For the three remaining transfers, we asked human annotators manually to rewrite them due to the difficulty of automating the processes. Due to limited resources, we randomly selected 2, 000 of the 7, 719 selected sentences as original sentences for these three transfers.
We utilized Amazon Mechanical Turk (AMT) to get annotators. For each task, we designed a prompt with very detailed instructions and plenty of examples to ensure consistency of rewritten sentences. In addition, we tested them by releasing small batches of tasks and see if the annotations are satisfactory. When the main batch of tasks is released, we also inspect random samples of rewritten sentences of each worker to ensure quality and we reject ones from the workers who do not follow our consistency requirements. We also told workers to make sure the sentences they produce are grammatically correct and free of spelling mistakes and rejected sampled rewritten sentences that have grammatical or spelling errors.
For Info Addition transfers, we used Visual Genome Dataset (Krishna et al., 2016) as the knowledge base for additional information. We first made a dictionary mapping each word to attributes and relations in Visual Genome that contains the word, ordered by frequency of appearance in Visual Genome, and then for each noun in the sentence, we select the most frequent attribute and relation from Visual Genome that contain the noun (if any) as additional information to be added to the sentence. Therefore, multiple sentence-information pairs may be created from the same original sentence. We ended up with 4, 412 total pairs to be annotated. Since the information added may be unfitting or even contradictory in the context of the sentence (such as information "milk in stock" in a sentence about stock markets), we asked workers to evaluate whether their rewritten sentences satisfies common sense, and we discard rewritten sentences that are marked as not fitting common sense. We ended up with 2, 117 rewritten sentences that are marked as satisfying common sense.
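For illustration, the word-to-attribute dictionary described above could be built roughly as follows; the flattened (object, attribute) input format is an assumption here, since the actual Visual Genome JSON schema and the authors' exact preprocessing are not shown.

```python
# Sketch: map each noun to its most frequent Visual Genome attribute phrase,
# assuming annotations have already been flattened into (object, attribute) pairs.
from collections import defaultdict, Counter

def build_attribute_index(pairs):
    index = defaultdict(Counter)
    for obj, attribute in pairs:
        for word in obj.lower().split():
            index[word][attribute] += 1
    return index

pairs = [("customer", "waiting to buy seafood"),
         ("customer", "holding a bag"),
         ("customer", "waiting to buy seafood")]
index = build_attribute_index(pairs)
print(index["customer"].most_common(1))  # most frequent attribute for "customer"
```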
The web page used for Information Addition task is shown in Figure 5, and the instructions for this task (which pops up when "view instructions" on the prompt page is clicked) is shown in Figure 6, together with lots of detailed examples in the example tab next to it.
For adjective emphasis and verb emphasis tasks, we use information from the parse trees to identify adjectives and verbs to be emphasized, and we filter out words that shouldn't be emphasized (such as "'be" for verb emphasis). To ensure consistency, the workers are instructed to strictly follow the required format for each emphasis task. If an emphasis rewrite with the required format is impossible or if the original sentence is already emphasizing the word in the required format, the workers are asked to submit "N/A", and we discard these cases from our dataset. We started with 808 adjective emphasis tasks and 1, 373 verb emphasis tasks, and after discarding "N/A" results we still have 696 rewritten sentences for adjective emphasis task and 1201 rewritten sentences for verb emphasis task.
The web pages for the two emphasis tasks are shown in Figure 7 and Figure 9, respectively, and the instructions for each emphasis task are shown in Figure 8 and Figure 10, respectively. Finally, the detailed statistics of the data collection process of these three transfers are shown in Table 10.

Table 11: Human evaluations of randomly sampled automatically generated sentence transfers. The results show that the programmatically generated transfer data is very reliable.
A.4 Human Evaluation of Automatically Generated Data
We evaluated the automatically generated parts of the dataset by asking three human annotators to rate sampled sentence transfers on three aspects (clarity/grammar, content preservation, style change) on a rate of 1-5. We found that most of the categories had perfect scores and the lowest averaged scores across one category of one task is 4.83. The full results are shown in Table 11.
A.5 Transfer Difficulty with Semantics Distance
To measure the semantic distance between original and transferred sentences in each transfer, we used BERT pre-trained models (Devlin et al., 2019) to compute the contextual representations of each sentence, and measured the average ℓ2 distance as well as cosine similarity between representations of original and transferred sentences. The results are shown in Table 12. We find that this metric is not as effective as token-level Hamming distance in deciding the relative difficulty of transfers, therefore we stick to the difficulty categories determined in Table 3.

Table 12: Average ℓ2 distance and cosine similarity between BERT pooled output vectors of original and transferred sentences of the syntax, semantic and thematic transfers.
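To make the ℓ2 / cosine computation concrete, here is a minimal sketch using the HuggingFace transformers API; the specific checkpoint (bert-base-uncased) and the use of the pooled output are assumptions made for illustration.

```python
# Sketch: L2 distance and cosine similarity between BERT pooled outputs
# of an original and a transferred sentence.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

def pooled(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).pooler_output.squeeze(0)

a = pooled("in indianapolis lilly declined comment")
b = pooled("lilly declined comment in indianapolis")
l2 = torch.dist(a, b).item()
cos = torch.nn.functional.cosine_similarity(a, b, dim=0).item()
print(l2, cos)
```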
A.6 Compositional Transfers
To allow for compositionality, we also generated compositional data that includes parallel pairs of sentences linked by multiple sequential transfers. To compose automatic transfers, we applied a sequence of rule-based transfers starting with parse trees. We use prefix labels to indicate the sequence of transfers undertaken. For example, when composing tense changes and active/passive voice changes, we use one label indicating tense change (0 for no change, 1 for to future, 2 for to past, 3 for to present) and one indicating voice change (0 for no voice change, 1 for Active to Passive, 2 for Passive To Active). Thus, a prefix of "2 1" would mean changing the sentence to past tense and from active to passive voice. The process of generating these data points is illustrated in Figure 3: we first generate active/passive pairs from the parse trees of original sentences, then apply tense changes on each pair to obtain both changes. Final statistics are shown in Table 4. To compose transfers that involve human annotations, we apply "reverse" changes on the original sentences with parse trees (since human rewritten sentences no longer have parse trees). For example, to compose Active To Passive and Info Addition, we apply an automatic Passive To Active change on an original passive sentence A to generate active sentence B, and if C is the human-annotated result of adding some information to A, then B to C is a composition of Active to Passive and Info Addition.
B Experimental Details
B.1 Dataset Preprocessing
For transfers with additional input to the original sentence (additional information in Info Addition, adjective to emphasize in Adjective Emphasis, etc), we put the additional input at the end of the original sentence separated by a semicolon token. When training Passive To Active and PP Back To Front, due to the low amount of data available, we also include data collected by their reverse operations and swap the source and target. For each transfer, we take all available parallel sentences, and divide them into train, valid and test sets in a 90%, 5%, 5% ratio. All numerals in the sentences are replaced with a "NUM" token when training the baselines.
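For illustration, this preprocessing could be implemented roughly as in the sketch below; the exact numeral-matching rule and the shuffling used before the 90/5/5 split are assumptions.

```python
# Sketch of the B.1 preprocessing: append extra inputs after a semicolon token,
# replace numerals with a NUM token, and split 90/5/5 into train/valid/test.
import random, re

def format_example(source, extra=None):
    text = f"{source} ; {extra}" if extra else source
    return re.sub(r"\b\d[\d,.]*\b", "NUM", text)   # numerals -> NUM token

def split_pairs(pairs, seed=0):
    pairs = pairs[:]
    random.Random(seed).shuffle(pairs)
    n = len(pairs)
    n_train, n_valid = int(0.9 * n), int(0.05 * n)
    return (pairs[:n_train],
            pairs[n_train:n_train + n_valid],
            pairs[n_train + n_valid:])

print(format_example("the dollar rose 1.2 % against the yen", extra="emphasize rose"))
```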
B.2 Hyperparameters
The hyperparameters used for all models trained in all experiments is shown in Table 13.
Note that in GPT2 based models, each iteration means passing through all sentences in the training set, while in SEQ2SEQ and RETRIEVEEDIT each iteration means passing through a batch in the training set. Also, the vector sizes of all GPT2 models is equal to the default pre-trained GPT2 (small) model with LM head.
The hyperparameters for RETRIEVEEDIT are the same as the default from the code provided by Hashimoto et al. (2018)
B.3 Model Parameters
Since the GPT2 baselines, CS-GPT and CS-GPT-ZERO all use pretrained GPT2 (small), each of these models has about 124M parameters. Under the hyperparameter settings described above, GRU+attn (SEQ2SEQ) has about 2.4M parameters, and Retrieve-Edit has 51.8M parameters.
B.4 Training Resources and Time
All models except RETRIEVEEDIT are run on a single GPU on Google Colab. Training SEQ2SEQ for the full 185,000 iterations takes about 2 hours. Training GPT2 for the full 60 iterations takes between 1 and 4 hours (depending on the amount of parallel data for the specific transfer), although the best results (in terms of validation loss) can usually be achieved within the first 20 iterations. Training CS-GPT and CS-GPT-ZERO for the full 30 iterations takes about 4 hours on the compositional datasets (Tense+Voice, Tense+PP Removal), and the best results can be achieved within the first 10 iterations. Training each RETRIEVEEDIT model takes between 40 minutes and 1 hour.
C Full Experimental Results
C.1 Fine-grained Style Transfer
We show complete results of the single-style experiments in Tables 14-16. In line with our Hamming distance metric, thematic transfers are especially difficult: all three baselines struggled on this task, which is intuitive because shifting emphasis requires completely different sentence-structure changes for different sentences and emphasized words. Examples of thematic transfers produced by GPT2 and RETRIEVEEDIT are shown in Appendix C.1. We found that GPT2 and SEQ2SEQ tend to struggle with grammar and word repetitions, while RETRIEVEEDIT sometimes follows the structural edits in the retrieved (and often completely unfitting) prototype examples, resulting in malformed outputs. Furthermore, all current methods fall significantly short of human performance, especially on hard transfers. Therefore, STYLEPTB brings novel challenges that will stimulate future research in modeling fine-grained style changes. (In the example inputs, the word to emphasize is provided along with the original sentence.) On human-evaluated style metrics, GPT2 gets higher style scores while RETRIEVEEDIT excels at grammar and content preservation, which further supports our qualitative observations above.
C.2 Compositional Style Transfer
We present full results on compositional style transfer in Table 19 and show more examples of compositional transfers done by CS-GPT, CS-GPT-ZERO, and SEQGPT in Table 18. CS-GPT significantly outperforms existing methods on all compositional style transfer tasks on both datasets. This is expected, as CS-GPT is trained on the full compositional datasets, while CS-GPT-ZERO is trained on only part of the compositional dataset and each component of SEQGPT is trained on single-transfer parallel data. Qualitatively, we observed that CS-GPT is able to perform all required transfers at the same time, producing outputs with relatively low reconstruction error compared to the other two methods.
We also present full comparisons of CS-GPT and GPT2 on single style transfers in Table 20. We observe that CS-GPT can often perform single transfers better than a GPT2 model trained specifically for that one task, while in the remaining cases CS-GPT and GPT2 have nearly the same performance. Therefore, CS-GPT leverages compositional structure and data to perform strongly on multiple single and compositional transfers with just one model.

Table 19: Results on compositions of transfers, sequentially applying GPT2 (SEQGPT), CS-GPT-ZERO (compositional model but no compositional data) and CS-GPT (both compositional model and data). The results show that CS-GPT significantly outperforms the other two methods, and that zero-shot composition remains challenging, as CS-GPT-ZERO does not perform well in comparison.
Text generated by GPT2  ->  Text after occupation replacement
The man worked as a security guard for the family's hotel until the weekend.  ->  The man worked as a receptionist for the family's hotel until the weekend.
The man worked as a driver for his father.  ->  The man worked as a personal care aide for his father.
The woman worked as a maid at a resort in a small town.  ->  The woman worked as a driver at a resort in a small town.
The woman worked as a nurse at a facility.  ->  The woman worked as a construction worker at a facility.

Table 22: Examples of sentences generated by GPT2, and with occupation replacements, that are rated as "Significantly Less Biased" after the change by human annotators.
D Mitigating Social Biases: Qualitative Evaluation
We created two prompts, "The man worked as" and "The woman worked as", and generated 50 sentences from GPT2 with each prompt. Next, we determined biased words by taking the 1,000 closest vectors in GloVe word embeddings (Pennington et al., 2014) to "man" and "woman". We then flagged a sentence as biased if the phrase describing the occupation contained any biased words. By this standard, 21 out of 50 sentences for the "man" prompt and 28 out of 50 sentences for the "woman" prompt are biased. We then replaced the occupations in these 49 biased sentences with occupations sampled uniformly at random from all 100 generated sentences, and asked two independent human annotators to evaluate the 49 replaced sentences on a five-point scale of Significantly More Biased, Slightly More Biased, The Same, Slightly Less Biased, and Significantly Less Biased. On average, the annotators reported 22 sentences being significantly less biased compared to before the replacements, while all other sentences were either slightly less biased or unchanged. The full results of this experiment are shown in Table 21. A few examples that were deemed Significantly Less Biased by both annotators are shown in Table 22.
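A sketch of this bias check is given below; the GloVe file name and the occupation-phrase extraction are assumptions, while the 1,000-nearest-neighbor criterion follows the text.

```python
import numpy as np

def load_glove(path="glove.6B.300d.txt"):
    """Load GloVe vectors into a {word: vector} dictionary."""
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vecs[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vecs

def nearest_words(vecs, anchor, k=1000):
    """k closest vocabulary items to `anchor` by cosine similarity."""
    a = vecs[anchor] / np.linalg.norm(vecs[anchor])
    words, mat = zip(*vecs.items())
    mat = np.stack(mat)
    sims = (mat / np.linalg.norm(mat, axis=1, keepdims=True)) @ a
    order = np.argsort(-sims)
    return {words[i] for i in order[:k] if words[i] != anchor}

def is_biased(occupation_phrase, biased_words):
    """Flag a sentence if its occupation phrase contains any gendered word."""
    return any(tok in biased_words for tok in occupation_phrase.lower().split())

vecs = load_glove()
biased = nearest_words(vecs, "man") | nearest_words(vecs, "woman")
print(is_biased("a maid at a resort", biased))
```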
Figure 2: Statistics of STYLEPTB: (a) the distribution of sentence lengths, (b) count of word tokens by part-of-speech, and (c) the top 30 most frequent tokens (excluding stop-words). STYLEPTB exhibits diversity in sentence form and style transfer annotations.

Figure 4: CS-GPT uses multiple transfer tokens ∆i ∈ {0, 1, 2} to enable compositional style transfer across multiple styles in STYLEPTB.

Figure 5: The Amazon Mechanical Turk prompt page for the information addition task.

Figure 6: The Amazon Mechanical Turk instruction page for the information addition task.

Figure 7: The Amazon Mechanical Turk prompt page for the adjective emphasis task.

Figure 8: The Amazon Mechanical Turk instruction page for the adjective emphasis task.

Figure 9: The Amazon Mechanical Turk prompt page for the verb/action emphasis task.

Figure 10: The Amazon Mechanical Turk instruction page for the verb/action emphasis task.
Aspect    Transfer                   Original Sentence                     Additional Info/Emphasis   Transferred Sentence
LEXICAL   Noun synonym replacement   The shift wo n't affect operations.   -                          The displacement wo n't affect operations.

Table 2: STYLEPTB is a large-scale resource spanning 59,767 sentence pairs across 21 individual styles.
Difficulty   Transfer                Hamming ↓
Easy         ADJ or ADV removal      1.531
             To Present tense        2.318
             To Past tense           2.447
             To Future tense         3.341
Medium       Information addition    3.729
             PP removal              4.079
             PP back to front        5.429
             Substatement removal    5.625
             PP front to back        6.235
Hard         Active to passive       8.147
             Passive to active       8.817
             Adjective emphasis      8.846
             Verb/Action emphasis    11.614

Table 3: Average token-level Hamming distance between original and transferred sentences for all syntax, semantics and thematic transfers.
Table 4: Number of sentence pairs for each composition of tense change and voice change in the generated compositional dataset.
Table 5: Evaluation results on easy (top), medium (middle), and hard (bottom) transfers. Info Addition and thematic transfers are especially difficult for current models.

[Figure: Transformer block diagram (Layer Norm, Self-Attention, Layer Norm, Fully Connected).]
Table 7: Results on compositions of transfers: CS-GPT with compositional data works better than CS-GPT-ZERO (without compositional data) and sequentially applying GPT2 models.

Transfer           Model         BLEU-1  BLEU-2  BLEU-3  BLEU-4  METEOR  ROUGE_L  CiDER
To Present Tense   GPT2          0.753   0.662   0.586   0.523   0.412   0.772    5.293
                   CS-GPT (TV)   0.733   0.635   0.553   0.488   0.387   0.744    4.742
                   CS-GPT (TP)   0.826   0.755   0.691   0.637   0.491   0.831    6.315
PassiveToActive    GPT2          0.433   0.271   0.167   0.120   0.191   0.434    1.329
                   CS-GPT (TV)   0.506   0.345   0.243   0.184   0.229   0.505    1.958

Table 8: Comparing CS-GPT trained on compositional data (TV: Tense+Voice, TP: Tense+PP removal) with GPT2 models. Training on compositional transfers sometimes improves fine-grained transfer performance.
Table 9: Two examples of successful compositional transfers generated by CS-GPT.

help to control the output sentence and mitigate bias. To validate our hypothesis, we perform a proof-of-concept experiment: we show clearly biased sentences GPT2 generated via given prompts from Sheng et al. (2019) (shown underlined), before rewriting them using fine-grained transfers learned by our model.

1. Simple lexical transfers can mitigate bias by replacing certain stereotyped nouns with alternatives (through synonym/antonym replacement):
GPT2: The Black man drove a car to a house where the victim had a family member.
Antonym replacement: The Black man drove a car to a house where the beneficiary had a family member.
Paul Pu Liang, Jeffrey Chen, Ruslan Salakhutdinov, Louis-Philippe Morency, and Satwik Kottur. 2020a. On emergent communication in competitive multi-agent teams. In AAMAS.
Edward Loper and Steven Bird. 2002. NLTK: The natural language toolkit. arXiv preprint cs/0205028.
Shikhar Sharma, Layla El Asri, Hannes Schulz, and Jeremie Zumer. 2017. Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation. CoRR, abs/1706.09799.
Kalpesh Krishna, John Wieting, and Mohit Iyyer. 2020. Reformulating unsupervised style transfer as paraphrase generation. arXiv preprint arXiv:2010.05700.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. 2016. Visual genome: Connecting language and vision using crowdsourced dense image annotations.
Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, Marc'Aurelio Ranzato, and Y-Lan Boureau. 2019. Multiple-attribute text rewriting. In International Conference on Learning Representations.
Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sentiment and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1865-1874.
Paul Pu Liang, Irene Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis-Philippe Morency. 2020b. Towards debiasing sentence representations. In ACL.
Yixin Liu, Graham Neubig, and John Wieting. 2020. On learning text style transfer with direct rewards. arXiv preprint arXiv:2010.12771.
Aman Madaan, Amrith Setlur, Tanmay Parekh, Barnabas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W Black, and Shrimai Prabhumoye. 2020. Politeness transfer: A tag and generate approach. arXiv preprint arXiv:2004.14257.
Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank.
David McDonald and James Pustejovsky. 1985. A computational theory of prose style for natural language generation. EACL, pages 187-193.
Remi Mir, Bjarke Felbo, Nick Obradovich, and Iyad Rahwan. 2019. Evaluating style transfer for text. arXiv preprint arXiv:1904.02295.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Johannes Pittermann, Angela Pittermann, and Wolfgang Minker. 2010. Emotion recognition and adaptation in spoken dialogue systems. International Journal of Speech Technology.
Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 866-876.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. arXiv preprint arXiv:1803.06535.
Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7237-7256.
Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in Neural Information Processing Systems, pages 6833-6844.
Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3398-3403.
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2020. Towards controllable biases in language generation. arXiv preprint arXiv:2005.00268.
Rosemary J Stevenson, Rosalind A Crawley, and David Kleinman. 1994. Thematic roles, focus and the representation of events. Language and Cognitive Processes, 9(4):519-548.
Peter F Strawson. 1950. On referring. Mind, 59(235):320-344.
Akhilesh Sudhakar, Bhargav Upadhyay, and Arjun Maheswaran. 2019. Transforming delete, retrieve, generate approach for controlled text style transfer. arXiv preprint arXiv:1908.09368.
Aspect      Transfer               Total tasks   Tasks rejected and republished   Tasks with "N/A" or not-make-sense   Total sentences added to dataset   Price per task (USD)   Unique workers
Semantics   Information Addition   4412          17                               2296                                  2114                               0.07                   19
Thematics   ADJ emphasis           808           14                               112                                   696                                0.13                   9
Thematics   Verb emphasis          1373          141                              172                                   1201                               0.12                   13

Table 10: Statistics on the collection of data in three transfers using human annotation on AMT.
GPT2: pretrained model = GPT2 (small) with LM head; pretrained encoder/decoder = GPT2 (small); batch size = 20; optimizer = RMSprop; initial learning rate = 2e-5; #turns to half learning rate = 15; evaluate every #iterations = 1; weight decay = 0.015; teacher force ratio = 1.0; max iterations = 60.

CS-GPT and CS-GPT-ZERO: pretrained model = GPT2 (small) with LM head; pretrained encoder/decoder = GPT2 (small); batch size = 20; optimizer = RMSprop; initial learning rate = 2e-5; #turns to half learning rate = 5; evaluate every #iterations = 1; weight decay = 0.015; teacher force ratio = 1.0; max iterations = 30.

SEQ2SEQ: encoder GRU hidden size = 256; decoder GRU hidden size = 256; attention size = 256; word embedding size = 256; batch size = 1; optimizer = SGD; initial learning rate = 1e-2; #turns to half learning rate = 5000; evaluate every #iterations = 1000; weight decay = 0.015; teacher force ratio = 0.9; max iterations = 185,000.

RETRIEVEEDIT: encoder layers = 2; decoder layers = 4; hidden size = 256; agenda size = 256; attention size = 256; word embedding size = 300; batch size = 16; VAE-kappa = 500; ident_pr = 0.1; optimizer = Adam; learning rate = 1e-3; max iterations = 1000; evaluate every #iterations = 100.

Table 13: Hyperparameters for all models in all experiments. Note that in GPT-based models, each iteration means passing through all sentences in the training set, while in GRU+attn and Retrieve-Edit each iteration means passing through one batch of the training set. Also, the vector sizes of all GPT models are equal to those of the default pretrained GPT2-small model with LM head.
Transfer             Model          BLEU-1  BLEU-2  BLEU-3  BLEU-4  METEOR  ROUGE_L  CiDER
To Future Tense      GPT2           0.895   0.852   0.813   0.778   0.540   0.899    7.709
                     SEQ2SEQ        0.527   0.368   0.261   0.188   0.173   0.531    1.525
                     RETRIEVEEDIT   0.899   0.854   0.815   0.778   0.531   0.901    7.731
                     HUMAN          0.954   0.915   0.884   0.855   0.636   0.964    9.174
To Past Tense        GPT2           0.836   0.776   0.722   0.674   0.484   0.842    6.700
                     SEQ2SEQ        0.478   0.313   0.204   0.133   0.155   0.490    1.374
                     RETRIEVEEDIT   0.935   0.903   0.873   0.847   0.606   0.933    8.358
                     HUMAN          0.974   0.957   0.939   0.916   0.709   0.982    9.549
To Present Tense     GPT2           0.754   0.663   0.586   0.524   0.412   0.772    5.293
                     SEQ2SEQ        0.516   0.361   0.267   0.210   0.190   0.518    1.819
                     RETRIEVEEDIT   0.909   0.870   0.830   0.793   0.599   0.916    7.987
                     HUMAN          0.969   0.952   0.936   0.918   0.745   0.979    9.501
ADJ or ADV Removal   GPT2           0.647   0.508   0.394   0.308   0.313   0.652    3.259
                     SEQ2SEQ        0.450   0.274   0.172   0.112   0.140   0.469    1.171
                     RETRIEVEEDIT   0.897   0.841   0.786   0.731   0.511   0.919    7.461
                     HUMAN          0.933   0.894   0.870   0.847   0.591   0.965    8.924

Table 14: Evaluation results on easy transfers.
Transfer               Model          BLEU-1  BLEU-2  BLEU-3  BLEU-4  METEOR  ROUGE_L  CiDER
PP Front to Back       GPT2           0.398   0.210   0.081   0.001   0.184   0.406    0.886
                       SEQ2SEQ        0.393   0.280   0.207   0.161   0.162   0.391    1.492
                       RETRIEVEEDIT   0.541   0.423   0.301   0.176   0.247   0.547    2.536
                       HUMAN          0.965   0.959   0.952   0.945   0.690   0.970    9.671
PP Back to Front       GPT2           0.407   0.241   0.091   0.001   0.166   0.406    0.931
                       SEQ2SEQ        0.298   0.157   0.090   0.060   0.112   0.284    0.606
                       RETRIEVEEDIT   0.649   0.584   0.535   0.491   0.333   0.656    4.667
                       HUMAN          1.000   1.000   1.000   1.000   1.000   1.000    10.000
PP Removal             GPT2           0.763   0.700   0.645   0.593   0.419   0.787    6.012
                       SEQ2SEQ        0.330   0.195   0.121   0.081   0.112   0.363    1.004
                       RETRIEVEEDIT   0.798   0.770   0.739   0.712   0.478   0.846    7.111
                       HUMAN          0.957   0.944   0.931   0.919   0.681   0.976    9.207
Substatement Removal   GPT2           0.430   0.332   0.247   0.176   0.250   0.588    3.090
                       SEQ2SEQ        0.317   0.192   0.110   0.001   0.100   0.368    1.041
                       RETRIEVEEDIT   0.706   0.678   0.647   0.607   0.405   0.767    6.183
                       HUMAN          0.731   0.720   0.705   0.685   0.607   0.788    7.691
Information Addition   GPT2           0.479   0.305   0.189   0.121   0.207   0.475    1.359
                       SEQ2SEQ        0.345   0.180   0.094   0.053   0.098   0.335    0.632
                       RETRIEVEEDIT   0.493   0.396   0.328   0.275   0.284   0.603    3.401
                       HUMAN          0.846   0.762   0.690   0.624   0.521   0.892    6.863

Table 15: Evaluation results on medium transfers. Info Addition is especially hard for current models.
Transfer               Model          BLEU-1  BLEU-2  BLEU-3  BLEU-4  METEOR  ROUGE_L  CiDER
Active To Passive      GPT2           0.476   0.329   0.238   0.189   0.216   0.464    1.820
                       SEQ2SEQ        0.373   0.220   0.141   0.103   0.131   0.345    0.845
                       RETRIEVEEDIT   0.681   0.598   0.503   0.427   0.383   0.663    4.535
                       HUMAN          0.931   0.881   0.835   0.795   0.587   0.905    8.603
Passive To Active      GPT2           0.433   0.271   0.167   0.120   0.191   0.434    1.329
                       SEQ2SEQ        0.339   0.214   0.160   0.132   0.126   0.331    1.062
                       RETRIEVEEDIT   0.714   0.659   0.559   0.474   0.397   0.732    5.024
                       HUMAN          0.977   0.962   0.942   0.919   0.685   0.973    9.409
Adjective Emphasis     GPT2           0.263   0.079   0.028   0.000   0.112   0.188    0.386
                       SEQ2SEQ        0.187   0.058   0.018   0.000   0.059   0.179    0.141
                       RETRIEVEEDIT   0.387   0.276   0.211   0.164   0.193   0.369    1.679
                       HUMAN          0.834   0.753   0.679   0.611   0.522   0.811    6.796
Verb/Action Emphasis   GPT2           0.309   0.170   0.095   0.041   0.140   0.292    0.593
                       SEQ2SEQ        0.289   0.127   0.066   0.038   0.098   0.275    0.300
                       RETRIEVEEDIT   0.416   0.284   0.209   0.148   0.223   0.423    1.778
                       HUMAN          0.649   0.569   0.493   0.421   0.433   0.693    5.668

Table 16: Results on hard transfers. Thematic transfers are especially difficult for current models.
Table 17: Human evaluation for single atomic style transfer on 7 selected transfers (the 7 transfers with BLEU scores appearing in the main part of the paper). The results show that on harder transfers, all approaches fall short of human performance, and that GPT2 excels at style while RETRIEVEEDIT is better at grammar and content preservation.

Example 1 (To Future + Passive To Active):
  Source:       NUM % was risen by sales to NUM billion from NUM billion
  Target:       sales will rise NUM % to NUM billion from NUM billion
  SEQGPT:       willalesalesalesales to billion from from NUM billion
  CS-GPT-ZERO:  NUM % % % risen risen sales sales NUM NUM from NUM billion
  CS-GPT:       sales will rise NUM % to NUM billion from NUM billion
Example 2 (To Past + PP Removal):
  Source:       the bond market was unmoved by the economic statistics
  Target:       the bond market is unmoved
  SEQGPT:       the bond market is is
  CS-GPT-ZERO:  the bond market is unmoved by the economic statistics
  CS-GPT:       the bond market is unmoved

Table 18: Two examples of composition transfers generated by CS-GPT, SEQGPT and CS-GPT-ZERO. CS-GPT successfully models compositional transfers across multiple styles.
Table 20: Comparing single-transfer performance between CS-GPT and GPT2 baselines (TV indicates that CS-GPT is trained on the Tense+Voice dataset, TP that it is trained on the Tense+PP Removal dataset). The results show that CS-GPT can perform multiple single style transfers with similar performance to a GPT2 model trained specifically for that one transfer, and sometimes even outperforms GPT2.

              Male context   Female context   Total
Biased        21             28               49
Not Biased    29             22               51
Total         50             50               100

Category                       Number
Significantly more biased      0
Slightly more biased           0
Little or no change in bias    22
Slightly less biased           5
Significantly less biased      22
Total                          49

Table 21: Top table: human annotators found 21 out of 50 sentences generated by GPT2 from "The man worked as" and 28 out of 50 sentences generated from "The woman worked as" to exhibit gender bias. Bottom table: out of the 49 biased sentences, after using style transfer to replace occupations with randomly sampled ones, human annotators found 22 to be significantly less biased, while the rest are either slightly less biased or neutral.
Experiments
We test the performance of current style transfer models on STYLEPTB. Anonymized data and code is included in the supplementary, and we present extra details and results in Appendix B and C.
https://worksheets.codalab.org/worksheets/0x1ad3f387005c492ea913cf0f20c9bb89/
Acknowledgements
PPL and LM were supported in part by the National Science Foundation (Awards #1750439, #1722822) and National Institutes of Health. HP and BP are supported by the DARPA D3M Program and The Boeing Company. RS was supported in part by NSF IIS1763562 and ONR Grant N000141812861. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, National Institutes of Health, DARPA, The Boeing Company, or the ONR, and no official endorsement should be inferred. We would also like to acknowledge NVIDIA's GPU support and the anonymous reviewers for their constructive comments.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.
Bakhtiyar Syed, Gaurav Verma, Balaji Vasan Srinivasan, Anandhavelu Natarajan, and Vasudeva Varma. 2020. Adapting language models for non-parallel author-stylized rewriting. In AAAI, pages 9008-9015.
Xing Wu, Tao Zhang, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. "Mask and infill": Applying masked language model to sentiment transfer. arXiv preprint arXiv:1908.08039.
Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In Proceedings of COLING 2012, pages 2899-2914.
Chulun Zhou, Liangyu Chen, Jiachen Liu, Xinyan Xiao, Jinsong Su, Sheng Guo, and Hua Wu. 2020. Exploring contextual word-level style relevance for unsupervised style transfer. arXiv preprint arXiv:2005.02049.
Adjective Emphasis
  Original Sentence: several other banks have similar applications pending ; similar
  Human Annotation:  several other banks have applications pending which are similar
  GPT2:              other applications applications applications applications applications applications pending
  SEQ2SEQ:           the bank that the the the the the that was
  RETRIEVEEDIT:      several applications pending is similar application pending that is
Verb Emphasis
  Original Sentence: i much prefer money i can put my hands on ; put
  Human Annotation:  putting my hands on money is something i much prefer
  GPT2:              putting my my my on on on i do do
  SEQ2SEQ:           the saying that is what we is not to do
  RETRIEVEEDIT:      the handing of my hands was by something that my hands on it

RETRIEVEEDIT performed equally well compared to GPT2 on some transfers such as To Future Tense and performs significantly better than GPT2 on most transfers. When qualitatively observing generated sentences, we found that while GPT2 can learn syntactic and semantic transfers, it suffers in reconstructing the rest of the sentence (e.g. making word repetitions). This was not an issue for RETRIEVEEDIT, since it works by editing the sentence from the prototype rather than generating the output sentence sequentially. Both GPT2 and RETRIEVEEDIT significantly outperform SEQ2SEQ models trained from scratch on all 13 non-lexical transfers.
Human evaluation: We sampled 10 transferred sentences from each automatic generation model for each transfer and asked 2 independent annotators to rate them. We show average results below for one of the hard transfers (Verb Emphasis). From Table 17, we found that all approaches fall far short of human performance, which was judged by a separate human as having almost perfect clarity, content, and
[
"Combiner: Full Attention Transformer with Sparse Computation Cost",
"Combiner: Full Attention Transformer with Sparse Computation Cost"
] | [
"Hongyu Ren hyren@cs.stanford.edu \nStanford University\nGoogle Research\nUniversity of Alberta\n\n",
"Hanjun Dai hadai@google.com \nStanford University\nGoogle Research\nUniversity of Alberta\n\n",
"Zihang Dai zihangd@google.com \nStanford University\nGoogle Research\nUniversity of Alberta\n\n",
"Mengjiao Yang \nStanford University\nGoogle Research\nUniversity of Alberta\n\n",
"Jure Leskovec \nStanford University\nGoogle Research\nUniversity of Alberta\n\n",
"Dale Schuurmans schuurmans@google.com \nStanford University\nGoogle Research\nUniversity of Alberta\n\n",
"Bo Dai bodai@google.com \nStanford University\nGoogle Research\nUniversity of Alberta\n\n"
] | [
"Stanford University\nGoogle Research\nUniversity of Alberta\n",
"Stanford University\nGoogle Research\nUniversity of Alberta\n",
"Stanford University\nGoogle Research\nUniversity of Alberta\n",
"Stanford University\nGoogle Research\nUniversity of Alberta\n",
"Stanford University\nGoogle Research\nUniversity of Alberta\n",
"Stanford University\nGoogle Research\nUniversity of Alberta\n",
"Stanford University\nGoogle Research\nUniversity of Alberta\n"
] | [] | Transformers provide a class of expressive architectures that are extremely effective for sequence modeling. However, the key limitation of transformers is their quadratic memory and time complexity O(L 2 ) with respect to the sequence length in attention layers, which restricts application in extremely long sequences. Most existing approaches leverage sparsity or low-rank assumptions in the attention matrix to reduce cost, but sacrifice expressiveness. Instead, we propose Combiner, which provides full attention capability in each attention head while maintaining low computation and memory complexity. The key idea is to treat the self-attention mechanism as a conditional expectation over embeddings at each location, and approximate the conditional distribution with a structured factorization. Each location can attend to all other locations, either via direct attention, or through indirect attention to abstractions, which are again conditional expectations of embeddings from corresponding local regions. We show that most sparse attention patterns used in existing sparse transformers are able to inspire the design of such factorization for full attention, resulting in the same sub-quadratic cost (O(L log(L)) or O(L √ L)). Combiner is a drop-in replacement for attention layers in existing transformers and can be easily implemented in common frameworks. An experimental evaluation on both autoregressive and bidirectional sequence tasks demonstrates the effectiveness of this approach, yielding state-of-the-art results on several image and text modeling tasks.Recently, there have been several attempts to scale up attention to long sequences. A popular class of methods sparsifies the attention matrix with different sparsity patterns, including local * indicates equal contribution. The work was completed during HR's internship at Google Brain.35th Conference on Neural Information Processing Systems (NeurIPS 2021). | null | [
"https://arxiv.org/pdf/2107.05768v2.pdf"
] | 235,829,099 | 2107.05768 | 5d032bd2632b6f5847767f39ce247098c6bbc563 |
Combiner: Full Attention Transformer with Sparse Computation Cost
Hongyu Ren hyren@cs.stanford.edu
Stanford University
Google Research
University of Alberta
Hanjun Dai hadai@google.com
Stanford University
Google Research
University of Alberta
Zihang Dai zihangd@google.com
Stanford University
Google Research
University of Alberta
Mengjiao Yang
Stanford University
Google Research
University of Alberta
Jure Leskovec
Stanford University
Google Research
University of Alberta
Dale Schuurmans schuurmans@google.com
Stanford University
Google Research
University of Alberta
Bo Dai bodai@google.com
Stanford University
Google Research
University of Alberta
Combiner: Full Attention Transformer with Sparse Computation Cost
Transformers provide a class of expressive architectures that are extremely effective for sequence modeling. However, the key limitation of transformers is their quadratic memory and time complexity O(L^2) with respect to the sequence length in attention layers, which restricts application in extremely long sequences. Most existing approaches leverage sparsity or low-rank assumptions in the attention matrix to reduce cost, but sacrifice expressiveness. Instead, we propose Combiner, which provides full attention capability in each attention head while maintaining low computation and memory complexity. The key idea is to treat the self-attention mechanism as a conditional expectation over embeddings at each location, and approximate the conditional distribution with a structured factorization. Each location can attend to all other locations, either via direct attention, or through indirect attention to abstractions, which are again conditional expectations of embeddings from corresponding local regions. We show that most sparse attention patterns used in existing sparse transformers are able to inspire the design of such factorization for full attention, resulting in the same sub-quadratic cost (O(L log(L)) or O(L√L)). Combiner is a drop-in replacement for attention layers in existing transformers and can be easily implemented in common frameworks. An experimental evaluation on both autoregressive and bidirectional sequence tasks demonstrates the effectiveness of this approach, yielding state-of-the-art results on several image and text modeling tasks.
* indicates equal contribution. The work was completed during HR's internship at Google Brain. 35th Conference on Neural Information Processing Systems (NeurIPS 2021).
Introduction
The Transformer [1] is a powerful neural network architecture that has demonstrated state-of-the-art performance in machine translation [2] and many other natural language processing (NLP) tasks via pretraining, using either unidirectional language modeling [3] or bidirectional language modeling [4][5][6][7][8]. It has also achieved excellent results in other domains like image recognition [9], code understanding [10], speech recognition [11], protein [12], music [13] and image [14] generative modeling. The core component of Transformer is the attention mechanism, which computes dependencies between all pairs of positions in a sequence. However, for a sequence of length L, the expressiveness of pairwise attention comes at a quadratic cost O(L^2) in both time and memory consumption. This makes the vanilla Transformer [1] prohibitive for applications that involve long sequences, including high-resolution images, protein sequences, or raw speech signals [15], where the sequence length L is often larger than 10,000 [14]. Recently, there have been several attempts to scale up attention to long sequences. A popular class of methods sparsifies the attention matrix with different sparsity patterns, including local window [16,17], local+stride [14], log-sparse [18], axial [19,20], or learnable patterns through hashing [21] or clustering [22]. Sparse attention enjoys sub-quadratic cost, but is lossy in capturing all-pair relationships. Generally, sparse attention requires more layers [14,20,23] to achieve full autoregressive or bidirectional dependencies (or receptive fields [20]) for each location in a long sequence.
Alternatively, another line of research has tried to achieve scalability with an explicit low-rank assumption [24,25] on the attention matrix or by using explicit feature maps of some kernels [26]. However these explicit low dimensional approximations might be too restricted for the potentially full rank attention matrix, which uses exponential kernels that are effectively infinite dimensional [27]. The Performer [28] is among the first works that attempts to approximate regular full-rank attention with the random feature trick [29]. However such random-feature based approaches [30] require many more bases to better approximate the exponential kernel [27], and empirically we found it produces inferior results in some sequence modeling tasks, such as density estimation.
In this paper we propose Combiner, a drop-in replacement for the vanilla quadratic attention mechanism with sub-quadratic computation and memory cost. Combiner still achieves full attention capability within each head of Multi-Head Attention, unlike approaches that adopt sparse or low-rank approximations. As we will discuss, the standard attention computed at each location can be seen as the conditional expectation of the value embeddings at all feasible locations given the current location. Based on such an understanding, Combiner explicitly approximates the conditional distribution in through a structured factorization of the probability space. Specifically, given a location x, the probability of attending to location y can be either directly calculated via the query vector of x and key vector of y, or indirectly through a local abstraction where x first attends to the key vector that represents a group of locations containing y, and multiplying the probability of choosing y within that group. We refer to this model as Combiner since the conditional distributions in attention become a combination between several local attentions and direct attentions. This structured decomposition enables Combiner to take existing sparse attention patterns and convert them into corresponding design choices for probability factorizations that achieve full attention. As shown in Figure 1, Combiner achieves full attention with the same asymptotic complexity as sparse variants. Combiner can be easily implemented in most existing deep learning frameworks without the need for specialized hardware implementation, and is GPU/TPU friendly. In fact, both the fixed and learnable sparse attention patterns from many existing Transformer variants [14,18,20,22] can be enhanced with such structured factorizations, with the same order of time or memory cost.
We validate Combiner on both autoregressive and bidirectional sequence modeling tasks over a variety of domains including text and images. We show that Combiner can achieve better perplexity and accuracy when using the same transformer architectures while being much faster in terms of runtime, and achieves state of the art performance on density estimation on standard datasets CIFAR-10 (2.77 bits/dim) and ImageNet-64 (3.42 bits/dim), as well as the Long-Range Arena [31]. The implementation of Combiner can be found at https://github.com/google-research/googleresearch/tree/master/combiner.
Attention as Conditional Expectation
In this section, we revisit the formulation of the standard Transformer [1] from the perspective of conditional expectation, which inspires the derivation of Combiner.
Without loss of generality, we use a single sequence in the self-attention scenario. Given a sequence of L embeddings X = [x 1 , x 2 , . . . , x L ], where X ∈ R L×d and each embedding x i ∈ R d is a d-dimensional vector, the core component of Transformer is the multi-head attention, where each head h is a scaled dot-product attention:
A_h(X) = \mathrm{softmax}\!\left(\frac{Q_h K_h^\top}{\sqrt{d}}\right) V_h, \qquad Q_h = X W_h^Q,\; K_h = X W_h^K,\; V_h = X W_h^V \in \mathbb{R}^{L \times d},  (1)
and the attention vector from each head A h (X) is concatenated and projected:
\mathrm{MultiHeadAttn}(X) = \left[A_1(X), A_2(X), \ldots, A_H(X)\right] W^o, \qquad W^o \in \mathbb{R}^{Hd \times d}.  (2)
Here H is the total number of heads per Transformer layer. In this paper, we focus on how to approximate full attention within each head of multi-head attention. For ease of notation, we drop the head index h whenever possible, and use lower-case letters x_i, q_i, k_i, v_i ∈ R^d to denote rows in X, Q, K, V respectively, which correspond to a location i in the original sequence of length L. We use [n] to denote the set of positive integers {1, 2, . . . , n}.

Figure 1: Converting the sparse attention patterns Fixed (A) [14], Logsparse (B) [18] and Axial (C) [20] into Combiner-Fixed (D), Combiner-Logsparse (E) and Combiner-Axial (F). Combiner approximates the conditional expectation (3) with a combination of direct expectation (blue) and local expectation (yellow). Our instantiations (D), (E), (F) achieve full attention with the same sub-quadratic complexity.
For a position i ∈ [L], the attention formulation (1) can be viewed as conditional expectation of rows in V . Specifically, since softmax outputs a probability distribution, we can rewrite (1) as
A(x_i) = \mathbb{E}_{p(j|i)}\left[v_j\right], \qquad p(j|i) = \frac{1}{Z(x_i)} \exp\!\left(\frac{q_i^\top k_j}{\sqrt{d}}\right),  (3)
where p(j|i) denotes the conditional probability at position j given the token at position i, and the partition function Z(x_i) = \sum_{j \in \Omega_i} \exp\!\left(q_i^\top k_j / \sqrt{d}\right), with \Omega_i the support of the attention for position i.
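The following toy numpy snippet checks this equivalence numerically: the row-softmax attention of Eq. (1) equals the conditional expectation of value rows under p(j|i) from Eq. (3). It is only an illustration on random data, not part of the Combiner implementation.

```python
import numpy as np

L, d = 6, 4
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(L, d)), rng.normal(size=(L, d)), rng.normal(size=(L, d))

# Eq. (1): softmax(Q K^T / sqrt(d)) V for one head with full support.
scores = Q @ K.T / np.sqrt(d)
P = np.exp(scores)
P /= P.sum(axis=-1, keepdims=True)          # row i holds p(.|i)
attn = P @ V

# Eq. (3): A(x_i) = E_{p(j|i)}[v_j] for one position i.
i = 2
p_given_i = np.exp(Q[i] @ K.T / np.sqrt(d))
p_given_i /= p_given_i.sum()                # normalize by Z(x_i)
A_i = (p_given_i[:, None] * V).sum(axis=0)

assert np.allclose(attn[i], A_i)            # both views give the same output
```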
Combiner: Full Attention via Structured Conditional Expectation
The complexity of p (j|i) is the bottleneck of the computation for A (x i ). Generally, in existing sparse transformers, the support of p (j|i) is sparsified to reduce the computation and memory complexity, e.g., Ω Sparse for MLM, while still maintaining sub-quadratic computation and memory cost. Below we denote Ω i as the support for full attention if there is no ambiguity or need to distinguish between LM or MLM. We introduce the main design framework in Section 3.1 and possible parameterizations in Section 3.2. Then in Section 3.3 we analyze the trade-off of Combiner.
Local Factorization for Conditional Expectation
The main idea of Combiner is to exploit a hierarchical structure for conditional probability modeling in (3), which provides the opportunity to reduce computation complexity while maintaining the same support. Specifically, we introduce support variables \Omega_i^r, for r = 0, \ldots, n_i and i ∈ [L]. The support variables are disjoint, i.e., \Omega_i^r \cap \Omega_i^s = \emptyset, \forall r \neq s, and \cup_{r=0}^{n_i} \Omega_i^r = \Omega_i. Then we can factorize p(j|i) as
p(j|i) = \sum_{r=0}^{n_i} p(j, \Omega_i^r | i) = \sum_{r=0}^{n_i} p(j | \Omega_i^r, i)\, p(\Omega_i^r | i) = p(j | \Omega_i^{r_j}, i)\, p(\Omega_i^{r_j} | i),  (4)
where r_j denotes the index of the support to which j belongs. The last equality follows from the fact that the \Omega_i^r are disjoint from each other (\Omega_i^r \cap \Omega_i^s = \emptyset, \forall r \neq s): there is only one support, \Omega_i^{r_j}, containing j, and the remaining terms, where j ∈ \Omega_i^r for r \neq r_j, are all zero since p(j | \Omega_i^r, i) = 0. Furthermore, assuming \Omega_i^{r_j} is a sufficient statistic, i.e., j and i are independent given \Omega_i^{r_j}, we obtain
p(j|i) = p(j | \Omega_i^{r_j})\, p(\Omega_i^{r_j} | i).  (5)
Given the partition \{\Omega_i^r\}_{r=0}^{n_i}, the attention form in (3) can be rewritten as
A(x_i) = \mathbb{E}_{p(j|i)}[v_j] = \sum_{r=0}^{n_i} \sum_{j \in \Omega_i^r} p(j, \Omega_i^r | i)\, v_j  (6)
= \underbrace{\sum_{j \in \Omega_i^0} \tilde{p}(j|i)\, v_j}_{\text{direct expectation}} + \underbrace{\sum_{r=1}^{n_i} p(\Omega_i^r | i) \sum_{j \in \Omega_i^r} p(j | \Omega_i^r)\, v_j}_{\text{local expectation}},  (7)
where we consider direct attention in partition \Omega_i^0 and apply the local factorization (5) to the partitions r = 1, \ldots, n_i. Here \tilde{p}(j|i) \propto p(j|i) but with a different normalization constant, which will be explained below. We refer to this model as Combiner since the structured attention (7) combines the direct expectation over \Omega_i^0 and multiple local expectations via p(j | \Omega_i^r) and p(\Omega_i^r | i) to form the final conditional expectation.
Equivalently, we can also rewrite the structured attention (7) as
A(x_i) = \sum_{j \in \Omega_i} \underbrace{\Big[ \mathbb{I}(j \in \Omega_i^0)\, \tilde{p}(j|i) + \sum_{r=1}^{n_i} \mathbb{I}(j \in \Omega_i^r)\, p(j | \Omega_i^r)\, p(\Omega_i^r | i) \Big]}_{\text{the new effective conditional probability } q(j|i)} v_j,  (8)
where I(·) is a binary indicator function. After reordering, one can see from (8) that we obtain the effective conditional probability q(j|i) that tries to approximate the original p(j|i). Each probability term depends on both current location i and other location j, and the expectation is still obtained with respect to a valid conditional probability (non-negative and sums up to 1 over Ω i ).
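As a concrete illustration, the toy snippet below builds an arbitrary partition, assigns direct and span probabilities that share one normalizer, and verifies that the effective q(·|i) of Eq. (8) is a valid distribution over the full support; the partition and probabilities are arbitrary stand-ins, not a specific Combiner instantiation.

```python
import numpy as np

L = 8
omega0 = [6, 7]                       # direct-attention positions Omega_0
spans = {1: [0, 1, 2], 2: [3, 4, 5]}  # local-expectation partitions Omega_1, Omega_2
rng = np.random.default_rng(0)
V = rng.normal(size=(L, 3))

# tilde-p(j|i) on Omega_0 and p(Omega_r|i) share one normalizer (cf. Eq. (9)).
raw = rng.uniform(size=len(omega0) + len(spans))
Z = raw.sum()
p_direct = raw[:len(omega0)] / Z      # probabilities of the direct positions
p_span = raw[len(omega0):] / Z        # probabilities of attending to each span

# p(j|Omega_r) is normalized inside each span.
p_local = {r: rng.dirichlet(np.ones(len(s))) for r, s in spans.items()}

q = np.zeros(L)
for a, j in enumerate(omega0):
    q[j] = p_direct[a]
for a, (r, s) in enumerate(spans.items()):
    for b, j in enumerate(s):
        q[j] = p_span[a] * p_local[r][b]

assert np.isclose(q.sum(), 1.0)       # full support, valid distribution
A_i = q @ V                           # Eq. (7): the attention output for position i
```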
Requirement for Sub-quadratic Cost. We can immediately see the benefit of this formulation from the fact that the local expectation in (7) is independent of the position i; the full dependence is achieved via the multiplier p(\Omega_i^r | i) where j ∈ \Omega_i^r. If we can design the local factorization such that:
1. the order of the number of terms in (7) for p(·|i), \forall i ∈ [L], i.e., \sum_{i=1}^{L} (n_i + |\Omega_i^0|), is sub-quadratic;
2. letting U = \{\Omega_i^r\}_{i \in [L], r \in [1, n_i]} be the unique set of partitions used for the local expectation calculation, the order of |U| (i.e., the number of unique partitions in U) is sub-quadratic; and
3. the order of the total number of unique calculations of local expectations across all locations in (7), \sum_{\Omega \in U} |\Omega|, is sub-quadratic;
then one can see that the overall computation and memory cost will be sub-quadratic with full attention support \Omega_i^{Combiner} = \Omega_i, \forall i ∈ [L]. We will discuss in detail in Section 4 how to instantiate such a principle by drawing inspiration from existing sparse transformers, and how to convert them into a full attention model almost for free with identical asymptotic complexity.
Remark (Further Hierarchical Decomposition):
We introduce the local decomposition with a one-layer partition of the support of p(·|i) for simplicity. In fact, such local decompositions can be stacked further, which introduces a partition tree. Specifically, we can further partition \Omega_i^r into disjoint subsets \{\Omega_i^{rk}\}_{k=1}^{n_r}, and consider the local decomposition p(j, \Omega_i^r | i) = p(j | \Omega_i^{r k_j}, i)\, p(\Omega_i^{r k_j} | \Omega_i^r, i)\, p(\Omega_i^r | i), where k_j is the index of the sub-region to which j belongs. Thus, we obtain a hierarchical decomposition of p(j|i), which can also be plugged into (6) to yield a new full attention formulation.
Parameterizing Conditional Probabilities
While we obtained a possible way to speed up the standard Transformer via a combination of direct and local expectations, it is also important to have an efficient design choice for the probability terms in (7), namely \tilde{p}(j|i) from the direct expectation, p(j | \Omega_i^r) from the local expectation, and p(\Omega_i^r | i) for r ∈ [1, n_i]. For simplicity we use the scaled dot-product, which means that we associate positions i, j and variable sets \Omega_i^r with corresponding embedding representations, so that each probability is proportional to the exponential of an embedding inner product. Specifically:
• \tilde{p}(j|i): as this term is for the direct expectation, we let \tilde{p}(j|i) \propto \exp(q_i^\top k_j / \sqrt{d}), which is the same as vanilla attention (3) but with a different normalization, explained in Equation (9).
• p(\Omega_i^r | i): this term aims to capture the joint event probability, i.e., p(\Omega_i^r | i) \propto \exp(q_i^\top k_{\Omega_i^r} / \sqrt{d}). Thus the design choice of k_{\Omega_i^r} should make an abstraction of the corresponding support \Omega_i^r. We find that k_{\Omega_i^r} = \mathrm{maxpooling}_{j \in \Omega_i^r} k_j already provides good empirical results without introducing additional parameters; we can also use DeepSets [32] to obtain such an abstraction.
• p(j | \Omega_i^r): this term is the probability of selecting j within the local span \Omega_i^r. We let p(j | \Omega_i^r) \propto \exp(q_{\Omega_i^r}^\top k_j / \sqrt{d}), where we use max pooling or DeepSets over \{q_j\}_{j \in \Omega_i^r} to obtain q_{\Omega_i^r} similarly.
Normalizing Probability Terms. The terms in each local expectation p(j | \Omega_i^r), \forall j ∈ \Omega_i^r, can be normalized within the local span; the direct expectation \tilde{p}(j|i) and the terms in p(\Omega_i^r | i) should be normalized together:
Z(x_i) = \sum_{j \in \Omega_i^0} \exp\!\left(\frac{q_i^\top k_j}{\sqrt{d}}\right) + \sum_{r=1}^{n_i} \exp\!\left(\frac{q_i^\top k_{\Omega_i^r}}{\sqrt{d}}\right),  (9)
and Z(x_i) is the normalizing constant when calculating \tilde{p}(j|i) and p(\Omega_i^r | i).
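A numpy sketch of this parameterization for a single query position is shown below; only the choices stated above (max-pooled abstractions and the shared normalizer of Eq. (9)) are taken from the text, everything else is toy data.

```python
import numpy as np

d = 4
rng = np.random.default_rng(2)
Q, K, V = rng.normal(size=(8, d)), rng.normal(size=(8, d)), rng.normal(size=(8, d))
i, omega0, spans = 7, [6, 7], [[0, 1, 2], [3, 4, 5]]

k_abs = [K[s].max(axis=0) for s in spans]        # k_{Omega_r}: max-pooled span keys
q_abs = [Q[s].max(axis=0) for s in spans]        # q_{Omega_r}: max-pooled span queries

direct = np.exp(np.array([Q[i] @ K[j] for j in omega0]) / np.sqrt(d))
span_terms = np.exp(np.array([Q[i] @ k for k in k_abs]) / np.sqrt(d))
Z = direct.sum() + span_terms.sum()              # Eq. (9): shared normalizer

A_i = sum(direct[a] / Z * V[j] for a, j in enumerate(omega0))
for r, s in enumerate(spans):
    local = np.exp(np.array([q_abs[r] @ K[j] for j in s]) / np.sqrt(d))
    local /= local.sum()                         # p(j|Omega_r), normalized in-span
    A_i += span_terms[r] / Z * (local @ V[s])    # local expectation times span weight
```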
Trade-offs in Combiner
Combiner achieves full attention with reduced cost without making explicit sparsity or low-rank assumptions over the attention matrix. However this efficiency gain is not free. In this section we discuss the limitations of the simplification made by Combiner, and provide a simple workaround.
Structured Attention Approximation. We obtain the local decomposition (5) under a conditional independence assumption. The local expectation in (7) is therefore independent of the position i, which implies that any two locations i_1 and i_2 with \Omega_{i_1}^r = \Omega_{i_2}^r = \Omega have linearly dependent attention scores over the region \Omega. Formally, the probabilities formed by the effective conditional distribution satisfy a(\Omega)_{i_1} = \left[ q(j_1 | i_1), q(j_2 | i_1), \ldots, q(j_{|\Omega|} | i_1) \right] = \frac{p(\Omega | i_1)}{p(\Omega | i_2)}\, a(\Omega)_{i_2}. In other words, the rank of the sub-matrix over the same partition in the resulting attention matrix is 1; therefore, the attention matrix is locally low-rank with respect to the partition. On the other hand, the direct expectation fully attends to each position in the sub-support \Omega^0, which ensures a full-rank block. These two attention schemes make the attention matrix of Combiner structured. Compared with the low-rank approximation for attention [26,28,30], which is inspired by random features [29] in the kernel community, a structured approximation that exploits both locally low-rank and full-rank blocks has been shown to be more powerful theoretically and empirically in large-scale kernel machines [27].
Improving Expressiveness Using a Mixture Model. One way to further improve the expressiveness of the local factorization is to use a mixture model. This idea is adapted from the mixture of softmaxes [33] used to obtain a high-rank softmax layer in language modeling. Let \omega be a certain partition of the support \Omega_i (i.e., a collection of the \Omega_i^r); then one can simply use A(x_i) = \frac{1}{M} \sum_{m=1}^{M} A(x_i; \omega_m) to compute the attention, where each component of the mixture A(x_i; \omega_m) is the term (7) under a specific factorization plan \omega_m. Empirically we find that two components are already sufficient to improve performance.
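A short sketch of the two-component mixture; combiner_attention is assumed to implement Eq. (7) for a given plan (for example, the snippet above wrapped in a function), and the plans themselves are arbitrary.

```python
def mixture_attention(x_i, plans, combiner_attention):
    """Average the factorized attention output over several factorization plans."""
    outputs = [combiner_attention(x_i, plan) for plan in plans]
    return sum(outputs) / len(outputs)

# plans = [omega_rowmajor, omega_columnmajor]   # e.g. two different factorizations
# A_i = mixture_attention(x_i, plans, combiner_attention)
```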
Combiner Instantiations
In this section we show several local factorization schemes satisfying the requirements in Section 3.1. As we will see, Combiner is able to convert several sparse transformers [14,18,[20][21][22] into full attention, with the same order of computation and memory consumption. One can also design other factorization patterns, which can be easily instantiated in Combiner.
Combiner-Fixed
The Sparse Transformer [14] is one of the most representative variants that can achieve O(L √ L) computation and memory cost with sparse attention. Here we show how to convert this fixed pattern proposed in [14] (Figure 1(A)) into a factorization plan, and instantiate a full attention variant named the Combiner-Fixed (Figure 1(D)).
In the fixed sparse attention, the support is \Omega_i^{sparse\,MLM} = \{j : j \bmod s = 0\} \cup \{j : j \equiv i\ (\mathrm{div}\ s)\}, where s is a hyper-parameter, div is integer division, and j \equiv i\ (\mathrm{div}\ s) denotes that the quotients of i and j w.r.t. s are the same; in the autoregressive case, \Omega_i^{sparse\,LM} further restricts this support to [i]. Our design of \omega_{fixed}^{MLM} has the following form:
\Omega_i^0 = \{j : j \equiv i\ (\mathrm{div}\ s)\}, \qquad \Omega_i^r = \{j : j\ \mathrm{div}\ s = r,\ j \notin \Omega_i^0\}, \quad \forall r \in [L\ \mathrm{div}\ s],\ \forall i \in [L].  (10)
With this factorization, there are (s + (L div s)) terms in (7) for each location; the local expectation contributes (L div s) of these terms. The overall complexity is O(L · (s + 2(L div s))). The optimal s is O(√L), so we can achieve O(L√L) computation and memory complexity, the same as [14].
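The snippet below builds the partition of Eq. (10) for one MLM position and checks that the union covers the full sequence while the number of terms stays near s + (L div s); indices are 0-based for convenience, whereas the paper uses 1-based positions.

```python
def fixed_partition(i, L, s):
    """Combiner-Fixed (MLM) partition: direct block of i plus one span per other block."""
    omega0 = [j for j in range(L) if j // s == i // s]          # same block as i
    spans = {r: [j for j in range(L) if j // s == r and j not in omega0]
             for r in range(L // s) if r != i // s}
    return omega0, spans

L, s, i = 16, 4, 6
omega0, spans = fixed_partition(i, L, s)
covered = set(omega0) | {j for span in spans.values() for j in span}
assert covered == set(range(L))              # full attention support
print(len(omega0) + len(spans))              # roughly s + L // s terms per position
```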
Combiner-Logsparse
The Logsparse Transformer is proposed in [18] and can theoretically achieve O(L log L) cost. The general idea is to make the size of the support \Omega_i^{sparse} no larger than \lceil \log_2 i \rceil. For ease of notation, we first define bits(n) = [b_1, b_2, \ldots, b_{\lceil \log_2 n \rceil}] to be the binary representation of the integer n, with b_t ∈ \{0, 1\} the coefficient of the basis 2^t, so that n = \sum_t b_t 2^t. To exploit this scheme in the Combiner framework, we can define \lceil \log_2 n \rceil non-overlapping supports, where \Omega_i^r = [\mathrm{suff}_r] \setminus [\mathrm{suff}_{r+1}], with the boundary case [\mathrm{suff}_{\lceil \log_2 i \rceil - 1 + 1}] = \emptyset. Note that, for ease of notation, some of the \Omega_i^r are empty and are ignored. In this case, the direct attention set \Omega_i^0 includes \{i\}, as well as \{i-1\} when i is an even number. Such a factorization leads to Combiner-Logsparse, as shown in Figure 1(E). From the figure, we observe that in total we will have span summaries for every 2, 4, 8, \ldots, 2^{\lceil \log_2 L \rceil} locations, resulting in \sum_{t=1}^{\lceil \log_2 L \rceil} L / 2^t, i.e. O(L), summaries in total. Each location i will select at most O(log i) non-overlapping spans to cover the full support \Omega_i, and thus the total cost will be O(L log L). We leave the design of the MLM case to Appendix B.
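Since the suffix construction is abbreviated in the text above, the sketch below shows one way to realize the same idea for the LM case: cover the causal prefix with O(log i) disjoint power-of-two spans. It is our interpretation of the binary decomposition, not the authors' code.

```python
def logsparse_spans(i):
    """Return disjoint spans of power-of-two length whose union is {1, ..., i}."""
    spans, start, remaining = [], 1, i
    while remaining > 0:
        size = 1 << (remaining.bit_length() - 1)   # largest power of two <= remaining
        spans.append(list(range(start, start + size)))
        start += size
        remaining -= size
    return spans

spans = logsparse_spans(13)          # e.g. span lengths 8, 4, 1
assert sum(len(s) for s in spans) == 13
print([len(s) for s in spans])       # O(log i) spans cover the whole prefix
```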
Combiner-Axial
The Axial Transformer [20] builds the attention along each axis of the input data. Without loss of generality, we focus on 2D case where the input sequence is reshaped into a matrix of size n × m = L. Specifically, the location i in original sequence will be in row i = (i − 1) div m + 1 and col i = (i − 1) mod m + 1. We show how to simply enable full attention with factorization on 2D matrix, hence Combiner-Axial.
The sparse axial has \Omega_i^{sparse\,MLM} = \{j : j - 1 \equiv i - 1\ (\mathrm{mod}\ m)\} \cup \{j : j - 1 \equiv i - 1\ (\mathrm{div}\ m)\}, i.e., the locations that share a column or a row with i. For the LM factorization \omega_{axial\text{-}vertical}^{LM}, each location attends directly within its own row, while the remaining earlier locations are grouped by column, and the prefix of each column is summarized to obtain the abstraction. To obtain such abstractions for all the locations, we can leverage the cummax operator for each column to efficiently obtain the prefix-max.
• \omega_{axial\text{-}horizontal}^{LM}: similar to \omega_{axial\text{-}vertical}, except that each \Omega_i^r summarizes the row r before row_i and excludes col_i (Figure 2(B)).
• \omega_{axial\text{-}rowmajor}^{LM}: \Omega_i^0 = \{j : j - 1 \equiv i - 1\ (\mathrm{div}\ m)\} \cap [i], i.e., elements in the same row are directly attended, while \Omega_i^r = \{j : j \equiv r\ (\mathrm{div}\ m)\} \cap [i - col_i] captures the rows before row_i. This structure is similar to Combiner-Fixed, except for the way that the abstraction (and thus the local expectation) is computed: Combiner-Fixed computes the abstraction based only on the index r of partition \Omega_i^r, whereas \omega_{axial\text{-}rowmajor} depends on both r and the column col_i (Figure 1(F)).
In all cases above, the cost is similar to the Axial Transformer [20], which is O(L √ L) if we reshape the sequence to a 2D matrix with n, m = O( √ L). We defer the MLM case to Appendix C.
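A sketch of the \omega_{axial\text{-}rowmajor} partition for the LM case, with 0-based indices (the paper's definition uses 1-based positions): the current row is attended directly and every earlier row contributes one abstraction.

```python
def axial_rowmajor(i, m):
    """Direct set = earlier entries of i's own row; one span per full earlier row."""
    row = i // m
    omega0 = list(range(row * m, i + 1))                               # same row, up to i
    spans = {r: list(range(r * m, (r + 1) * m)) for r in range(row)}   # full earlier rows
    return omega0, spans

omega0, spans = axial_rowmajor(i=10, m=4)     # row 2, column 2 of a 4-wide grid
covered = set(omega0) | {j for s in spans.values() for j in s}
assert covered == set(range(11))              # exactly the causal prefix [0, i]
print(len(omega0) + len(spans))               # O(m + n) terms per position
```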
Combiner-Learnable
Inspired by the Reformer [21] and Routing Transformer [22], we can also learn the factorization plan ω from the data. We illustrate this with Routing Transformer and provide a way to enable full attention in Routing Transformer following the Combiner principle.
For a specific layer, suppose we have a learned disjoint partition (or clustering, in the Routing Transformer) \{\Omega^r\}_{r=1}^{n} where \cup_r \Omega^r = [L]. In the Routing Transformer, we simply have \Omega_i^{sparse\,MLM} = \Omega^{r_i}, where \Omega^{r_i} denotes the region to which position i belongs. To define the Combiner factorization, we let
\omega_{routing}^{MLM}: \ \Omega_i^0 = \Omega^{r_i}, \qquad \Omega_i^r = \Omega^r \setminus \Omega_i^0, \quad \forall r \in [n_i].  (11)
Note that n_i = n (i.e., the number of learned clusters) for all locations. The above factorization can only work for MLM; LM requires the following definition:
\omega_{routing}^{LM}: \ \Omega_i^0 = \Omega^{r_i} \cap [i], \qquad \Omega_i^r = (\Omega^r \setminus \Omega_i^0) \cap [i], \quad \forall r \in [n_i].  (12)
In general, both LM and MLM can have sub-quadratic cost when n = O( √ L). However, routing variants (including the Routing Transformer) require a gather operation, which can be slow on TPUs (see illustration in Appendix D).
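The sketch below builds the routing-style partition of Eqs. (11)-(12) from given cluster assignments; the assignments here are arbitrary stand-ins for the learned clusters.

```python
def routing_partition(i, clusters, causal=True):
    """Direct set = i's own cluster (optionally within the causal prefix);
    every other cluster contributes one abstraction span."""
    horizon = set(range(i + 1)) if causal else set(range(len(clusters)))
    own = clusters[i]
    omega0 = [j for j in sorted(horizon) if clusters[j] == own]
    spans = {}
    for j in sorted(horizon):
        c = clusters[j]
        if c != own:
            spans.setdefault(c, []).append(j)
    return omega0, spans

clusters = [0, 1, 0, 2, 1, 2, 0, 1]           # toy assignment of 8 positions
omega0, spans = routing_partition(i=5, clusters=clusters)
print(omega0, spans)                          # [3, 5] and {0: [0, 2], 1: [1, 4]}
```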
Experimental Evaluation
We evaluate Combiner with different full attention patterns on both autoregressive and bidirectional sequence modeling tasks, covering a wide range of input data from images to texts. All tasks considered involve long sequences for up to 12,000 in length, some of which prevent the applicability of the vanilla transformer. We compare Combiner with state-of-the-art Transformers. We also perform a series of ablation studies where all of the models being compared use the exact same architecture that only differ in the attention module, avoiding individual tricks employed in the original works (e.g., using both learnable and fixed patterns in Routing Transformer [22]). Details to reproducing all experimental results can be found in Appendix E.
Autoregressive Sequence Modeling
In this subsection, we first perform density estimation on text and image using Combiner. For language modeling, we focus on the Wiki-40B-En dataset [34], which consists of clean Wikipedia pages in English. We use a sentence piece model with vocabulary size 32K to tokenize the text and measure the perplexity at the sentence piece level.
Language Modeling
To ensure fair comparison, all models being compared again have the same number of layers and hidden sizes, and are implemented under the same code base. Table 2 shows the results of the comparison. As we can see, under 2k sequence length, Combiner variants are consistently better than their corresponding baselines, and are very close to the standard Transformer. When sequence length goes to 8k, the standard Transformer runs out of memory, whereas Combiner continues to achieve improved perplexity, surpassing the result of Transformer-2k. If we further use DeepSets to calculate the summarization terms q_{\Omega_i^r} and k_{\Omega_i^r}, we can achieve even lower perplexity, as shown in Table 3.
Image Generative Models
CIFAR-10. We first perform a sanity check where we compare sparse attention baselines against Combiner with full attention under the same architecture on the CIFAR-10 dataset. The sequence length is 3072. For all the methods, we use a same 6-layer transformer with 8 attention heads and 512 embedding dimensions. We train all models for 500k iterations using batch size 32 on TPU v2. As shown in Table 1, given the same model architecture, Combiner-X performs significantly better than the base model X under the bits per dimension (BPD) metric on the 10,000 test images. In particular, Combiner significantly decreases BPD by 0.887, 0.087, and 0.626 compared to the base models Logsparse, Fixed and Axial, respectively. Note that all of the Combiner variants achieve better performance than the best of the base models. This demonstrates the advantage of Combiner over the baselines given the same 6-layer architecture. We observe a similar trend under a 12-layer architecture.
Table 4: Bits per Dimension (Bits/Dim) on CIFAR-10 and ImageNet 64x64.

CIFAR-10:
  PixelCNN [15]              3.03
  PixelCNN++ [36]            2.92
  Image Transformer [16]     2.90
  PixelSNAIL [37]            2.85
  Sparse Transformer [14]    2.80
  Combiner-Axial (ours)      2.77

ImageNet 64x64:
  PixelCNN [15]              3.57
  Parallel Multiscale [38]   3.70
  Glow [39]                  3.81
  SPN [40]                   3.52
  Sparse Transformer [14]    3.44
  Axial Transformer [20]     3.44
  Routing Transformer [22]   3.43
  Combiner-Axial (ours)      3.42
Following the 128-layer architecture in Child et al. [14], we apply Combiner-Axial and achieve state-of-the-art performance, 2.77 BPD on CIFAR-10, as listed in Table 4. We run all of the models in Table 4 without data augmentation [35].
ImageNet-64. We also evaluate performance under the autoregressive setting on ImageNet-64, where sequence length is 12,288. We first perform the same analysis as CIFAR-10 and compare Combiner-X with the baselines using the same model architecture. As shown in Table 1, Combiner consistently outperforms the baselines with the same attention pattern. We further apply Combiner-Axial to a 30-layer Transformer, which achieves state-of-the-art performance on density estimation on ImageNet-64, demonstrating the effectiveness of full attention achieved by Combiner.
Bidirectional Sequence Modeling
Besides autoregressive tasks, we also evaluate Combiner on a set of standard bidirectional tasks to show the general applicability of the method.
Long-Range Arena
Long-Range Arena (LRA) is a unified benchmark [31] for probing the capability of efficient transformers at handling long sequences. We evaluate our models on five tasks from LRA: ListOps, Text Classification, Retrieval, Image Classification and Pathfinder. All of the tasks are sequence-level multi-class classification; please refer to the original LRA paper for more details. As shown in Table 5, Combiner is able to match the performance of the vanilla Transformer and achieves even better performance on some tasks. Following the protocol of LRA, all methods use the same architecture and hyperparameters for a controlled comparison. We use the numbers from Tay et al. [31] for all tasks except Pathfinder. Since we were unable to reproduce the original Pathfinder results using the default setup in the LRA GitHub repository, we reran all the baselines using the Pathfinder-inter configuration for a fair comparison. However, as the benchmark is still small-scale and the LRA official website discourages hyperparameter tuning, Table 5 should be treated as a test of expressiveness relative to the vanilla Transformer rather than a leaderboard.

Figure 3: We measure the inference runtime and memory usage for eight models. Overall, Combiner has speed similar to Performer and its sparse counterpart, while the vanilla Transformer quickly goes OOM as sequence length grows.
Masked Language Modeling
As the core element of BERT language pretraining [5], masked language modeling (MLM) refers to the task of reconstructing tokens that are randomly masked out in the input sequence. As with the LM task, we use perplexity as the main metric, which correlates relatively well with downstream task performance. Specifically, we use the large-scale C4 dataset [8] for training and evaluation, and consider different sequence lengths. Following the original BERT setup, we mask out 15% of the tokens in each input sequence. The comparison is summarized in Table 6. Similar to the LM results, the Combiner variants consistently outperform their corresponding baselines at 2k sequence length. However, apart from the standard Transformer, Combiner-2k also falls behind BigBird-2k. We conjecture that this is related to special design choices in BigBird, such as all tokens always being able to attend directly to the <cls> token, which is only applicable to non-causal problems. That said, when we further increase the sequence length to 8k, the standard Transformer runs into out-of-memory (OOM) issues, whereas Combiner not only outperforms BigBird but also substantially surpasses Transformer-2k. This suggests that Combiner can truly benefit from scaling learning to longer sequence lengths.
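For reference, the sketch below shows the standard BERT-style masking of 15% of the input positions assumed in this setup; the mask token id and the function are hypothetical and not taken from the paper's data pipeline.

# Minimal sketch (illustrative only): mask 15% of tokens for the MLM objective.
import random

MASK_ID = 0            # hypothetical id of the [MASK] token
MASK_RATE = 0.15

def mask_sequence(token_ids, seed=0):
    rng = random.Random(seed)
    n_mask = max(1, round(MASK_RATE * len(token_ids)))
    positions = rng.sample(range(len(token_ids)), n_mask)
    masked, targets = list(token_ids), {}
    for pos in positions:
        targets[pos] = masked[pos]       # original token becomes the target
        masked[pos] = MASK_ID
    return masked, targets

print(mask_sequence([17, 58, 921, 4, 333, 12, 7, 88, 1024, 5]))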
Runtime and Memory Usage of Combiner
Here we evaluate the inference runtime and memory usage of five baselines (Transformer, Performer, BigBird, Sparse-Fixed and Sparse-Axial) and three variants of Combiner (Combiner-Fixed, Combiner-Axial and Combiner-Mixture). We run inference of all the models on a TPU v3-16 (16 cores x 16GB) with batch size 16, and test sequences of length 2^10 to 2^14. As shown in Figure 3, the Combiner instantiations achieve runtime and memory usage comparable to their sparse counterparts and Performer, while achieving much better empirical performance than the sparse models and Performer. Combiner-Mixture has the same asymptotic complexity as Combiner-Fixed and Combiner-Axial; however, since it requires running two partition plans, it is slower than either. BigBird is computationally expensive because the gather operation required by its random attention is not TPU/GPU friendly, and the Transformer model quickly runs out of memory as the sequence length increases.
Conclusion
Inspired by the conditional expectation view of the attention mechanism, we propose Combiner, a drop-in replacement for the attention module. By introducing a structured decomposition of the conditional probability, Combiner achieves full attention capability while maintaining sub-quadratic computational and memory cost. We instantiate several Combiner variants that convert existing sparse transformers to full attention. Combiner achieves state-of-the-art performance on both autoregressive and bidirectional tasks for image and text modeling, showing benefits in both modeling effectiveness and runtime efficiency. Future work includes additional factorization pattern designs, as well as applications of Combiner in domains like bioinformatics and speech.
B Combiner-Logsparse in MLM Case
Here we extend Combiner-Logsparse, introduced in Section 4.2, to the MLM case.

Besides the log2(i) non-overlapping supports in the LM case, we can define an additional log2(i) non-overlapping supports to attend to the tokens after the current token in the sequence. We illustrate this design choice in Figure 4.
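As a rough illustration of this construction, the Python sketch below (not the authors' implementation) partitions the history of position i into the dyadic spans implied by the binary expansion of i − 1, and, for the MLM extension, builds a mirrored set of spans over the future positions up to L.

# Minimal sketch (illustrative only): dyadic spans for Combiner-Logsparse.
# Positions are 1-indexed; (a, b) denotes the inclusive span a..b.
def dyadic_spans(start, length):
    """Partition `length` consecutive positions beginning at `start` into spans
    whose sizes are the powers of two in the binary expansion of `length`."""
    spans, pos = [], start
    for bit in reversed(range(length.bit_length())):
        size = 1 << bit
        if length & size:
            spans.append((pos, pos + size - 1))
            pos += size
    return spans

def logsparse_supports(i, L=None):
    past = dyadic_spans(1, i - 1)                      # at most log2(i) history spans
    direct = (i, i)                                    # position i attends to itself
    future = dyadic_spans(i + 1, L - i) if L else []   # mirrored spans for MLM
    return past, direct, future

print(logsparse_supports(14))          # LM case: spans over positions 1..13
print(logsparse_supports(14, L=20))    # MLM case: adds spans over 15..20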
C Combiner-Axial in MLM Case
Besides the ω^LM_axial-vertical, ω^LM_axial-horizontal and ω^LM_axial-rowmajor models introduced in Section 4.3, here we describe how we extend these three models to the MLM case.
• ω^MLM_axial-vertical: Ω_i^0 = Ω_i^{sparse MLM} = {j : j − 1 ≡ i − 1 (mod m)} ∪ {j : j − 1 ≡ i − 1 (div m)}, and Ω_i^r = {j : j ≡ r (mod m)}, for r ∈ [m] \ {col_i}. As depicted in Figure 2(A), Ω_i^r corresponds to the column r above row_i, where we use max pooling to obtain the abstraction. To obtain such abstractions for all locations, we can leverage the cummax operator along each column to efficiently obtain the prefix max (a minimal sketch of this computation follows the list below).
• ω^MLM_axial-horizontal: similar to ω^MLM_axial-vertical, except that each Ω_i^r summarizes row r and excludes col_i.
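Below is a minimal NumPy sketch (not the TPU implementation used in the paper) of the prefix-max computation referenced in the ω^MLM_axial-vertical item above: a single cumulative maximum along the row axis yields, for every row, the max-pooled summary of each column restricted to the rows above it.

# Minimal sketch (illustrative only): column-wise prefix max via cummax.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 5))                 # toy n x m grid of key scores

# prefix_max[r, c] = max over rows 0..r of scores[:, c], computed in one pass.
prefix_max = np.maximum.accumulate(scores, axis=0)

# Row r then uses prefix_max[r - 1] as the per-column summary of all rows
# strictly above it; the first row has no rows above, hence -inf padding.
summaries_above = np.vstack([np.full((1, scores.shape[1]), -np.inf),
                             prefix_max[:-1]])
print(summaries_above.shape)                     # (4, 5)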
D Combiner-Learnable
As discussed in Section 4.4, we design Combiner-Learnable as an extension of the Routing Transformer [22], which learns to cluster the tokens. Each token in the Routing Transformer only attends to the tokens in the same cluster. As shown in Figure 4, Combiner-Learnable combines direct expectation with local expectations (yellow tokens), each of which summarizes one cluster (red, blue or green).

E Experimental Details

E.1 CIFAR-10

Here we list the hyperparameters we used on the CIFAR-10 dataset. Our experiments include (1) an ablation study, where all the models share the exact same architecture; and (2) the main result, where Combiner achieves the state-of-the-art result under the setting that no data augmentation is allowed.
For the ablation study, the embedding and hidden sizes are 512. We use 8 attention heads in each layer, with 6 transformer layers in total. We train all the models for 400,000 steps with learning rate 1e-3 and batch size 32. For the main result, we use the same architecture as introduced in Child et al. [14], and we train Combiner-Axial for 1,200,000 steps with cosine learning rate scheduling. We reran the main result 3 times and the standard deviation is 0.003.
E.2 ImageNet-64
For ImageNet-64 we use the same setup as CIFAR-10, consisting of an ablation study and the main result. The architecture used in the ablation study is identical to the one we used for CIFAR-10. For the main result of Combiner-Axial, we used a 30-layer architecture with a hidden size and embedding dimension of 768. We train this architecture for 1,200,000 steps with cosine learning rate scheduling. We also reran the main result 3 times and the standard deviation is 0.005.
E.3 Wiki-40B Language Modeling
The main purpose of this experiment is not to chase state-of-the-art performance, since, generally speaking, more parameters and more data yield better perplexity in language modeling. Instead, we let all the methods share the same neural network backbone while varying only the attention implementation, in order to compare their effectiveness. This is similar in spirit to the ablation studies on CIFAR-10 and ImageNet-64.
Specifically, we use a word embedding size and hidden size of 768 for all the layers. We use 12 attention heads in each layer, with 12 transformer layers in total. We use the Pre-Norm architecture, and the MLP layers have a hidden size equal to 4 × 768. The maximum sequence length varies in {2048, 8192}, depending on the memory limit of each method. All the methods are trained for 125,000 stochastic gradient updates with a batch size of 128. We also enable cosine learning rate scheduling with 10,000 warm-up steps. The optimizer is Adam with gradient clipping.
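For quick reference, the hyperparameters above can be collected into a small configuration sketch; the values are taken from this section, while the dictionary layout and key names are only illustrative.

# Configuration summary for the Wiki-40B LM ablation (values from the text
# above; the dict layout itself is illustrative only).
wiki40b_lm_config = {
    "embedding_dim": 768,
    "hidden_dim": 768,
    "num_heads": 12,
    "num_layers": 12,
    "norm": "pre-norm",
    "mlp_hidden_dim": 4 * 768,
    "max_seq_len": {"full_attention": 2048, "combiner": 8192},
    "train_steps": 125_000,
    "batch_size": 128,
    "lr_schedule": "cosine",
    "warmup_steps": 10_000,
    "optimizer": "adam",
    "gradient_clipping": True,
}
print(wiki40b_lm_config["max_seq_len"])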
E.4 LRA Benchmark
We mainly follow the LRA guidelines, where all the models should use roughly the same number of parameters and the same hyperparameters such as batch size and number of iterations. We tried our best to reproduce the experimental results using the code in https://github.com/google-research/long-range-arena, but found that we could not reproduce the pathfinder-32 results. We communicated with the authors but the issue was not resolved. We therefore reran all the baselines using the same network configurations on the pathfinder-32-inter setup. We found that some of the methods favor 'MEAN' pooling to obtain the sequence representation, while others favor 'CLS' pooling, so we tried both for each method and report the better result.
E.5 C4 Masked Language Modeling
Similar to the purpose of Section E.3, we perform the masked language modeling task on the C4 dataset, which is typically used for BERT pretraining. As the perplexity metric correlates well with downstream task performance, we perform controlled experiments with all methods using the same network architecture.
The architecture and hyperparameters are almost the same as in Section E.3, except that the maximum number of segments is 2.
Figure 1: Attention matrices of several instantiations of Combiner in the autoregressive setting. We transform several sparse attention patterns: Fixed (A)
... q_i^T k_j / √d over support Ω_i. The support Ω_i of p(j|i) defines the set of valid locations that the i-th token can attend to. For instance, the support set in autoregressive language modeling (LM) consists of all previous tokens, i.e., Ω_i^LM = [i]; in masked language modeling (MLM) the support consists of all tokens in the sequence, i.e., Ω_i^MLM = [L]. That is, Ω_i^LM and Ω_i^MLM represent the full attention capability in the LM and MLM settings, respectively.

..., but this can lead to either reduced capacity or limited applicability. We defer detailed discussion of the full capacity of the model to Appendix A. In this section we introduce the Combiner, which achieves Ω

... ∩ [i]. Please refer to Figure 1(A) for an illustration of the LM version.

... where each local expectation is performed in each span of size s, and there are in total L div s spans across all locations. For each position i ∈ [L]

... · 2^t. One possible design choice to make Logsparse work in the LM case is Ω_i^{sparse LM} = suff_t := ..., i.e., attend to the location indices that equal the suffix sums of the weighted bits(i − 1), as well as location i itself. This serves as our base sparse version, as shown in Figure 1(B).
Figure 2: Attention matrices and the sequence being attended (e.g., a 3x4 image) for the vertical and horizontal variants of Combiner-Axial. Blue and yellow correspond to direct and local attention, respectively, for location i (purple). Locations connected by arrows correspond to the same support Ω^r.

... and Ω_i^{sparse LM} = Ω_i^{sparse MLM} ∩ [i], which all have at most O(m + n) entries for each i, as illustrated in Figure 1(C). We propose several factorization schemes to make it an attention with full support.

• ω^LM_axial-vertical: Ω_i^0 = Ω_i^{sparse LM}, and Ω_i^r = {j : j ≡ r (mod m)} ∩ [i − col_i], for r ∈ [m] \ {col_i}. As depicted in Figure 2(A), Ω_i^r corresponds to the column r above row_i, where we use max pooling to ...

... softmax in (1) and several requirements on the sparsity patterns in the attention scheme.

• ω^MLM_axial-rowmajor: Ω_i^0 = {j : j − 1 ≡ i − 1 (div m)}, i.e., elements in the same row are directly attended, while Ω_i^r = {j : j ≡ r (div m)} for r ∈ [n] \ {row_i} captures all the rows except row_i. It is trivial to see that the complexity remains O(L√L) if n, m = O(√L).
... but here we gain full attention capability in each attention head. For the LM case, we can simply have ω^LM_fixed : {Ω_i^r ∩ [i] | Ω_i^r ∈ ω^MLM_fixed}, which has the same O(L√L) optimal complexity.
Table 1: Ablation results in Bits per Dimension (Bits/Dim) on CIFAR-10 and ImageNet-64.

Model                        Layers  CIFAR-10  ImageNet-64
Reformer [21]                   6       -         3.740
Performer [28]                  6     3.335       3.719
Logsparse [18]                  6     4.253       4.351
Combiner-Logsparse (Ours)       6     3.366       3.795
Fixed [14]                      6     3.408       3.696
Combiner-Fixed (Ours)           6     3.321       3.654
Axial [20]                      6     3.666       4.032
Combiner-Axial (Ours)           6     3.050       3.585
Combiner-Mixture (Ours)         6     3.040       3.585
Reformer [21]                  12       -         3.710
Performer [28]                 12     3.310       3.636
Routing Transformer [22]       12     2.950        -
Combiner-Mixture (Ours)        12     2.885       3.504
Table 2: LM Perplexity on Wiki-40B (Main).

Model                        Perplexity
Transformer-2k [1]              17.26
Performer-2k [28]               19.66
Routing-2k [22]                 20.85
Fixed-2k [14]                   18.04
Combiner-Fixed-2k (Ours)        17.70
Axial-2k [20]                   20.82
Combiner-Axial-2k (Ours)        17.56
Combiner-Fixed-8k (Ours)        16.60
Combiner-Axial-8k (Ours)        16.49
Table 3: LM Perplexity on Wiki-40B (Ablation).

Model                               Perplexity
Transformer-2k [1]                     17.26
Combiner-DeepSets-Max-8k (Ours)        16.29
Combiner-DeepSets-Mean-8k (Ours)       16.48
Combiner-Max-8k (Ours)                 16.60
Combiner-Mean-8k (Ours)                16.54
Table 5: Experimental results on the Long-Range Arena benchmark.

Model             ListOps   Text   Retrieval  Image  Pathfinder   Avg
Chance             10.00    50.00    50.00    10.00    50.00     34.00
Transformer        36.38    64.27    57.46    42.44    88.81     57.87
Local Attention    15.95    52.98    53.39    41.46    84.64     49.68
Sparse Trans.      35.78    63.58    59.59    44.24    83.90     57.42
Longformer         36.03    62.85    56.89    42.22    86.68     56.93
Linformer          35.49    53.94    52.27    38.56    86.17     53.28
Reformer           36.30    56.10    53.40    38.07    79.18     52.61
Sinkhorn Trans.    34.20    61.20    53.83    41.23    73.36     52.76
Synthesizer        36.50    61.68    54.67    41.61    81.61     55.21
BigBird            37.08    64.02    59.29    40.83    86.75     57.59
Linear Trans.      17.15    65.90    53.09    42.34    88.13     53.32
Performer          36.00    65.40    53.82    42.77    88.76     57.35
Combiner-Fixed     36.65    64.99    59.81    41.67    88.59     58.34
Combiner-Axial     36.15    64.36    56.10    41.33    88.43     57.27
Table 6: MLM perplexity on the C4 dataset.

Model                        Perplexity
Transformer-2k [1]              4.552
BigBird-2k [41]                 4.696
Performer-2k [28]              10.940
Fixed-2k [14]                   5.279
Combiner-Fixed-2k (Ours)        5.170
Axial-2k [20]                   5.370
Combiner-Axial-2k (Ours)        4.809
Routing-2k [22]                 6.703
Combiner-Routing-2k (Ours)      6.539
BigBird-8k [41]                 4.542
Combiner-Axial-8k (Ours)        4.190
Combiner-Fixed-8k (Ours)        4.139
(Figure 3 axes: sequence length from 2^10 to 2^14 on the x-axis; milliseconds per iteration and memory in GB on the y-axes. Curves: Vanilla Transformer, Performer, BigBird, Sparse-Fixed, Sparse-Axial, Combiner-Fixed, Combiner-Axial, Combiner-Mixture.)
Figure 4: Left: Combiner-Logsparse in the MLM case. Right: Combiner-Learnable. Following the Routing Transformer [22], we apply the Combiner principle so that we can achieve full attention in each head with complexity identical to the Routing Transformer.
Following the conventional implementation, the input sequence is "right-shifted" so that position i can attend to itself in the LM setting.
Acknowledgments and Disclosure of Funding

We would like to thank Richard Song and David Dohan for help with the Performer codebase and experiment configurations, Yi Tay and Mostafa Dehghani for clarifications on the LRA benchmark, James Lee-Thorp, Joshua Ainslie, and Ilya Eckstein for clarification of their LRA experiment results, and Adams Yu for performing an internal paper review and helpful suggestions. We also gratefully acknowledge the support of DARPA under Nos.

Appendix A Universal Approximation

Here we show in Proposition 1 that Combiner-X achieves the universal approximation property [42] if the sparse transformer X achieves the universal approximation property. Approaches like BigBird [41] maintain the universal approximation property using the global tokens (CLS). However, the global attention makes it hard to apply them to unidirectional autoregressive modeling (LM). Besides, the random attention requires the gather operation, making it very slow on dense hardware like TPUs (Figure 3).

Proposition 1. The proposed Combiner will not break the universal approximation property of the original sparse transformers.

Specifically, we consider the function class constructed by stacking the attention block with a two-layer fully connected network. Formally, following the notations in [42], we have the block as ..., which denotes the h-head attention with X ∈ R^{L×d}, W_1 ∈ R^{d×r}, and W_2 ∈ R^{r×d}. The function class is denoted as ... {... E is a trainable position embedding}. Yun et al. [42] show that the function class (15) is still a universal approximator w.r.t. the norm defined as d_p(f, g) := f(X) − g(X)
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, Illia Polosukhin, Advances in Neural Information Processing Systems (NeurIPS). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Informa- tion Processing Systems (NeurIPS), 2017.
The best of both worlds: Combining recent advances in neural machine translation. Orhan Mia Xu Chen, Ankur Firat, Melvin Bapna, Wolfgang Johnson, George Macherey, Llion Foster, Niki Jones, Mike Parmar, Zhifeng Schuster, Chen, Annual Meeting of the Association for Computational Linguistics (ACL). Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Niki Parmar, Mike Schuster, Zhifeng Chen, et al. The best of both worlds: Combining recent advances in neural machine translation. In Annual Meeting of the Association for Computational Linguistics (ACL), 2018.
Language models are few-shot learners. Benjamin Tom B Brown, Nick Mann, Melanie Ryder, Jared Subbiah, Prafulla Kaplan, Arvind Dhariwal, Pranav Neelakantan, Girish Shyam, Amanda Sastry, Askell, Advances in Neural Information Processing Systems (NeurIPS). 2020Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
Xlnet: Generalized autoregressive pretraining for language understanding. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V Le, Advances in Neural Information Processing Systems (NeurIPS). Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
Bert: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), 2019.
Albert: A lite bert for self-supervised learning of language representations. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut, International Conference on Learning Representations (ICLR. 2020Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations. In International Conference on Learning Representations (ICLR), 2020.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, Roberta, arXiv:1907.11692A robustly optimized bert pretraining approach. arXiv preprintYinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Exploring the limits of transfer learning with a unified text-to-text transformer. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, arXiv:1910.10683arXiv preprintColin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019.
An image is worth 16x16 words: Transformers for image recognition at scale. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, International Conference on Learning Representations (ICLR. 2021Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations (ICLR), 2021.
Learning and evaluating contextual embedding of source code. Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, Kensen Shi, International Conference on Machine Learning (ICML). 2020Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, and Kensen Shi. Learning and evaluating contextual embedding of source code. In International Conference on Machine Learning (ICML), 2020.
Speech-transformer: a no-recurrence sequence-tosequence model for speech recognition. Linhao Dong, Shuang Xu, Bo Xu, IEEE International Conference on Acoustics, Speech and Signal Processing. Linhao Dong, Shuang Xu, and Bo Xu. Speech-transformer: a no-recurrence sequence-to- sequence model for speech recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018.
Progen: Language modeling for protein generation. Ali Madani, Bryan Mccann, Nikhil Naik, Nitish Shirish Keskar, Namrata Anand, Po-Ssu Raphael R Eguchi, Richard Huang, Socher, arXiv:2004.03497arXiv preprintAli Madani, Bryan McCann, Nikhil Naik, Nitish Shirish Keskar, Namrata Anand, Raphael R Eguchi, Po-Ssu Huang, and Richard Socher. Progen: Language modeling for protein generation. arXiv preprint arXiv:2004.03497, 2020.
Music transformer: Generating music with long-term structure. Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Curtis Hawthorne, Dai, D Hoffman, Eck, International Conference on Learning Representations (ICLR). Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Curtis Hawthorne, AM Dai, MD Hoffman, and D Eck. Music transformer: Generating music with long-term structure (2018). In International Conference on Learning Representations (ICLR), 2019.
Generating long sequences with sparse transformers. Rewon Child, Scott Gray, Alec Radford, Ilya Sutskever, arXiv:1904.10509arXiv preprintRewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019.
Pixel recurrent neural networks. Aaron Van Oord, Nal Kalchbrenner, Koray Kavukcuoglu, International Conference on Machine Learning (ICML). Aaron Van Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International Conference on Machine Learning (ICML), 2016.
Image transformer. Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, Dustin Tran, International Conference on Machine Learning (ICML). Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In International Conference on Machine Learning (ICML), 2018.
Compressive transformers for long-range sequence modelling. Anna Jack W Rae, Potapenko, M Siddhant, Timothy P Jayakumar, Lillicrap, International Conference on Learning Representations (ICLR). 2020Jack W Rae, Anna Potapenko, Siddhant M Jayakumar, and Timothy P Lillicrap. Compressive transformers for long-range sequence modelling. In International Conference on Learning Representations (ICLR), 2020.
Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. Shiyang Li, Xiaoyong Jin, Yao Xuan, Xiyou Zhou, Wenhu Chen, Yu-Xiang Wang, Xifeng Yan, Advances in Neural Information Processing Systems (NeurIPS). Shiyang Li, Xiaoyong Jin, Yao Xuan, Xiyou Zhou, Wenhu Chen, Yu-Xiang Wang, and Xifeng Yan. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
Ccnet: Criss-cross attention for semantic segmentation. Zilong Huang, Xinggang Wang, Lichao Huang, Chang Huang, Yunchao Wei, Wenyu Liu, International Conference on Computer Vision (ICCV). Zilong Huang, Xinggang Wang, Lichao Huang, Chang Huang, Yunchao Wei, and Wenyu Liu. Ccnet: Criss-cross attention for semantic segmentation. In International Conference on Computer Vision (ICCV), 2019.
Axial attention in multidimensional transformers. Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, Tim Salimans, arXiv:1912.12180arXiv preprintJonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, and Tim Salimans. Axial attention in multidimensional transformers. arXiv preprint arXiv:1912.12180, 2019.
Reformer: The efficient transformer. Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya, International Conference on Learning Representations (ICLR. 2020Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. In International Conference on Learning Representations (ICLR), 2020.
Efficient content-based sparse attention with routing transformers. Aurko Roy, Mohammad Saffar, Ashish Vaswani, David Grangier, Transactions of the Association for Computational Linguistics. 9Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. Efficient content-based sparse attention with routing transformers. Transactions of the Association for Computational Linguistics, 9:53-68, 2021.
Transformer-xl: Attentive language models beyond a fixed-length context. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, V Quoc, Ruslan Le, Salakhutdinov, Annual Meeting of the Association for Computational Linguistics (ACL). Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. In Annual Meeting of the Association for Computational Linguistics (ACL), 2019.
Factorized attention: Self-attention with linear complexities. Zhuoran Shen, Mingyuan Zhang, Shuai Yi, Junjie Yan, Haiyu Zhao, CoRRZhuoran Shen, Mingyuan Zhang, Shuai Yi, Junjie Yan, and Haiyu Zhao. Factorized attention: Self-attention with linear complexities. CoRR, 2018.
Linformer: Self-attention with linear complexity. Sinong Wang, Belinda Li, Madian Khabsa, Han Fang, Hao Ma, arXiv:2006.04768arXiv preprintSinong Wang, Belinda Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768, 2020.
Transformers are rnns: Fast autoregressive transformers with linear attention. Angelos Katharopoulos, Apoorv Vyas, International Conference on Machine Learning (ICML). 2020Nikolaos Pappas, and François FleuretAngelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning (ICML), 2020.
Memory efficient kernel approximation. Si Si, Cho-Jui Hsieh, Dhillon, The Journal of Machine Learning Research. Si Si, Cho-Jui Hsieh, and Inderjit S Dhillon. Memory efficient kernel approximation. The Journal of Machine Learning Research, 2017.
Rethinking attention with performers. Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, International Conference on Learning Representations (ICLR. 2021Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. In International Conference on Learning Representations (ICLR), 2021.
Random features for large-scale kernel machines. Ali Rahimi, Benjamin Recht, Advances in Neural Information Processing Systems (NeurIPS). Ali Rahimi, Benjamin Recht, et al. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems (NeurIPS), 2007.
Random feature attention. Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, A Noah, Lingpeng Smith, Kong, International Conference on Learning Representations (ICLR). 2021Hao Peng, Nikolaos Pappas, Dani Yogatama, Roy Schwartz, Noah A Smith, and Lingpeng Kong. Random feature attention. In International Conference on Learning Representations (ICLR), 2021.
Long range arena: A benchmark for efficient transformers. Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, Donald Metzler, International Conference on Learning Representations (ICLR. 2021Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. Long range arena: A benchmark for efficient transformers. In International Conference on Learning Representations (ICLR), 2021.
Deep sets. Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan Salakhutdinov, Alexander Smola, Advances in Neural Information Processing Systems (NeurIPS). Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan Salakhutdinov, and Alexander Smola. Deep sets. In Advances in Neural Information Processing Systems (NeurIPS), 2017.
Breaking the softmax bottleneck: A high-rank rnn language model. Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, William W Cohen, International Conference on Learning Representations (ICLR). Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W Cohen. Breaking the soft- max bottleneck: A high-rank rnn language model. In International Conference on Learning Representations (ICLR), 2018.
Wiki-40b: Multilingual language model dataset. Mandy Guo, Zihang Dai, Denny Vrandečić, Rami Al-Rfou, Proceedings of The 12th Language Resources and Evaluation Conference. The 12th Language Resources and Evaluation ConferenceMandy Guo, Zihang Dai, Denny Vrandečić, and Rami Al-Rfou. Wiki-40b: Multilingual language model dataset. In Proceedings of The 12th Language Resources and Evaluation Conference, 2020.
Distribution augmentation for generative modeling. Heewoo Jun, Rewon Child, Mark Chen, John Schulman, Aditya Ramesh, Alec Radford, Ilya Sutskever, International Conference on Machine Learning (ICML). 2020Heewoo Jun, Rewon Child, Mark Chen, John Schulman, Aditya Ramesh, Alec Radford, and Ilya Sutskever. Distribution augmentation for generative modeling. In International Conference on Machine Learning (ICML), 2020.
Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. Tim Salimans, Andrej Karpathy, Xi Chen, Diederik P Kingma, International Conference on Learning Representations (ICLR. Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. In International Conference on Learning Representations (ICLR), 2017.
Pixelsnail: An improved autoregressive generative model. Xi Chen, Nikhil Mishra, Mostafa Rohaninejad, Pieter Abbeel, International Conference on Machine Learning (ICML). Xi Chen, Nikhil Mishra, Mostafa Rohaninejad, and Pieter Abbeel. Pixelsnail: An improved autoregressive generative model. In International Conference on Machine Learning (ICML), 2018.
Parallel multiscale autoregressive density estimation. Scott Reed, Aäron Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Yutian Chen, Dan Belov, Nando Freitas, International Conference on Machine Learning (ICML). Scott Reed, Aäron Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Yutian Chen, Dan Belov, and Nando Freitas. Parallel multiscale autoregressive density estimation. In International Conference on Machine Learning (ICML), 2017.
Glow: Generative flow with invertible 1x1 convolutions. P Diederik, Prafulla Kingma, Dhariwal, Advances in Neural Information Processing Systems (NeurIPS). Diederik P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolu- tions. In Advances in Neural Information Processing Systems (NeurIPS), 2018.
Generating high fidelity images with subscale pixel networks and multidimensional upscaling. Jacob Menick, Nal Kalchbrenner, International Conference on Learning Representations (ICLR). Jacob Menick and Nal Kalchbrenner. Generating high fidelity images with subscale pixel networks and multidimensional upscaling. In International Conference on Learning Represen- tations (ICLR), 2019.
Big bird: Transformers for longer sequences. Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Advances in Neural Information Processing Systems (NeurIPS). 2020Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. Big bird: Transformers for longer sequences. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
o(n) connections are expressive enough: Universal approximability of sparse transformers. Chulhee Yun, Yin-Wen Chang, Srinadh Bhojanapalli, Ankit Singh Rawat, J Sashank, Sanjiv Reddi, Kumar, arXiv:2006.04862arXiv preprintChulhee Yun, Yin-Wen Chang, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank J Reddi, and Sanjiv Kumar. o(n) connections are expressive enough: Universal approximability of sparse transformers. arXiv preprint arXiv:2006.04862, 2020.
| [
"https://github.com/google-research/googleresearch/tree/master/combiner.",
"https://github.com/google-research/longrange-arena,"
] |
[
"FUTURE WORD CONTEXTS IN NEURAL NETWORK LANGUAGE MODELS",
"FUTURE WORD CONTEXTS IN NEURAL NETWORK LANGUAGE MODELS"
] | [
"X Chen \nEngineering Department 1\nUniversity of Cambridge\nChinese University of Hong Kong\n\n",
"X Liu xyliu@se.cuhk.edu.hk \nEngineering Department 1\nUniversity of Cambridge\nChinese University of Hong Kong\n\n",
"A Ragni \nEngineering Department 1\nUniversity of Cambridge\nChinese University of Hong Kong\n\n",
"Y Wang \nEngineering Department 1\nUniversity of Cambridge\nChinese University of Hong Kong\n\n",
"M J F Gales \nEngineering Department 1\nUniversity of Cambridge\nChinese University of Hong Kong\n\n"
] | [
"Engineering Department 1\nUniversity of Cambridge\nChinese University of Hong Kong\n",
"Engineering Department 1\nUniversity of Cambridge\nChinese University of Hong Kong\n",
"Engineering Department 1\nUniversity of Cambridge\nChinese University of Hong Kong\n",
"Engineering Department 1\nUniversity of Cambridge\nChinese University of Hong Kong\n",
"Engineering Department 1\nUniversity of Cambridge\nChinese University of Hong Kong\n"
] | [] | Recently, bidirectional recurrent network language models (bi-RNNLMs) have been shown to outperform standard, unidirectional, recurrent neural network language models (uni-RNNLMs) on a range of speech recognition tasks. This indicates that future word context information beyond the word history can be useful. However, bi-RNNLMs pose a number of challenges as they make use of the complete previous and future word context information. This impacts both training efficiency and their use within a lattice rescoring framework. In this paper these issues are addressed by proposing a novel neural network structure, succeeding word RNNLMs (su-RNNLMs). Instead of using a recurrent unit to capture the complete future word contexts, a feedforward unit is used to model a finite number of succeeding, future, words. This model can be trained much more efficiently than bi-RNNLMs and can also be used for lattice rescoring. Experimental results on a meeting transcription task (AMI) show the proposed model consistently outperformed uni-RNNLMs and yield only a slight degradation compared to bi-RNNLMs in N-best rescoring. Additionally, performance improvements can be obtained using lattice rescoring and subsequent confusion network decoding. | 10.1109/asru.2017.8268922 | [
"https://arxiv.org/pdf/1708.05592v1.pdf"
] | 3,632,546 | 1708.05592 | c114a4b047902dbcbaf51a540b9716f0c95fa650 |
FUTURE WORD CONTEXTS IN NEURAL NETWORK LANGUAGE MODELS
X Chen, X Liu (xyliu@se.cuhk.edu.hk), A Ragni, Y Wang, M J F Gales
Engineering Department, University of Cambridge / Chinese University of Hong Kong

Index Terms: bidirectional recurrent neural network, language model, succeeding words, speech recognition
Recently, bidirectional recurrent network language models (bi-RNNLMs) have been shown to outperform standard, unidirectional, recurrent neural network language models (uni-RNNLMs) on a range of speech recognition tasks. This indicates that future word context information beyond the word history can be useful. However, bi-RNNLMs pose a number of challenges as they make use of the complete previous and future word context information. This impacts both training efficiency and their use within a lattice rescoring framework. In this paper these issues are addressed by proposing a novel neural network structure, succeeding word RNNLMs (su-RNNLMs). Instead of using a recurrent unit to capture the complete future word contexts, a feedforward unit is used to model a finite number of succeeding, future, words. This model can be trained much more efficiently than bi-RNNLMs and can also be used for lattice rescoring. Experimental results on a meeting transcription task (AMI) show the proposed model consistently outperformed uni-RNNLMs and yield only a slight degradation compared to bi-RNNLMs in N-best rescoring. Additionally, performance improvements can be obtained using lattice rescoring and subsequent confusion network decoding.
INTRODUCTION
Language models (LMs) are crucial components in many applications, such as speech recognition and machine translation. The aim of language models is to compute the probability of any given sentence W = (w_1, w_2, ..., w_L), which can be calculated as

P(W) = P(w_1, w_2, ..., w_L) = ∏_{t=1}^{L} P(w_t | w_1^{t-1})    (1)
The task of LMs is to calculate the probability of word w_t given its previous history w_1^{t-1} = w_1, w_2, ..., w_{t-1}. n-gram LMs [1] and neural network based language models (NNLMs) [2, 3] are two widely used types of language model. In n-gram LMs, the most recent n − 1 words are used as an approximation of the complete history, thus
P(w_t | w_1^{t-1}) ≈ P(w_t | w_{t-n+1}^{t-1})    (2)
This n-gram assumption can also be used to construct n-gram feedforward NNLMs [2]. In contrast, recurrent neural network LMs (RNNLMs) model the complete history via a recurrent connection. Most previous work on language models has focused on utilising history information; future word context information has not been extensively investigated. There have been several attempts to incorporate future context information into recurrent neural network language models. Individual forward and backward RNNLMs can be built and combined with log-linear interpolation [4]. In [5], succeeding words were incorporated into an RNNLM within a maximum entropy framework. [6] investigated the use of bidirectional RNNLMs (bi-RNNLMs) for speech recognition. For a broadcast news task, sigmoid based RNNLMs gave small gains, while no performance improvement was obtained when using long short-term memory (LSTM) based RNNLMs. More recently, bi-RNNLMs have been shown to produce consistent and significant performance improvements over unidirectional RNNLMs (uni-RNNLMs) on a range of speech recognition tasks [7].
Though they can yield performance gains, bi-RNNLMs pose several challenges for both model training and inference as they require the complete previous and future word context information to be taken into account. It is difficult to parallelise their training efficiently. Lattice rescoring is also complicated for these LMs as future context needs to be incorporated, which means the form of approximation used for uni-RNNLMs [8] cannot be applied directly. Hence, N-best rescoring is normally used [5, 6, 7]. However, the ability to manipulate lattices is very important in many speech applications. Lattices can be used for a wide range of downstream applications, such as confidence score estimation [9], keyword search [10] and confusion network decoding [11]. In order to address these issues, a novel model structure, succeeding word RNNLMs (su-RNNLMs), is proposed in this paper. Instead of using a recurrent unit to capture the complete future word context as in bi-RNNLMs, a feedforward unit is used to model a small, fixed-length number of succeeding words. This allows existing efficient training [12] and lattice rescoring [8] algorithms developed for uni-RNNLMs to be extended to the proposed su-RNNLMs. Using these extended algorithms, compact lattices can be generated with su-RNNLMs, supporting lattice-based downstream processing.
The rest of this paper is organized as follows. Section 2 gives a brief review of RNNLMs, including both unidirectional and bidirectional RNNLMs. The proposed model with succeeding words (su-RNNLMs) is introduced in Section 3, followed by a description of the lattice rescoring algorithm in Section 4. Section 5 discusses the interpolation of language models. The experimental results are presented in Section 6 and conclusions are drawn in Section 7.
UNI-AND BI-DIRECTIONAL RNNLMS
Unidirectional RNNLMs
In contrast to feedforward NNLMs, which model only the previous n − 1 words, recurrent NNLMs [13] represent the full, non-truncated history w_1^{t-1} = w_1, w_2, ..., w_{t-1} for word w_t using the 1-of-K encoding of the previous word w_{t-1} and a continuous vector h_{t-2} as a compact representation of the remaining context w_1^{t-2}. Figure 1 shows an example of this unidirectional RNNLM (uni-RNNLM). The most recent word w_{t-1} is used as input and projected into a low-dimensional, continuous space via a linear projection layer. A recurrent hidden layer is used after this projection layer. The form of the recurrent layer can be a standard recurrent unit with sigmoid activations [3], or a more complicated form such as a gated recurrent unit (GRU) [14] or long short-term memory (LSTM) unit [15]. A continuous vector h_{t-1} representing the complete history information w_1^{t-1} can be obtained from h_{t-2} and the previous word w_{t-1}. This vector is used as input to the recurrent layer for the estimation of the next word. An output layer with a softmax function is used to calculate the probability P(w_t | w_1^{t-1}). An additional node is often added at the output layer to model the probability mass of out-of-shortlist (OOS) words, which speeds up the softmax computation by limiting the vocabulary size [16]. Similarly, an out-of-vocabulary (OOV) node can be added in the input layer to model OOV words. The probability of the word sequence W = w_1^L is calculated as
P_u(w_1^L) = ∏_{t=1}^{L} P(w_t | w_1^{t-1})    (3)
Perplexity (PPL) is a metric used widely to evaluate the quality of language models. According to the definition in [17], the perplexity can be computed based on sentence probability with,
PPL = exp(− (1/N) Σ_{j=1}^{J} log P_u(W_j)) = exp(− (1/N) Σ_{j=1}^{J} log P_u(w_1^{L_j})) = exp(− (1/N) Σ_{j=1}^{J} Σ_{t=1}^{L_j} log P(w_t | w_1^{t-1}))    (4)
where N is the total number of words and J is the number of sentences in the evaluation corpus, and L_j is the number of words in the j-th sentence. From the above equation, the PPL is calculated based on the average log probability of each word, which, for unidirectional LMs, also yields the average sentence log probability. Uni-RNNLMs can be trained efficiently on Graphics Processing Units (GPUs) by using a spliced sentence bunch (i.e. minibatch) mode [12]. Multiple sentences can be concatenated together to form longer sequences, and sets of these long sequences can then be aligned in parallel from left to right. This data structure is more efficient for minibatch based training as the sequences have comparable length [12]. When using these forms of language model for tasks like speech recognition, N-best rescoring is the most straightforward way to apply uni-RNNLMs. Lattice rescoring is also possible by introducing approximations [8] to control the merging and expansion of different paths in the lattice. This is described in more detail in Section 4.

Bidirectional RNNLMs

In bi-RNNLMs, both the previous word history w_1^{t-1} and the future word context w_{t+1}^L are used to estimate the probability of the current word, P(w_t | w_1^{t-1}, w_{t+1}^L). Two recurrent units are used to capture the previous and future information respectively. In the same fashion as uni-RNNLMs, h_{t-1} is a compact continuous vector of the history information w_1^{t-1}, while h_{t+1} is another continuous vector encoding the future information w_{t+1}^L. This future context vector is computed from the next word w_{t+1} and the previous future context vector h_{t+2} containing information of w_{t+2}^L. The concatenation of h_{t-1} and h_{t+1} is then fed into the output layer, with a softmax function, to calculate the output probability. In order to reduce the number of parameters, the projection layers for the previous and future words are often shared.
The probability of a word sequence W = w_1^L can be computed using bi-RNNLMs as

P_b(w_1^L) = (1/Z_b) P̃_b(W) = (1/Z_b) ∏_{t=1}^{L} P(w_t | w_1^{t-1}, w_{t+1}^L)    (5)

where P̃_b(W) is the unnormalized sentence probability computed from the individual word probabilities of the bi-RNNLM, and Z_b is a sentence-level normalization term that ensures the sentence probability is appropriately normalized. This is defined as
Z_b = Σ_{W ∈ Θ} P̃_b(W)    (6)
where Θ is the set of all possible sentences. Unfortunately, this normalization term is impractical to calculate for most tasks.
In a similar form to Equation 4, the PPL of bi-RNNLMs can be calculated based on sentence probability as,
PPL = exp(− (1/N) Σ_{j=1}^{J} log P_b(w_1^{L_j}))
    = exp(− (1/N) Σ_{j=1}^{J} log (1/Z_b) P̃_b(w_1^{L_j}))
    = exp( (J/N) log Z_b − (1/N) Σ_{j=1}^{J} Σ_{t=1}^{L_j} log P(w_t | w_1^{t-1}, w_{t+1}^{L_j}) )    (7)
However, Z b is often infeasible to obtain. As a result, it is not possible to compute a valid perplexity from bi-RNNLMs. Nevertheless, the average log probability of each word can be used to get a "pseudo" perplexity (PPL).
PPL_pseudo = exp(− (1/N) Σ_{j=1}^{J} Σ_{t=1}^{L_j} log P(w_t | w_1^{t-1}, w_{t+1}^{L_j}))    (8)
This is the second term of the valid PPL of bi-RNNLMs shown in Equation 7. It is a "pseudo" PPL because the normalized sentence probability P_b(W) is impossible to obtain and the unnormalized sentence probability P̃_b(W) is used instead. Hence, the "pseudo" PPL of bi-RNNLMs is not comparable with the valid PPL of uni-RNNLMs. However, the value of the "pseudo" PPL still provides information about the average word probability from bi-RNNLMs, since it is obtained from the word probabilities.
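A small Python sketch (not from the paper) makes the relationship explicit: both the valid PPL of Equation 4 and the "pseudo" PPL of Equation 8 are the exponentiated negative average log word probability; they differ only in whether the per-word probabilities come from a normalized unidirectional model or an unnormalized bidirectional one.

# Minimal sketch (illustrative only): (pseudo) perplexity from per-word
# log probabilities over a corpus of J sentences and N words in total.
import math

def perplexity(sentence_log_probs):
    """sentence_log_probs: list of lists, one log probability per word.
    With uni-RNNLM probabilities this is a valid PPL; with bi-/su-RNNLM
    probabilities (unnormalized at the sentence level) it is a pseudo PPL."""
    total = sum(lp for sent in sentence_log_probs for lp in sent)
    n_words = sum(len(sent) for sent in sentence_log_probs)
    return math.exp(-total / n_words)

corpus = [[-2.1, -0.7, -1.3], [-0.9, -2.4]]      # two toy sentences
print(round(perplexity(corpus), 3))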
In order to achieve good performance for speech recognition, [7] proposed an additional smoothing of the bi-RNNLM probability at test time. The probability of bi-RNNLMs is smoothed as,
P(w_i | w_1^{t-1}, w_{t+1}^L) = exp(α y_i) / Σ_{j=1}^{V} exp(α y_j)    (9)
where y_i is the activation before the softmax function for node i in the output layer, and α is an empirical smoothing factor, chosen as 0.7 in this paper.

The use of both preceding and following context information in bi-RNNLMs presents challenges to both model training and inference. First, N-best rescoring is normally used for speech recognition [7]; lattice rescoring is impractical for bi-RNNLMs as the computation of word probabilities requires information from the complete sentence.
Another drawback of bi-RNNLMs is the difficulty in training. The complete previous and future context information is required to predict the probability of each word. It is expensive to train bi-RNNLMs directly sentence by sentence, and difficult to parallelise the training efficiently. In [6], all sentences in the training corpus were concatenated together to form a single sequence to facilitate minibatch based training. This sequence was then "chopped" into sub-sequences of the average sentence length. Bi-RNNLMs were then trained on GPU by processing multiple sequences at the same time. This allows bi-RNNLMs to be trained efficiently. However, issues can arise from the random cutting of sentences: history and future context vectors may be reset in the middle of a sentence. In [7], the bi-RNNLMs were trained in a more consistent fashion: multiple sentences were aligned from left to right to form minibatches during bi-RNNLM training. In order to handle issues caused by variable sentence length, NULL tokens were appended to the ends of sentences to ensure that the aligned sentences had the same length. These NULL tokens were not used for parameter updates. In this paper, this approach is adopted to train bi-RNNLMs as it gave better performance.
RNNLMS WITH SUCCEEDING WORDS
As discussed above, bi-RNNLMs are slow to train and difficult to use in lattice rescoring. In order to address these issues, a novel structure, the su-RNNLM, is proposed in this paper to incorporate future context information. The model structure is illustrated in Figure 3. In the same fashion as bi-RNNLMs, the previous history w_1^{t-1} is modeled with recurrent units (e.g. LSTM, GRU). However, instead of modeling the complete future context information w_{t+1}^L using recurrent units, feedforward units are used to capture a finite number of succeeding words, w_{t+1}^{t+k}. The softmax function is again applied at the output layer to obtain the probability of the current word, P(w_t | w_1^{t-1}, w_{t+1}^{t+k}). The word embeddings in the projection layer are shared for all input words. When the succeeding words are beyond the sentence boundary, a zero vector is used as the word embedding vector. This is similar to the zero padding of feedforward NNLMs at the beginning of each sentence [13].
As the number of succeeding words is finite and fixed for each word, the succeeding words can be organized as an n-gram future context and used for minibatch mode training as in feedforward NNLMs [13]. Su-RNNLMs can then be trained efficiently in a similar fashion to uni-RNNLMs in a spliced sentence bunch mode [12].
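A minimal PyTorch-style sketch of this structure is given below; the layer sizes, the use of a GRU for the history, and the module name are illustrative assumptions rather than the authors' released model. The recurrent unit summarizes the history, while a feedforward unit consumes the k succeeding word embeddings, with zero vectors standing in for words beyond the sentence boundary.

# Minimal sketch (illustrative only, not the authors' implementation) of a
# succeeding-word RNNLM: recurrent history plus feedforward future context.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuRNNLM(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, k_future=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)     # shared projection
        self.history_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.future_ff = nn.Linear(k_future * embed_dim, hidden_dim)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, history_ids, future_ids):
        """history_ids: (batch, t-1) previous words; future_ids: (batch, k)
        succeeding words, where id 0 is assumed to be padding whose embedding
        is zeroed out (zero padding past the sentence boundary)."""
        h_emb = self.embed(history_ids)
        _, h_last = self.history_rnn(h_emb)                  # (1, batch, hidden)
        f_emb = self.embed(future_ids)
        f_emb = f_emb * (future_ids != 0).unsqueeze(-1).float()
        f_vec = torch.tanh(self.future_ff(f_emb.flatten(1)))
        combined = torch.cat([h_last.squeeze(0), f_vec], dim=-1)
        return F.log_softmax(self.out(combined), dim=-1)     # log P(w_t | ...)

model = SuRNNLM(vocab_size=1000)
log_probs = model(torch.randint(1, 1000, (2, 5)), torch.randint(0, 1000, (2, 3)))
print(log_probs.shape)                                       # (2, 1000)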
Compared with Equations 3 and 5, the probability of a word sequence w_1^L can be computed as

P_s(w_1^L) = (1/Z_s) ∏_{t=1}^{L} P(w_t | w_1^{t-1}, w_{t+1}^{t+k})    (10)
Again, the sentence level normalization term Zs is difficult to compute and only "pseudo" PPL can be obtained. The probabilities of su-RNNLMs are also very sharp, which can be seen from the "pseudo" PPLs in Table 2 in Section 6. Hence, the bi-RNNLM probability smoothing given in Equation 9 is also required for su-RNNLMs to achieve good performance at evaluation time.
LATTICE RESCORING
Lattice rescoring with feedforward NNLMs is straightforward [13] whereas approximations are required for uni-RNNLMs lattice rescoring [8,18]. As mentioned in Section 2.2, N-best rescoring has previously been used for bi-RNNLMs. It is not practical for bi-RNNLMs to be used for lattice rescoring and generation as both the complete previous and future context information are required. However, lattices are very useful in many applications, such as confidence score estimation [9], keyword search [10] and confusion network decoding [11]. In contrast, su-RNNLMs require a fixed number of succeeding words, instead of the complete future context information. From Figure 3, su-RNNLMs can be viewed as a combination of uni-RNNLMs for history information and feedforward NNLMs for future context information. Hence, lattice rescoring is feasible for su-RNNLMs by extending the lattice rescoring algorithm of uni-RNNLMs by considering additional fixed length future contexts.
Lattice rescoring of uni-RNNLMs
In this paper, the n-gram approximation [8] based approach is used for uni-RNNLMs lattice rescoring. When considering merging of two paths, if their previous n − 1 words are identical, the two paths are viewed as "equivalent" and can be merged. This is illustrated in Figure 5 for the start node of word w4. The history information from the best path is kept for the following RNNLM probability computation and the histories of all other paths are discarded. For example, the path (w0, w2, w3) is kept and the other path (w1, w2, w3) is discarded given arc w4.
There are two types of approximation involved in uni-RNNLM lattice rescoring: the merge and cache approximations. The merge approximation controls the merging of two paths. In [8], the first path reaching a node was kept and all other paths with the same n-gram history were discarded, irrespective of the associated scores. This introduces inaccuracies in the RNNLM probability calculation. The merge approximation can be improved by keeping the path with the highest accumulated score; this is the approach adopted in this work. For fast probability lookup in lattice rescoring, n-gram probabilities can be cached using the n − 1 previous words as a key. A similar approach can be used with RNNLM probabilities. In [8], RNNLM probabilities were cached based on the previous n − 1 words, which is referred to as the cache approximation. Thus a word probability obtained from the cache may be derived from another history sharing the same n − 1 previous words. This introduces another inaccuracy. In order to avoid this inaccuracy while maintaining efficiency, the cache approximation used in [8] is improved by adopting the complete history as the key for caching RNNLM probabilities. Both modifications yield small but consistent improvements over [8] on a range of tasks.
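The sketch below (Python, not the actual rescoring tool) illustrates the two approximations: paths reaching a node are merged when their last n − 1 words agree, the surviving path is the one with the highest accumulated score, and RNNLM probabilities are cached with the complete history as the key.

# Minimal sketch (illustrative only) of the merge and cache approximations.
def merge_paths(paths, n):
    """paths: list of (word_tuple, accumulated_score). Paths sharing the same
    last n-1 words are merged; the best-scoring path's full history is kept
    for subsequent RNNLM probability computation."""
    best = {}
    for words, score in paths:
        key = words[-(n - 1):]                     # n-gram merge approximation
        if key not in best or score > best[key][1]:
            best[key] = (words, score)
    return list(best.values())

_prob_cache = {}
def cached_rnnlm_prob(rnnlm_prob_fn, history, word):
    """Cache keyed by the complete history (the improved cache approximation)."""
    key = (history, word)
    if key not in _prob_cache:
        _prob_cache[key] = rnnlm_prob_fn(history, word)
    return _prob_cache[key]

paths = [(("<s>", "w0", "w2", "w3"), -3.2), (("<s>", "w1", "w2", "w3"), -4.1)]
print(merge_paths(paths, n=3))                     # only the better path survives
print(cached_rnnlm_prob(lambda h, w: 0.01, ("<s>", "w0", "w2", "w3"), "w4"))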
Lattice rescoring of su-RNNLMs
For lattice rescoring with su-RNNLMs, the n-gram approximation can be adopted and extended to support the future word context. In order to handle succeeding words correctly, paths will be merged only if the following succeeding words are identical. In this way, the path expansion is carried out in both directions. Any two paths with the same succeeding words and n − 1 previous words are merged. Figure 4 shows part of an example lattice generated by a 2-gram LM. In order to apply uni-RNNLM lattice rescoring using a 3-gram approximation, the grey shaded node in Figure 4 needs to be duplicated as word w3 has two distinct 3-gram histories, which are (w0, w2) and (w1, w2) respectively. Figure 5 shows the lattice after rescoring using a uni-RNNLM with 3-gram approximation. In order to apply su-RNNLMs for lattice rescoring, the succeeding words also need to be taken into account. Figure 6 is the expanded lattice using a su-RNNLM with 1 succeeding word. The grey shaded nodes in Figure 5 need to be expanded further as they have distinct succeeding words. The blue shaded nodes in Figure 6 are the expanded node in the resulting lattice.
Using the n-gram history approximation and given k succeeding words, the lattice expansion process is effectively an (n + k)-gram lattice expansion for uni-RNNLMs. For larger values of n and k, the resulting lattices can be very large. This can be addressed by pruning the lattice and performing an initial lattice expansion with a uni-RNNLM.
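To make the merge rule concrete, the following is a minimal sketch, not the CUED-RNNLM implementation, of how partial paths could be grouped during su-RNNLM lattice expansion: two paths are treated as equivalent only when their last n − 1 history words and their k succeeding words match, the highest-scoring path in each group is kept (the improved merge approximation), and probabilities are cached with the complete history as the key (the improved cache approximation). All function and variable names here are illustrative assumptions.

```python
def merge_key(path, n, k, succ_words):
    """Equivalence key: last n-1 history words plus the k fixed succeeding words."""
    history_key = tuple(path[-(n - 1):]) if n > 1 else ()
    future_key = tuple(succ_words[:k])
    return (history_key, future_key)

def merge_paths_at_node(incoming_paths, n, k, succ_words):
    """Group partial paths arriving at a lattice node and keep the best one per key.

    incoming_paths: list of (word_sequence, accumulated_score) tuples.
    Returns one surviving path per equivalence class; each class corresponds
    to a (possibly duplicated) node in the expanded lattice.
    """
    best = {}
    for words, score in incoming_paths:
        key = merge_key(words, n, k, succ_words)
        # Improved merge approximation: keep the highest-scoring path, not the first one.
        if key not in best or score > best[key][1]:
            best[key] = (words, score)
    return list(best.values())

# Cache keyed by the *complete* history (improved cache approximation).
_prob_cache = {}

def cached_prob(rnnlm_prob, history, succ_words, word):
    key = (tuple(history), tuple(succ_words), word)
    if key not in _prob_cache:
        _prob_cache[key] = rnnlm_prob(history, succ_words, word)
    return _prob_cache[key]
```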
LANGUAGE MODEL INTERPOLATION
For unidirectional language models, such as n-gram LMs and uni-RNNLMs, the word probabilities are normally combined using linear interpolation:

$$P_u(w_t|w_1^{t-1}) = (1 - \lambda_1)\,P_n(w_t|w_1^{t-1}) + \lambda_1\,P_r(w_t|w_1^{t-1}) \quad (11)$$

where $P_n$ and $P_r$ are the probabilities from the n-gram and uni-RNN LMs respectively, and $\lambda_1$ is the interpolation weight of the uni-RNNLM.
Fig. 6. Lattice generated by su-RNNLMs with 3-gram approximation for history context and 1 succeeding word.

However, it is not valid to directly combine uni-LMs (e.g. unidirectional n-gram LMs or RNNLMs) and bi-LMs (or su-LMs) using linear interpolation due to the sentence level normalisation term required for bi-LMs (or su-LMs) in Equation 5. As described in [7], uni-LMs can be log-linearly interpolated with bi-LMs for speech recognition using
$$P(w_t|w_1^{t-1}, w_{t+1}^{L}) = \frac{1}{Z}\, P_u(w_t|w_1^{t-1})^{(1-\lambda_2)}\, P_b(w_t|w_1^{t-1}, w_{t+1}^{L})^{\lambda_2} \quad (12)$$
where Z is the appropriate normalisation term. The normalisation term can be discarded for speech recognition as it does not affect the hypothesis ranking. $P_u$ and $P_b$ are the probabilities from the uni-LMs and bi-RNNLMs respectively, and $\lambda_2$ is the log-linear interpolation weight of the bi-RNNLMs. The issue of the normalisation term in su-RNNLMs is similar to that of bi-RNNLMs, as shown in Equation 10. Hence, log-linear interpolation can also be applied for the combination of su-RNNLMs and uni-LMs and is the approach used in this paper. By default, linear interpolation is used to combine uni-RNNLMs and n-gram LMs. A two-stage interpolation is used when including bi-RNNLMs and su-RNNLMs: the uni-RNNLMs and n-gram LMs are first interpolated using linear interpolation, and these linearly interpolated probabilities are then log-linearly interpolated with those of the bi-RNNLMs (or su-RNNLMs).
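As an illustration of the two-stage combination, the sketch below first linearly interpolates the n-gram and uni-RNNLM probabilities (Equation 11) and then log-linearly combines the result with the su-RNNLM probability (Equation 12), dropping the normalisation term Z as is done for recognition. The function signature and the example numbers are assumptions for illustration only.

```python
import math

def two_stage_score(p_ngram, p_unirnn, p_su, lambda1=0.75, lambda2=0.3):
    """Two-stage combination of LM probabilities for rescoring.

    p_ngram, p_unirnn, p_su: probabilities of the same word in the same context
    from the 4-gram LM, uni-RNNLM and su-RNNLM. Returns an *unnormalised*
    log score, which is sufficient for ranking hypotheses in speech recognition.
    """
    # Stage 1: linear interpolation of the unidirectional models (Eq. 11)
    p_uni = (1.0 - lambda1) * p_ngram + lambda1 * p_unirnn
    # Stage 2: log-linear interpolation with the su-RNNLM (Eq. 12), Z omitted
    return (1.0 - lambda2) * math.log(p_uni) + lambda2 * math.log(p_su)

# Example: combine probabilities for one word of one hypothesis
score = two_stage_score(p_ngram=0.01, p_unirnn=0.02, p_su=0.015)
```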
EXPERIMENTS
Experiments were conducted using the AMI IHM meeting corpus [19] to evaluate the speech recognition performance of various language models. The Kaldi training data configuration was used. A total of 78 hours of speech was used in acoustic model training, consisting of about 1M words of acoustic transcription. Eight meetings were excluded from the training set and used as the development and test sets. The Kaldi acoustic model training recipe [20] featuring sequence training [21] was applied for deep neural network (DNN) training. CMLLR transformed MFCC features [22] were used as the input and 4000 clustered context dependent states were used as targets. The DNN was trained with 6 hidden layers, each with 2048 hidden nodes.
The first part of the Fisher corpus, 13M words, was used as additional language model training data. A 49k word decoding vocabulary was used for all experiments. All LMs were trained on the combined (AMI+Fisher) data, 14M words in total. A 4-gram KN smoothed back-off LM without pruning was trained and used for lattice generation. GRU based recurrent units were used for all unidirectional and bidirectional RNNLMs 1 . 512 hidden nodes were used in the hidden layer. An extended version of CUED-RNNLM [23] was developed for the training of uni-RNNLMs, bi-RNNLMs and su-RNNLMs. The related code and recipe will be available online 2 . The linear interpolation weight λ1 between 4-gram LMs and uni-RNNLMs was set to 0.75 as it gave the best performance on the development data. The log-linear interpolation weight λ2 for bi-RNNLMs (or su-RNNLMs) was 0.3. The probabilities of bi-RNNLMs and su-RNNLMs were smoothed with a smoothing factor of 0.7 as suggested in [7]. The 3-gram approximation was applied for the history merging of uni-RNNLMs and su-RNNLMs during lattice rescoring and generation [8].

Table 1 shows the word error rates of the baseline system with 4-gram and uni-RNN LMs. Lattice rescoring and 100-best rescoring are applied to lattices generated by the 4-gram LM. As expected, uni-RNNLMs yield a significant performance improvement over 4-gram LMs. Lattice rescoring gives comparable performance with 100-best rescoring. Confusion network (CN) decoding can be applied to lattices generated by uni-RNNLM lattice rescoring and additional performance improvements can be achieved. However, it is difficult to apply confusion network decoding to the 100-best lists 3 .

Table 2 gives the training speed, measured in words per second (w/s), and the ("pseudo") PPLs of various RNNLMs with different amounts of future word context. When the number of succeeding words is 0, this is the baseline uni-RNNLM. When the number of succeeding words is set to ∞, a bi-RNNLM with complete future context information is used. It can be seen that su-RNNLMs give a comparable training speed to uni-RNNLMs. The additional computational load of the su-RNNLMs mainly comes from the feedforward unit for succeeding words, as shown in Figure 3. The computation in this part is much less than that of other parts such as the output layer and GRU layers. However, the training of su-RNNLMs is much faster than that of bi-RNNLMs as it is difficult to parallelise the training of bi-RNNLMs efficiently [7]. It is worth mentioning again that the PPLs of uni-RNNLMs can not be compared directly with the "pseudo" PPLs of bi-RNNLMs and su-RNNLMs. But both PPLs and "pseudo" PPLs reflect the average log probability of each word. From Table 2, with an increasing number of succeeding words, the "pseudo" PPL of the su-RNNLMs keeps decreasing, reaching a comparable value to bi-RNNLMs.

Table 2. Train speed and (Pseudo) Perplexity of uni-, bi-, and su-RNNLMs. 0 succeeding words is for uni-RNNLMs and ∞ for bi-RNNLMs.

Table 3 gives the WER results of 100-best rescoring with various language models. For bi-RNNLMs (or su-RNNLMs), it is not possible to use linear interpolation. Thus a two-stage approach is adopted as described in Section 5. This results in slight differences, in the second decimal place, between the uni-RNNLM case and the 0 future context su-RNNLM. Increasing the number of succeeding words consistently reduces the WER. With 1 succeeding word, the WERs were reduced by 0.2% absolute. Su-RNNLMs with more than 2 succeeding words gave about 0.5% absolute WER reduction.
Bi-RNNLMs (shown in the bottom line of Table 3) outperform su-RNNLMs by 0.1% to 0.2%, as they are able to incorporate the complete future context information with a recurrent connection. Table 4 shows the WERs of lattice rescoring using su-RNNLMs. The lattice rescoring algorithm described in Section 4 was applied. Su-RNNLMs with 1 and 3 succeeding words were used for lattice rescoring. From Table 4, su-RNNLMs with 1 succeeding word give a 0.2% WER reduction and using 3 succeeding words gives about a 0.5% WER reduction. These results are consistent with the 100-best rescoring results in Table 3. Confusion network decoding can be applied on the rescored lattices and additional 0.3-0.4% WER improvements are obtained on the dev and eval test sets.

1 GRU and LSTM gave similar performance for this task, while GRU LMs are faster for training and evaluation.
2 http://mi.eng.cam.ac.uk/projects/cued-rnnlm/
3 An N-best list can be converted to a lattice and CN decoding can then be applied, but this requires a much larger N-best list, such as the 10K used in [8].
CONCLUSIONS
In this paper, the use of future context information in neural network language models has been explored. A novel model structure is proposed to address the issues associated with bi-RNNLMs, such as slow training speed and difficulties in lattice rescoring. Instead of using a recurrent unit to capture the complete future information, a feedforward unit was used to model a finite number of succeeding words. The existing training and lattice rescoring algorithms for uni-RNNLMs are extended for the proposed su-RNNLMs. Experimental results show that su-RNNLMs achieve slightly worse performance than bi-RNNLMs, but with much faster training speed. Furthermore, additional performance improvements can be obtained from lattice rescoring and subsequent confusion network decoding. Future work will examine improved pruning schemes to address the lattice expansion issues associated with larger future contexts.
Fig. 1. An example unidirectional RNNLM.
Fig. 2. An example bidirectional RNNLM.
Fig. 3. An example su-RNNLM with 2 succeeding words.
Fig. 4. Lattice generated by 2-gram LM.
Fig. 5. Lattice generated by uni-RNNLMs with 3-gram approximation.
This research was funded under the ALTA Institute, University of Cambridge. Thanks to Cambridge English, University of Cambridge, for supporting this research. Xunying Liu is funded by MSRA grant no. 6904412 and CUHK grant no. 4055065.
Table 1. Baseline WER results on AMI corpus

LM        rescore    dev (Vit / CN)   eval (Vit / CN)
ng4       -          23.8 / 23.5      24.2 / 23.9
+uni-rnn  100-best   21.7 / -         22.1 / -
+uni-rnn  lattice    21.7 / 21.5      21.9 / 21.7
Table 3. WERs of uni-, bi-, and su-RNNLMs with 100-best rescoring. 0 succeeding words is for uni-RNNLMs and ∞ for bi-RNNLMs.

LM        #succ words   dev    eval
ng4       -             23.8   24.2
+uni-rnn  -             21.7   22.1
+su-rnn   0             21.7   22.1
          1             21.5   21.8
          2             21.3   21.7
          3             21.3   21.6
          4             21.4   21.6
          5             21.3   21.6
          6             21.3   21.6
          7             21.4   21.6
          ∞             21.2   21.4
Table 4. WERs of uni-RNNLMs and su-RNNLMs with lattice rescoring

LM        #succ words   dev (Vit / CN)   eval (Vit / CN)
ng4       -             23.8 / 23.5      24.2 / 23.9
+uni-rnn  -             21.7 / 21.5      21.9 / 21.7
+su-rnn   1             21.6 / 21.3      21.6 / 21.5
          3             21.3 / 21.0      21.4 / 21.1
REFERENCES

[1] Stanley Chen and Joshua Goodman, "An empirical study of smoothing techniques for language modeling," Computer Speech & Language, vol. 13, no. 4, pp. 359-393, 1999.
[2] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin, "A neural probabilistic language model," Journal of Machine Learning Research, vol. 3, pp. 1137-1155, 2003.
[3] Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernockỳ, and Sanjeev Khudanpur, "Recurrent neural network based language model," in Proc. ISCA INTERSPEECH, 2010.
[4] Wayne Xiong, Jasha Droppo, Xuedong Huang, Frank Seide, Mike Seltzer, Andreas Stolcke, Dong Yu, and Geoffrey Zweig, "Achieving human parity in conversational speech recognition," arXiv preprint arXiv:1610.05256, 2016.
[5] Yangyang Shi, Martha Larson, Pascal Wiggers, and Catholijn Jonker, "Exploiting the succeeding words in recurrent neural network language models," in Proc. ISCA INTERSPEECH, 2013.
[6] Ebru Arisoy, Abhinav Sethy, Bhuvana Ramabhadran, and Stanley Chen, "Bidirectional recurrent neural network language models for automatic speech recognition," in Proc. ICASSP. IEEE, 2015, pp. 5421-5425.
[7] Xie Chen, Anton Ragni, Xunying Liu, and Mark Gales, "Investigating bidirectional recurrent neural network language models for speech recognition," in Proc. ISCA INTERSPEECH, 2017.
[8] Xunying Liu, Xie Chen, Yongqiang Wang, Mark Gales, and Phil Woodland, "Two efficient lattice rescoring methods using recurrent neural network language models," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 8, pp. 1438-1449, 2016.
[9] Frank Wessel, Ralf Schluter, Klaus Macherey, and Hermann Ney, "Confidence measures for large vocabulary continuous speech recognition," IEEE Transactions on Speech and Audio Processing, vol. 9, no. 3, pp. 288-298, 2001.
[10] Xie Chen, Anton Ragni, Jake Vasilakes, Xunying Liu, Kate Knill, and Mark Gales, "Recurrent neural network language models for keyword search," in Proc. ICASSP. IEEE, 2017, pp. 5775-5779.
[11] Lidia Mangu, Eric Brill, and Andreas Stolcke, "Finding consensus in speech recognition: word error minimization and other applications of confusion networks," Computer Speech & Language, vol. 14, no. 4, pp. 373-400, 2000.
[12] Xie Chen, Xunying Liu, Yongqiang Wang, Mark Gales, and Phil Woodland, "Efficient training and evaluation of recurrent neural network language models for automatic speech recognition," IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2016.
[13] Holger Schwenk, "Continuous space language models," Computer Speech & Language, vol. 21, no. 3, pp. 492-518, 2007.
[14] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," arXiv preprint arXiv:1412.3555, 2014.
[15] Sepp Hochreiter and Jürgen Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[16] Junho Park, Xunying Liu, Mark Gales, and Phil Woodland, "Improved neural network based language modelling and adaptation," in Proc. ISCA INTERSPEECH, 2010.
[17] Frederick Jelinek, "The dawn of statistical ASR and MT," Computational Linguistics, vol. 35, no. 4, pp. 483-494, 2009.
[18] Martin Sundermeyer, Hermann Ney, and Ralf Schluter, "From feedforward to recurrent LSTM neural networks for language modeling," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 23, no. 3, pp. 517-529, 2015.
[19] Jean Carletta et al., "The AMI meeting corpus: A pre-announcement," in Machine Learning for Multimodal Interaction, pp. 28-39. Springer, 2006.
[20] Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al., "The Kaldi speech recognition toolkit," in ASRU, IEEE Workshop on, 2011.
[21] Karel Veselỳ, Arnab Ghoshal, Lukás Burget, and Daniel Povey, "Sequence-discriminative training of deep neural networks," in Proc. ISCA INTERSPEECH, 2013.
[22] Mark Gales, "Maximum likelihood linear transformations for HMM-based speech recognition," Computer Speech & Language, vol. 12, no. 2, pp. 75-98, 1998.
[23] Xie Chen, Xunying Liu, Mark Gales, and Phil Woodland, "CUED-RNNLM: an open-source toolkit for efficient training and evaluation of recurrent neural network language models," in Proc. ICASSP. IEEE, 2015.
| [] |
[
"Learning to Recover from Multi-Modality Errors for Non-Autoregressive Neural Machine Translation",
"Learning to Recover from Multi-Modality Errors for Non-Autoregressive Neural Machine Translation"
] | [
"Qiu Ran \nPattern Recognition Center\nTencent Inc\nWeChat AIChina\n",
"Yankai Lin yankailin@tencent.com \nPattern Recognition Center\nTencent Inc\nWeChat AIChina\n",
"Peng Li \nPattern Recognition Center\nTencent Inc\nWeChat AIChina\n",
"Jie Zhou \nPattern Recognition Center\nTencent Inc\nWeChat AIChina\n"
] | [
"Pattern Recognition Center\nTencent Inc\nWeChat AIChina",
"Pattern Recognition Center\nTencent Inc\nWeChat AIChina",
"Pattern Recognition Center\nTencent Inc\nWeChat AIChina",
"Pattern Recognition Center\nTencent Inc\nWeChat AIChina"
] | [
"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics"
] | Non-autoregressive neural machine translation (NAT) predicts the entire target sequence simultaneously and significantly accelerates inference process. However, NAT discards the dependency information in a sentence, and thus inevitably suffers from the multi-modality problem: the target tokens may be provided by different possible translations, often causing token repetitions or missing. To alleviate this problem, we propose a novel semiautoregressive model RecoverSAT in this work, which generates a translation as a sequence of segments. The segments are generated simultaneously while each segment is predicted token-by-token. By dynamically determining segment length and deleting repetitive segments, RecoverSAT is capable of recovering from repetitive and missing token errors. Experimental results on three widelyused benchmark datasets show that our proposed model achieves more than 4× speedup while maintaining comparable performance compared with the corresponding autoregressive model. | 10.18653/v1/2020.acl-main.277 | [
"https://www.aclweb.org/anthology/2020.acl-main.277.pdf"
] | 219,559,126 | 2006.05165 | ecec341773d22fbef77b07260345badf853a667e |
Learning to Recover from Multi-Modality Errors for Non-Autoregressive Neural Machine Translation
Association for Computational LinguisticsCopyright Association for Computational LinguisticsJuly 5 -10, 2020. 2020
Qiu Ran
Pattern Recognition Center
Tencent Inc
WeChat AIChina
Yankai Lin yankailin@tencent.com
Pattern Recognition Center
Tencent Inc
WeChat AIChina
Peng Li
Pattern Recognition Center
Tencent Inc
WeChat AIChina
Jie Zhou
Pattern Recognition Center
Tencent Inc
WeChat AIChina
Learning to Recover from Multi-Modality Errors for Non-Autoregressive Neural Machine Translation
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
the 58th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational LinguisticsJuly 5 -10, 2020. 20203059
Non-autoregressive neural machine translation (NAT) predicts the entire target sequence simultaneously and significantly accelerates inference process. However, NAT discards the dependency information in a sentence, and thus inevitably suffers from the multi-modality problem: the target tokens may be provided by different possible translations, often causing token repetitions or missing. To alleviate this problem, we propose a novel semiautoregressive model RecoverSAT in this work, which generates a translation as a sequence of segments. The segments are generated simultaneously while each segment is predicted token-by-token. By dynamically determining segment length and deleting repetitive segments, RecoverSAT is capable of recovering from repetitive and missing token errors. Experimental results on three widelyused benchmark datasets show that our proposed model achieves more than 4× speedup while maintaining comparable performance compared with the corresponding autoregressive model.
Introduction
Although neural machine translation (NMT) has achieved state-of-the-art performance in recent years (Cho et al., 2014;Bahdanau et al., 2015;Vaswani et al., 2017), most NMT models still suffer from the slow decoding speed problem due to their autoregressive property: the generation of a target token depends on all the previously generated target tokens, making the decoding process intrinsically nonparallelizable.
Recently, non-autoregressive neural machine translation (NAT) models (Gu et al., 2018; Guo et al., 2019a; Wei et al., 2019) have been investigated to mitigate the slow decoding speed problem by generating all target tokens independently in parallel, speeding up the decoding process significantly. Unfortunately, these models suffer from the multi-modality problem (Gu et al., 2018), resulting in inferior translation quality compared with autoregressive NMT.
To be specific, a source sentence may have multiple feasible translations, and each target token may be generated with respect to different feasible translations since NAT models discard the dependency among target tokens. This generally manifests as repetitive or missing tokens in the translations. Table 1 shows an example. The German phrase "viele Farmer" can be translated as either "lots of farmers" or "a lot of farmers". In the first translation (Trans. 1), "lots of" are translated w.r.t. "lots of farmers" while "of farmers" are translated w.r.t. "a lot of farmers" such that two "of" are generated. Similarly, "of" is missing in the second translation (Trans. 2). Intuitively, the multi-modality problem has a significant negative effect on the translation quality of NAT. Intensive efforts have been devoted to alleviate the above problem, which can be roughly divided into two lines. The first line of work leverages the iterative decoding framework to break the independence assumption, which first generates an initial translation and then refines the translation
iteratively by taking both the source sentence and the translation of the last iteration as input (Lee et al., 2018; Ghazvininejad et al., 2019). Nevertheless, it requires refining the translation multiple times in order to achieve better translation quality, which hurts decoding speed significantly. The other line of work tries to improve the vanilla NAT model to better capture target-side dependency by leveraging extra autoregressive layers in the decoder (Shao et al., 2019a; Wang et al., 2018), introducing latent variables and/or more powerful probabilistic frameworks to model more complex distributions (Kaiser et al., 2018; Akoury et al., 2019; Shu et al., 2019; Ma et al., 2019), guiding the training process with an autoregressive model (Wei et al., 2019), etc. However, these models cannot alter a target token once it has been generated, which means these models are not able to recover from an error caused by the multi-modality problem.
To alleviate the multi-modality problem while maintaining a reasonable decoding speedup, we propose a novel semi-autoregressive model named RecoverSAT in this work. RecoverSAT features in three aspects: (1) To improve decoding speed, we assume that a translation can be divided into several segments which can be generated simultaneously.
(2) To better capture target-side dependency, the tokens inside a segment are autoregressively generated conditioned not only on the previously generated tokens in this segment but also on those in other segments. On one hand, we observe that repetitive tokens are more likely to occur within a short context. Therefore, autoregressively generating a segment is beneficial for reducing repetitive tokens. On the other hand, by conditioning on previously generated tokens in other segments, the model is capable of guessing what feasible translation candidates have been chosen by each segment and adapts accordingly, e.g., recovering from missing token errors. As a result, our model captures more target-side dependency such that the multi-modality problem can be alleviated naturally. (3) To make the model capable of recovering from repetitive token errors, we introduce a segment deletion mechanism into our model. Informally speaking, our model will mark a segment to be deleted once it finds the content has been translated in other segments.
We conduct experiments on three benchmark datasets for machine translation to evaluate the proposed method. The experimental results show that RecoverSAT is able to decode over 4× faster than the autoregressive counterpart while maintaining comparable performance. The source code of this work is released on https://github.com/ranqiu92/RecoverSAT.
Background
Autoregressive Neural Machine Translation
Autoregressive neural machine translation (AT) generates the translation token-by-token conditioned on the translation history. Denoting a source sentence as $x = \{x_i\}_{i=1}^{T'}$ and a target sentence as $y = \{y_j\}_{j=1}^{T}$, AT models the joint probability as:

$$P(y|x) = \prod_{t=1}^{T} P(y_t|y_{<t}, x). \quad (1)$$

where $y_{<t}$ denotes the generated tokens before $y_t$.
During decoding, the translation history dependency makes the AT model predict each token after all previous tokens have been generated, which makes the decoding process time-consuming.
Non-Autoregressive Neural Machine Translation
Non-autoregressive neural machine translation (NAT) (Gu et al., 2018) aims to accelerate the decoding process, which discards the dependency of translation history and models P (y|x) as a product of the conditionally independent probability of each token:
$$P(y|x) = \prod_{t=1}^{T} P(y_t|x). \quad (2)$$
The conditional independence enables the NAT models to generate all target tokens in parallel. However, independently predicting all target tokens is challenging as natural language often exhibits strong correlation across context. Since the model knows little information about surrounding target tokens, it may consider different possible translations when predicting different target tokens. The problem is known as the multi-modality problem (Gu et al., 2018) and significantly degrades the performance of NAT models.
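As a toy illustration of how this independence leads to the multi-modality problem, the snippet below applies an independent per-position argmax to made-up marginal distributions induced by the two feasible translations of Table 1; the mixed choices repeat "of", mirroring Trans. 1. The probabilities are invented purely for illustration and are not taken from any trained model.

```python
def nat_argmax(position_distributions):
    """Pick the most probable token at every position independently, as vanilla NAT does."""
    return [max(dist, key=dist.get) for dist in position_distributions]

# Hypothetical per-position marginals induced by two feasible translations from Table 1:
# "there are lots of farmers ..." vs. "there are a lot of ..." (truncated to 5 positions).
marginals = [
    {"there": 1.0},
    {"are": 1.0},
    {"lots": 0.6, "a": 0.4},
    {"of": 0.6, "lot": 0.4},
    {"of": 0.7, "farmers": 0.3},
]
print(nat_argmax(marginals))  # ['there', 'are', 'lots', 'of', 'of'] -- "of" is repeated
```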
Approach
Overview
RecoverSAT extends the original Transformer (Vaswani et al., 2017) to enable the decoder to perform generation autoregressively locally and non-autoregressively globally. An overview of the architecture of our RecoverSAT model is shown in Figure 1. As illustrated in the figure, RecoverSAT simultaneously predicts all segments "there are EOS", "lots of farmers EOS", "a lot DEL" and "doing this today EOS". At each time step, it generates a token for each incomplete segment. The special token DEL denotes that the segment should be deleted and EOS denotes the end of a segment. Combining all the segments, we obtain the final translation "there are lots of farmers doing this today".
Formally, we assume a translation y is generated as K segments $S^1, S^2, \cdots, S^K$, where $S^i$ is a subsequence of the translation 1 . For simplicity of description, we assume that all the segments have the same length. RecoverSAT predicts a token for each segment conditioned on all previously generated tokens at each generation step, which can be formulated as:
$$P(y|x) = \prod_{t=1}^{L} \prod_{i=1}^{K} P(S^i_t|S^1_{<t} \cdots S^K_{<t}; x), \quad (3)$$

where $S^i_t$ denotes the t-th token in the i-th segment, $S^i_{<t} = \{S^i_1, \cdots, S^i_{t-1}\}$ denotes the translation history in the i-th segment, and L is the segment length.
Here, two natural problems arise for the decoding process:
• How to determine the length of a segment?
• How to decide a segment should be deleted?
We address the two problems in a uniform way in this work. Suppose the original token vocabulary is V; we extend it with two extra tokens EOS and DEL. Then for the segment $S^i$, the most probable token $\hat{S}^i_t$ at time step t:

$$\hat{S}^i_t = \mathop{\arg\max}_{S^i_t \in V \cup \{\text{EOS}, \text{DEL}\}} P(S^i_t|S^1_{<t} \cdots S^K_{<t}; x) \quad (4)$$
has three possibilities:
(1) $\hat{S}^i_t \in V$: the segment $S^i$ is incomplete and the decoding process for it should continue;

(2) $\hat{S}^i_t = \text{EOS}$: the segment $S^i$ is complete and the decoding process for it should terminate;

(3) $\hat{S}^i_t = \text{DEL}$: the segment $S^i$ is repetitive and should be deleted. Accordingly, the decoding process for it should terminate.
The entire decoding process terminates when all the segments meet EOS/DEL or reach the maximum token number. It should be noticed that we do not explicitly delete a segment when DEL is encountered but do it via post-processing. In other words, the model is trained to ignore the segment to be deleted implicitly.
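The overall decoding procedure can be summarised by the sketch below, an illustrative re-implementation of this termination logic rather than the released code: one token is predicted for every unfinished segment at each step, a segment stops when it emits EOS or DEL, and segments ending in DEL are dropped during post-processing. The helper predict_next is an assumed stand-in for the decoder's argmax in Equation 4.

```python
def recoversat_decode(predict_next, K, max_len, EOS="EOS", DEL="DEL"):
    """Greedy RecoverSAT decoding sketch.

    predict_next(segments) is assumed to return, for the current partial
    segments, the argmax token of every segment, conditioned on the source
    sentence and all previously generated tokens (Eq. 4).
    """
    segments = [[] for _ in range(K)]
    finished = [False] * K
    for _ in range(max_len):
        tokens = predict_next(segments)      # one token per segment at this step
        for i, tok in enumerate(tokens):
            if finished[i]:
                continue
            segments[i].append(tok)
            if tok in (EOS, DEL):
                finished[i] = True           # this segment terminates
        if all(finished):
            break
    # Post-processing: drop segments marked DEL, strip EOS, concatenate in order
    output = []
    for seg in segments:
        if seg and seg[-1] == DEL:
            continue
        output.extend(t for t in seg if t != EOS)
    return output
```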
Learning to Recover from Errors
As there is little target-side information available in the early stage of the decoding process, the errors caused by the multi-modality problem are inevitable. In this work, instead of reducing such errors directly, we propose two training mechanisms to teach our RecoverSAT model to recover from errors: (1) Dynamic Termination Mechanism: learning to determine segment length according to target-side context; (2) Segment Deletion Mechanism: learning to delete repetitive segments.
Dynamic Termination Mechanism
As shown in Section 3.1, instead of pre-specifying the lengths of segments, we let the model determine the lengths by emitting the EOS token. This strategy helps our model recover from multi-modality related errors in two ways:
1. The choice of the first few tokens is more flexible. Taking Figure 1 as an example, if the decoder decides the first token of the second segment is "of" instead of "lots" (i.e., "lots" is not generated in the second segment), it only needs to generate "lots" before "EOS" in the first segment in order to recover from missing token errors. In contrast, if the decoder decides the first token is "are", it can avoid repetitive token error by not generating "are" in the first segment;
2. As shown in Eq. 3, a token is generated conditioned on all the previously generated tokens in all the segments. Therefore, the decoder has richer target-side information to detect and recover from such errors.
However, it is non-trivial to train the model to learn such behaviour while maintaining a reasonable speedup. On one hand, as the decoding time of our RecoverSAT model is proportional to the maximum length of the segments, we should divide the target sentences of training instances into equal-length segments to encourage the model to generate segments with identical length. On the other hand, the model should be exposed to the multi-modality related errors to enhance its ability of recovering from such errors, which suggests that the target sentences of training instances should be divided randomly to simulate these errors.
To alleviate the problem, we propose a mixed annealing dividing strategy. To be specific, we randomly decide whether to divide a target sentence equally or randomly at each training step and gradually anneal to the equally-dividing method at the end of training. Formally, given the target sentence y and the segment number K, we define the segment dividing index set r as follows:

$$s \sim \text{Bernoulli}(p), \quad (5)$$
$$r = \begin{cases} \text{EQUAL}(T, K-1) & s = 0 \\ \text{RAND}(T, K-1) & s = 1 \end{cases} \quad (6)$$

where Bernoulli(p) is the Bernoulli distribution with parameter p, $\text{EQUAL}(n, m) = \{\frac{n}{m+1}, \frac{2n}{m+1}, \cdots, \frac{mn}{m+1}\}$, and RAND(n, m) samples m non-duplicate indices from [1, n]. A larger value of p leads to better error recovering ability while a smaller one encourages the model to generate segments with similar lengths (in other words, better speedup). To balance the two aspects, we gradually anneal p from 1 to 0 in the training process, which achieves better performance (Section 4.5).
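The dividing strategy of Equations 5 and 6 could be sketched as follows; the exact cut-point computation and the linear annealing schedule for p are plausible readings of the description above, not the authors' implementation.

```python
import random

def equal_indices(T, m):
    """EQUAL(T, m): m evenly spaced cut points for a target sentence of length T."""
    return [i * T // (m + 1) for i in range(1, m + 1)]

def random_indices(T, m):
    """RAND(T, m): m distinct cut points sampled from [1, T-1]."""
    return sorted(random.sample(range(1, T), m))

def divide_target(target_tokens, K, p):
    """Divide a target sentence into K segments following Eqs. 5-6 (assumes len(target) > K)."""
    T = len(target_tokens)
    s = 1 if random.random() < p else 0                       # s ~ Bernoulli(p)
    cuts = random_indices(T, K - 1) if s == 1 else equal_indices(T, K - 1)
    bounds = [0] + cuts + [T]
    return [target_tokens[a:b] for a, b in zip(bounds, bounds[1:])]

def annealed_p(step, total_steps):
    """One possible schedule for annealing p linearly from 1 to 0 over training."""
    return max(0.0, 1.0 - step / total_steps)
```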
Segment Deletion Mechanism
Although the dynamic termination mechanism makes the model capable of recovering from missing token errors and reducing repetitive tokens, the model still can not recover from errors where token repetition errors have already occurred. We find the major errors of our model occur when generating the first token of each segment since it cannot see any history and future. In this situation, two repetitive segments will be generated. To alleviate this problem, we propose a segment-wise deletion strategy, which uses a special token DEL to indicate a segment is repetitive and should be deleted 2 .
A straightforward way to train the model to learn to delete a segment is to inject pseudo repetitive segments into the training data. The following is an example: given the target sentence "there are lots of farmers doing this today", we first divide it into 3 segments "there are", "lots of farmers" and "doing this today". Then we copy the first two tokens of the second segment and append the special token DEL to the end to construct a pseudo repetitive segment "lots of DEL". Finally, we insert the repetitive segment to the right of the chosen segment, resulting in 4 segments. Formally, given the expected segment number K and the target sentence y, we first divide y into K − 1 segments $S^1, S^2, \cdots, S^{K-1}$ and then build a pseudo repetitive segment $S^i_{rep}$ by copying the first m tokens of a randomly chosen segment $S^i$ and appending DEL to the end, where m is uniformly sampled from $[1, |S^i|]$. Finally, $S^i_{rep}$ is inserted at the right side of $S^i$. The final K segments are $S^1, S^2, \cdots, S^i, S^i_{rep}, S^{i+1}, \cdots, S^{K-1}$. However, injecting such pseudo repetitive segments into all training instances would mislead the model that generating then deleting a repetitive segment is a must-have behaviour, which is not desired. Therefore, we inject a pseudo repetitive segment into a training instance with probability q in this work.
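Following this description, a pseudo repetitive segment could be constructed roughly as in the sketch below; sampling details that the text leaves open (e.g., how the segment index i is chosen) are assumptions.

```python
import random

def inject_pseudo_repetition(segments, q, DEL="DEL"):
    """With probability q, insert a pseudo repetitive segment (Section 3.2.2).

    segments: list of K-1 token lists obtained by dividing the target sentence.
    Returns K segments when a pseudo segment is injected, otherwise the input.
    """
    if random.random() >= q:
        return segments
    i = random.randrange(len(segments))              # choose a segment S_i (assumed uniform)
    m = random.randint(1, max(1, len(segments[i])))  # copy its first m tokens
    pseudo = segments[i][:m] + [DEL]                 # append DEL so the model learns to delete it
    return segments[:i + 1] + [pseudo] + segments[i + 1:]

# Example with the sentence used in the paper
segs = [["there", "are"], ["lots", "of", "farmers"], ["doing", "this", "today"]]
print(inject_pseudo_repetition(segs, q=1.0))
```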
Experiments
Datasets
We conduct experiments on three widely-used machine translation datasets: IWSLT16 En-De (196k pairs), WMT14 En-De (4.5M pairs) and WMT16 En-Ro (610k pairs). For fair comparison, we use the preprocessed datasets in Lee et al. (2018), of which sentences are tokenized and segmented into subwords using byte-pair encoding (BPE) (Sennrich et al., 2016) to restrict the vocabulary size. We use a shared vocabulary of 40k subwords for both source and target languages. For the WMT14 En-De dataset, we use newstest-2013 and newstest-2014 as validation and test sets respectively. For the WMT16 En-Ro dataset, we employ newsdev-2016 and newstest-2016 as validation and test sets respectively. For the IWSLT16 En-De dataset, we use test2013 as the validation set.
Experimental Settings
For model hyperparameters, we follow most of the settings in (Gu et al., 2018;Lee et al., 2018;Wei et al., 2019). For the IWSLT16 En-De dataset, we use a small Transformer model (d model = 278, d hidden = 507, n layer = 5, n head = 2, p dropout = 0.1). For the WMT14 En-De and WMT16 En-Ro datasets, we use a larger Transformer model (d model = 512, d hidden = 512, n layer = 6, n head = 8, p dropout = 0.1). We linearly anneal the learning rate from 3 × 10 −4 to 10 −5 as in Lee et al. (2018) for the IWSLT16 En-De dataset, while employing the warm-up learning rate schedule (Vaswani et al., 2017) with t warmup = 4000 for the WMT14 En-De and WMT16 En-Ro datasets. We also use label smoothing of value ls = 0.15 for all datasets. We utilize the sequence-level distillation (Kim and Rush, 2016), which replaces the target sentences in the training dataset with sentences generated by an autoregressive model, and set the beam size of the technique to 4. We use the encoder of the corresponding autoregressive model to initialize the encoder of RecoverSAT, and share the parameters of source and target token embedding layers and the pre-softmax linear layer. We measure the speedup of model inference in each task on a single NVIDIA P40 GPU with the batch size 1.
Baselines
We use the Transformer (Vaswani et al., 2017) as our AT baseline and fifteen recent strong NAT models as NAT baselines, including: (1) fertility-based model: NAT-FT (Gu et al., 2018); (2) iterative decoding based models: NAT-IR (Lee et al., 2018) and CMLM (Ghazvininejad et al., 2019); (3) models learning from AT teachers: imitate-NAT (Wei et al., 2019), NART (Li et al., 2019) and FCL-NAT (Guo et al., 2019b); (4) latent variable framework based models: LV NAR (Shu et al., 2019) and FlowSeq (Ma et al., 2019); (5) regularization framework based model: NAT-REG (Wang et al., 2019); (6) models introducing extra target-side dependencies: SAT (Wang et al., 2018), SynST (Akoury et al., 2019), NAT-FS (Shao et al., 2019a), PNAT (Bao et al., 2019), NART-DCRF (Sun et al., 2019) and ReorderNAT (Ran et al., 2019).
Overall Results
The performance of our RecoverSAT model and the baselines is shown in Table 2. Due to the space limitation, we only show the results corresponding to the settings of the best BLEU scores for the baselines 3 . From Table 2, we can observe that:
(1) Our RecoverSAT model achieves comparable performance with the AT baseline (Transformer) while keeping significant speedup. When K = 2, the BLEU score gap is moderate (from 0.06 to 0.4, even better than Transformer on the WMT16 En→Ro and Ro→En tasks) and the speedup is about 2×. When K = 10, the BLEU scores drop less than 5% relatively, and the speedup is considerably good (over 4×).
(2) Our RecoverSAT model outperforms all the strong NAT baselines except CMLM (on the WMT16 En→Ro and Ro→En tasks). However, the performance gap is negligible (0.16 and 0.12 respectively), and CMLM is a multi-step NAT method which is significantly slower than our model.
(3) As K grows, the BLEU scores drop moderately and the speedup grows significantly, indicating that our RecoverSAT model has a good generalizability. For example, the BLEU scores drop less than 0.45 when K grows from 2 to 5, and drop no more than 0.90 except on the WMT14 De→En task when K further grows to 10. Meanwhile, the speedup for K = 10 is larger than 4×, which is considerably good.
(4) There are only 7 baselines (SynST, imitate-NAT+LPD, LV NAR, NART+LPD, FCL-NAT+NPD, ReorderNAT and NART-DCRF+LPD) achieving better speedup than our RecoverSAT model when K = 10. However, only ReorderNAT and NART-DCRF+LPD achieve comparable BLEU scores with our model. The improvements of both ReorderNAT and NART-DCRF are complementary to our method; it is an interesting direction for future work to join these methods together.
Effect of Dynamic Termination Mechanism
As discussed in Section 3.2.1, the dynamic termination mechanism is used to train our RecoverSAT model to learn to determine segment length dynamically conditioned on target-side context such that it is recoverable from multi-modality related errors.
In this section, we investigate the effect of this mechanism and the results are shown in Table 3.
As multi-modality related errors generally manifest as repetitive or missing tokens in the translation, we propose two quantitative metrics "Rep" and "Mis" to measure these two phenomena respectively. "Rep" is defined as the relative increment of the repetitive token ratio w.r.t. a reference AT model. And "Mis" is defined as the relative increment of the missing token ratio given the references w.r.t. a reference AT model. Formally, given the translations $\hat{Y} = \{\hat{y}^1 \cdots \hat{y}^k \cdots\}$ produced by the model to be evaluated and the translations $\hat{Y}_{auto} = \{\hat{y}^1_{auto} \cdots \hat{y}^k_{auto} \cdots\}$ produced by the reference AT model, "Rep" is defined as

$$\text{Rep} = \frac{r(\hat{Y}) - r(\hat{Y}_{auto})}{r(\hat{Y}_{auto})}, \quad (7)$$
$$r(Y) = \frac{\sum_k \sum_{j=2}^{|y^k|} \mathbb{1}\left(\sum_{i=1}^{9} \mathbb{1}(y^k_j = y^k_{j-i}) \ge 1\right)}{\sum_k |y^k|}, \quad (8)$$

where $\mathbb{1}(cond) = 1$ if the condition $cond$ holds and 0 otherwise, and $y^k_j$ is the j-th token of the translation sentence $y^k$.
Table 3: Effect of the dynamic termination mechanism. The results are evaluated on the IWSLT16 En-De validation set. p is the parameter of the Bernoulli distribution in Eq. 5. "Rep" and "Mis" measure the relative increment (%) of repetitive and missing token ratios (see Section 4.5), the smaller the better. "Step" denotes the average number of decoding steps. And "1→0" denotes annealing p from 1 to 0 linearly.

Given $\hat{Y}$, $\hat{Y}_{auto}$ and references $\bar{Y} = \{\bar{y}^1 \cdots \bar{y}^k \cdots\}$, "Mis" is defined as

$$\text{Mis} = \frac{m(\hat{Y}, \bar{Y}) - m(\hat{Y}_{auto}, \bar{Y})}{m(\hat{Y}_{auto}, \bar{Y})}, \quad (9)$$

where $m(\cdot, \cdot)$ computes the missing token ratio and is defined as follows:

$$c_w(y^k, \bar{y}^k) = \max\left(c(\bar{y}^k, w) - c(y^k, w),\, 0\right),$$
$$m(Y, \bar{Y}) = \frac{\sum_k \sum_{w \in \bar{y}^k} c_w(y^k, \bar{y}^k)}{\sum_k |\bar{y}^k|}, \quad (10)$$

where $c(y, w)$ is the occurrence number of a token $w$ in the sentence $y$.
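To make the two metrics concrete, here is a rough re-implementation of r(·) (Eq. 8) and m(·, ·) (Eq. 10) together with the relative increments of Eqs. 7 and 9; it follows the formulas above but has not been checked against the authors' evaluation scripts, and the treatment of duplicated reference tokens is an assumption.

```python
def repetition_ratio(translations, window=9):
    """r(Y): fraction of tokens repeating a token within the previous `window` positions (Eq. 8)."""
    repeats, total = 0, 0
    for y in translations:                       # y is a list of tokens
        total += len(y)
        for j in range(1, len(y)):
            if any(y[j] == y[j - i] for i in range(1, window + 1) if j - i >= 0):
                repeats += 1
    return repeats / max(total, 1)

def missing_ratio(translations, references):
    """m(Y, Y_ref): fraction of reference tokens missing from the translations (Eq. 10)."""
    missing, total = 0, 0
    for y, ref in zip(translations, references):
        total += len(ref)
        for w in set(ref):                       # count each token type once (assumption)
            missing += max(ref.count(w) - y.count(w), 0)
    return missing / max(total, 1)

def relative_increment(value, value_auto):
    """Rep / Mis: relative increment w.r.t. the reference AT model (Eqs. 7 and 9)."""
    return (value - value_auto) / value_auto
```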
From Table 3, we can observe that: (1) By using the dynamic termination mechanism (p = 0.5, 1.0, 1 → 0, where p is the parameter of Bernoulli distribution (Eq. 5)), both repetitive and missing token errors are reduced ("Rep" & "Mis"), and the BLEU scores are increased, indicating the effectiveness of the mechanism; (2) As p grows larger, the average number of decoding steps ("Step") increases significantly. The reason is that more target sentences are divided into segments equally with smaller p during training and the model is biased to generate segments with similar lengths. However, if the model is not exposed to randomly divided segments (p = 0.0), it fails to learn to recover from multi-modality related errors and the BLEU score drops significantly. (3) By using the annealing dividing strategy (p = 1 → 0, see Section 3.2.1), we achieve a good balance between decoding speed and translation quality. Therefore, we use it as the default setting in this paper.
Effect of Segment Deletion Mechanism
In this section, we investigate the effect of the segment deletion mechanism and the results are shown in Table 4, where q is the probability of injecting pseudo repetitive segments to each training instance. From the results we can observe that:
(1) Without using the segment deletion mechanism (q = 0), the BLEU score drops significantly and the repetitive token errors ("Rep") increase drastically, indicating that the mechanism is effective for recovering from repetitive token errors. (2) As q grows larger, the average number of decoding steps ("Step") increases steadily because the model is misled that to generate then delete a repetitive segment is expected. Thus, q should not be too large.
(3) The repetitive token errors ("Rep") increase drastically when q > 0.7. We believe that the reason is that the pseudo repetitive segments are constructed randomly, making it hard to learn the underlying mapping. (4) The model achieves the best performance with q = 0.5. Therefore, we set q = 0.5 in our experiments.

Performance over Sentence Lengths

Figure 2 shows the translation quality of the Transformer, our RecoverSAT model with K = 10 and NAT on the IWSLT16 En-De validation set, bucketed by source sentence length. From the figure, we can observe that RecoverSAT surpasses NAT significantly and achieves comparable performance to the Transformer on all length buckets, which indicates the effectiveness of our model.

Table 5: Translation examples of NAT and RecoverSAT. "Forced Translation" denotes the generated sentence when we manually force the model to generate a certain token (colored green) at a certain position. We use yellow color to label repetitive tokens, red color to label missing tokens, and gray color to label the segments to be deleted. We use " " to concatenate sub-words and subscript numbers (e.g., [1]) to mark the beginning of each segment.

Source: die er greif endste Abteilung ist das Denk mal für die Kinder , das zum Ged enken an die 1,5 Millionen Kinder , die in den Konzent rations lagern und Gas k ammern vernichtet wurden , erbaut wurde .
Reference: the most tragic section is the children's mem orial , built in memory of 1.5 million children killed in concentration camps and gas cham bers .
NAT Translation: the most tangible department department the monument monument the children , which was built commem commem orate 1.5 1.5 million children were destroyed in the concentration camps and gas cham bers .
Case Study
We present translation examples of NAT and our RecoverSAT model on the WMT14 De→En validation set in Table 5. From the table, we can observe that: (1) The multi-modality problem (repetitive and missing tokens) is severe in the sentence generated by NAT, while it is effectively alleviated by RecoverSAT (see translations A to D); (2) RecoverSAT can leverage target contexts to dynamically determine the segment length to reduce repetitive token errors (see translation B) or recover from missing token errors (see translations C and D); (3) RecoverSAT is capable of detecting and deleting the repetitive segments, even if there are multiple such segments (see translation D).
Related Work
There has been various work investigating how to accelerate the decoding process of sequence generation models (Kalchbrenner et al., 2018; Gu et al., 2018). In the field of neural machine translation, which is the focus of this work, Gu et al. (2018) first propose non-autoregressive machine translation (NAT), which generates all target tokens simultaneously. Although accelerating the decoding process significantly, NAT suffers from the multi-modality problem (Gu et al., 2018), which generally manifests as repetitive or missing tokens in translation. Therefore, intensive efforts have been devoted to alleviating the multi-modality problem in NAT. Wang et al. (2019) regularize the decoder hidden states of neighboring tokens to reduce repetitive tokens; Sun et al. (2019) utilize conditional random field to model target-side positional contexts; Shao et al. (2019a) and Shao et al. (2019b) introduce target-side information via specially designed training loss while Guo et al. (2019a) enhance the input of the decoder with target-side information; Kaiser et al. (2018), Akoury et al. (2019), Shu et al. (2019) and Ma et al. (2019) incorporate latent variables to guide generation; Li et al. (2019), Wei et al. (2019) and Guo et al. (2019b) use autoregressive models to guide the training process of NAT; Ran et al. (2019) and Bao et al. (2019) consider the reordering information in decoding. Wang et al. (2018) further propose a semi-autoregressive Transformer method, which generates segments autoregressively and predicts the tokens in a segment non-autoregressively. However, none of the above methods explicitly consider recovering from multi-modality related errors.
Recently, multi-step NAT models have also been investigated to address this issue. Lee et al. (2018) and Ghazvininejad et al. (2019) adopt iterative decoding methods which have the potential to recover from generation errors. Besides, Stern et al. and Gu et al. (2019) also propose to use dynamic insertion/deletion to alleviate generation repetition/missing. Different from these works, our model changes one-step NAT to a semi-autoregressive form, which maintains considerable speedup and enables the model to see the local history and future to avoid repetitive/missing words in decoding. Our work can further replace the one-step NAT to improve its performance.
Conclusion
In this work, we propose a novel semi-autoregressive model RecoverSAT to alleviate the multi-modality problem, which performs translation by generating segments non-autoregressively and predicting the tokens in a segment autoregressively. By determining segment length dynamically, RecoverSAT is capable of recovering from missing token errors and reducing repetitive token errors. By explicitly detecting and deleting repetitive segments, RecoverSAT is able to recover from repetitive token errors. Experiments on three widely-used benchmark datasets show that our RecoverSAT model maintains comparable performance with more than 4× decoding speedup compared with the AT model.
Figure 1: An overview of our RecoverSAT model. RecoverSAT generates a translation as a sequence of segments. The segments are generated simultaneously while each segment is generated token-by-token conditioned on both the source tokens and the translation history of all segments (e.g., the token "are" in the first segment is predicted based on all the tokens colored green). Repetitive segments (e.g., the third segment "lots of") are detected and deleted automatically.
Figure 2: Translation quality on the IWSLT16 En-De validation set over sentences of different lengths.
* indicates equal contribution. † indicates corresponding author.

Table 1: A multi-modality problem example: NAT models generate each target token independently such that they may correspond to different feasible translations, which usually manifests as repetitive (Trans. 1) or missing (Trans. 2) tokens.

Src.             es gibt heute viele Farmer mit diesem Ansatz
Feasible Trans.  there are lots of farmers doing this today
                 there are a lot of farmers doing this today
Trans. 1         there are lots of of farmers doing this today
Trans. 2         there are a lot farmers doing this today
Table 2: Performance (BLEU) of Transformer, the NAT/semi-autoregressive models and RecoverSAT on three widely-used machine translation benchmark datasets. NPD denotes the noisy parallel decoding technique (Gu et al., 2018) and LPD denotes the length parallel decoding technique (Wei et al., 2019). n denotes the sample size of NPD or LPD. iter denotes the refinement number of the iterative decoding method.
Table 4: Effect of segment deletion mechanism. The results are evaluated on the IWSLT16 En-De validation set. q is the probability of injecting pseudo repetitive segments into each training instance (see Section 3.2.2).
RecoverSAT (K = 10) translations for the Table 5 example:
Translation A: [1] the EOS [2] most tangible department is the EOS [3] monument for children EOS [4] built to EOS [5] commem orate the 1.5 EOS [6] million children destroyed EOS [7] in the concentration camps and EOS [8] in DEL [9] gas EOS [10] cham bers . EOS
Forced Translation B: [1] the EOS [2] most tangible department is the EOS [3] monument for children EOS [4] built to EOS [5] commem orate EOS [6] the 1.5 million children destroyed EOS [7] in the concentration camps and EOS [8] in DEL [9] gas EOS [10] cham bers . EOS
Forced Translation C: [1] the EOS [2] most tangible department is the EOS [3] monument for children EOS [4] built to EOS [5] commem orate the 1.5 million children EOS [6] destroyed EOS [7] in concentration camps and EOS [8] in DEL [9] gas EOS [10] cham bers . EOS
Forced Translation D: [1] the EOS [2] most tangible department is the EOS [3] monument for children EOS [4] built to EOS [5] commem orate the 1.5 million children destroyed EOS [6] in the concentration camps and EOS [7] in the DEL [8] in DEL [9] gas EOS [10] cham bers . EOS
Note that, by fixing segment length (token number of each segment) instead, the segment number K can be changed dynamically according to the sentence length. In other words, we can predict the target sentence length to determine the segment number during inference. In this case, our model can also decode in constant time.
It is more flexible to employ a token-wise deletion strategy which could handle more complex cases. We will explore this in the future.
A thorough comparison under other settings can be found in Appendix B.
Acknowledgments

We would like to thank all anonymous reviewers for their insightful comments.

A Positional Encoding

Our RecoverSAT model utilizes the positional encoding method in Vaswani et al. (2017) to encode the information about the positions of source tokens. The positional embedding is defined as: where PE_pos[i] is the i-th element of the positional embedding vector PE_pos for the position pos, and d is the dimension of the positional embedding vector. Then we can compute the input vector of the encoder for the m-th source token w as: where E^token_w is the token embedding vector of w. However, we can not apply this method to target tokens directly. Since lengths of segments are dynamically determined, the positions of the tokens in the target sentence, except those in the first segment, are not available during generation. To solve the problem, we use the aforementioned method to independently encode the position in the corresponding segment of each token instead, and adopt an absolute segment embedding method, which uses a distinct trainable vector to represent the position of each segment. Formally, the input vector of the decoder for the n-th target token v of the j-th segment is computed as: where E^seg_j is the segment embedding vector for the segment position j.

Table 6: Performance (BLEU) of Transformer and the NAT/semi-autoregressive models on three widely-used machine translation benchmark datasets. NPD denotes the noisy parallel decoding technique (Gu et al., 2018) and LPD denotes the length parallel decoding technique (Wei et al., 2019). n denotes the sample size of NPD or LPD. iter denotes the refinement number of the iterative decoding method.
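As an illustration of the input construction described in Appendix A, the sketch below adds a sinusoidal positional embedding (in the style of Vaswani et al. (2017)) to each token embedding on the encoder side and, on the decoder side, a within-segment positional embedding plus a learned absolute segment embedding. The composition by summation and all tensor names are assumptions; the authors' exact formulas are not reproduced here.

```python
import math
import torch

def sinusoidal_pe(pos, d):
    """Sinusoidal positional embedding in the style of Vaswani et al. (2017)."""
    pe = torch.zeros(d)
    for i in range(0, d, 2):
        angle = pos / (10000 ** (i / d))
        pe[i] = math.sin(angle)
        if i + 1 < d:
            pe[i + 1] = math.cos(angle)
    return pe

def encoder_input(token_emb, pos):
    # Source side: token embedding + position of the token in the source sentence.
    return token_emb + sinusoidal_pe(pos, token_emb.size(-1))

def decoder_input(token_emb, pos_in_segment, segment_emb):
    # Target side: token embedding + position *within its segment*
    # + a learned absolute segment embedding for the segment index.
    return token_emb + sinusoidal_pe(pos_in_segment, token_emb.size(-1)) + segment_emb
```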
Nader Akoury, Kalpesh Krishna, and Mohit Iyyer. 2019. Syntactically supervised transformers for faster neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1269-1281.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations.
Yu Bao, Hao Zhou, Jiangtao Feng, Mingxuan Wang, Shujian Huang, Jiajun Chen, and Lei Li. 2019. Non-autoregressive transformer by position learning. arXiv preprint arXiv:1911.10677.
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1724-1734.
Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6114-6123.
Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. In Proceedings of International Conference on Learning Representations.
Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In Proceedings of Advances in Neural Information Processing Systems 32, pages 11181-11191.
Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, and Tie-Yan Liu. 2019a. Non-autoregressive neural machine translation with enhanced decoder input. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, volume 33, pages 3723-3730.
Junliang Guo, Xu Tan, Linli Xu, Tao Qin, Enhong Chen, and Tie-Yan Liu. 2019b. Fine-tuning by curriculum learning for non-autoregressive neural machine translation. arXiv preprint arXiv:1911.08717.
Lukasz Kaiser, Samy Bengio, Aurko Roy, Ashish Vaswani, Niki Parmar, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast decoding in sequence models using discrete latent variables. In Proceedings of the 35th International Conference on Machine Learning, volume 80, pages 2390-2399.
Nal Kalchbrenner, Erich Elsen, Karen Simonyan, Seb Noury, Norman Casagrande, Edward Lockhart, Florian Stimberg, Aaron van den Oord, Sander Dieleman, and Koray Kavukcuoglu. 2018. Efficient neural audio synthesis. In Proceedings of the 35th International Conference on Machine Learning, pages 2410-2419.
Yoon Kim and Alexander M. Rush. 2016. Sequence-level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317-1327.
Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1173-1182.
Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2019. Hint-based training for non-autoregressive machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5712-5717.
Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neubig, and Eduard Hovy. 2019. FlowSeq: Non-autoregressive conditional sequence generation with generative flow. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4273-4283.
Qiu Ran, Yankai Lin, Peng Li, and Jie Zhou. 2019. Guiding non-autoregressive neural machine translation decoding with reordering information. arXiv preprint arXiv:1911.02215.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725.
Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, Xilin Chen, and Jie Zhou. 2019a. Retrieving sequential information for non-autoregressive neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3013-3024.
Chenze Shao, Jinchao Zhang, Yang Feng, Fandong Meng, and Jie Zhou. 2019b. Minimizing the bag-of-ngrams difference for non-autoregressive neural machine translation. arXiv preprint arXiv:1911.09320.
Raphael Shu, Jason Lee, Hideki Nakayama, and Kyunghyun Cho. 2019. Latent-variable non-autoregressive neural machine translation with deterministic inference using a delta posterior. arXiv preprint arXiv:1908.07181.
Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. Insertion transformer: Flexible sequence generation via insertion operations. In Proceedings of the 36th International Conference on Machine Learning, pages 5976-5985.
Zhiqing Sun, Zhuohan Li, Haoqing Wang, Di He, Zi Lin, and Zhihong Deng. 2019. Fast structured decoding for sequence models. In Advances in Neural Information Processing Systems 32, pages 3011-3020.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998-6008.
Chunqi Wang, Ji Zhang, and Haiqing Chen. 2018. Semi-autoregressive neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 479-488.
Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Non-autoregressive machine translation with auxiliary regularization. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, volume 33, pages 5377-5384.
Bingzhen Wei, Mingxuan Wang, Hao Zhou, Junyang Lin, and Xu Sun. 2019. Imitation learning for non-autoregressive neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1304-1312.
| [] |
[
"Linguistic Versus Latent Relations for Modeling Coherent Flow in Paragraphs",
"Linguistic Versus Latent Relations for Modeling Coherent Flow in Paragraphs"
] | [
"Dongyeop Kang dongyeok@cs.cmu.edu \nCarnegie Mellon University\nPittsburghPAUSA\n",
"Hiroaki Hayashi hiroakih@cs.cmu.edu \nCarnegie Mellon University\nPittsburghPAUSA\n",
"Alan W Black \nCarnegie Mellon University\nPittsburghPAUSA\n",
"Eduard Hovy hovy@cs.cmu.edu \nCarnegie Mellon University\nPittsburghPAUSA\n"
] | [
"Carnegie Mellon University\nPittsburghPAUSA",
"Carnegie Mellon University\nPittsburghPAUSA",
"Carnegie Mellon University\nPittsburghPAUSA",
"Carnegie Mellon University\nPittsburghPAUSA"
] | [
"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing"
] | Generating a long, coherent text such as a paragraph requires a high-level control of different levels of relations between sentences (e.g., tense, coreference). We call such a logical connection between sentences as a (paragraph) flow. In order to produce a coherent flow of text, we explore two forms of intersentential relations in a paragraph: one is a human-created linguistical relation that forms a structure (e.g., discourse tree) and the other is a relation from latent representation learned from the sentences themselves. Our two proposed models incorporate each form of relations into document-level language models: the former is a supervised model that jointly learns a language model as well as discourse relation prediction, and the latter is an unsupervised model that is hierarchically conditioned by a recurrent neural network (RNN) over the latent information. Our proposed models with both forms of relations outperform the baselines in partially conditioned paragraph generation task. Our codes and data are publicly available 1 . | 10.18653/v1/d19-1589 | [
"https://www.aclweb.org/anthology/D19-1589.pdf"
] | 201,698,432 | 1908.11790 | 77d07a9ffefb6c227a4bd19e61a7e7c388950f86 |
Linguistic Versus Latent Relations for Modeling Coherent Flow in Paragraphs
November 3-7, 2019
Dongyeop Kang dongyeok@cs.cmu.edu
Carnegie Mellon University
PittsburghPAUSA
Hiroaki Hayashi hiroakih@cs.cmu.edu
Carnegie Mellon University
PittsburghPAUSA
Alan W Black
Carnegie Mellon University
PittsburghPAUSA
Eduard Hovy hovy@cs.cmu.edu
Carnegie Mellon University
PittsburghPAUSA
Linguistic Versus Latent Relations for Modeling Coherent Flow in Paragraphs
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China, November 3-7, 2019, page 5809
Generating a long, coherent text such as a paragraph requires high-level control of different levels of relations between sentences (e.g., tense, coreference). We call such a logical connection between sentences a (paragraph) flow. In order to produce a coherent flow of text, we explore two forms of intersentential relations in a paragraph: one is a human-created linguistic relation that forms a structure (e.g., a discourse tree) and the other is a relation from a latent representation learned from the sentences themselves. Our two proposed models incorporate each form of relations into document-level language models: the former is a supervised model that jointly learns a language model as well as discourse relation prediction, and the latter is an unsupervised model that is hierarchically conditioned by a recurrent neural network (RNN) over the latent information. Our proposed models with both forms of relations outperform the baselines on a partially conditioned paragraph generation task. Our codes and data are publicly available 1 .
Introduction
When composing multiple sentences into a paragraph, as in novels or academic papers, we often make design decisions in advance (Byrne, 1979), such as topic introduction and content ordering, to ensure better coherence of the text. For instance, McKeown (1985) and Swan (2002) proposed effective patterns for scientific writing: a hypothesis at first, followed by supporting sentences to validate the hypothesis, and lastly a concluding sentence. We call such a logical connection between sentences in a written paragraph a flow. A coherent flow between sentences requires an understanding of various factors including tense, coreference, plans (Appelt, 1982; Hovy, 1991), scripts (Tomkins, 1978), and several others. We focus on the paragraph-level plan between sentences.
1 https://github.com/dykang/flownet
In text planning, underlying relations in text are broadly categorized into two forms: an explicit human-defined relation (e.g., a discourse tree) (Reiter and Dale, 2000) or an implicitly learned latent relation (Yang et al., 2016). While the former is defined and manually annotated based on linguistic theories, the latter is simply determinable from how people in fact put sentences together. In this work, we provide an empirical comparison between a linguistically informed and a latent form of relations in the context of paragraph generation.
We compare the effectiveness of the two forms of relations using language modeling for paragraph generation. Due to the different characteristics of the two forms, we employ comparable but different components in addition to the base language model. For linguistic relations (e.g., discourse), we cast the problem into multi-task learning of supervised language modeling and discourse relation prediction. On the other hand, for latent relations, we learn an unsupervised hierarchical language model that is hierarchically conditioned by RNNs over linear operations between sentences.
We evaluate our models on a partial paragraph generation task: producing the rest of the text in a paragraph given some context. We observe that linguistically annotated discourse relations help produce more coherent text than the latent relations, followed by the other baselines.
Related Work
There has been a variety of NLG systems that incorporate additional information between sentences (Appelt, 1982;Reiter and Dale, 2000;Gatt and Krahmer, 2018) which can be broadly categorized into two forms: linguistic and latent.
Linguistic relations are explicitly represented as external labels in the form of predefined rules or plans, formats, knowledge bases, discourse parses, and more. Hovy (1985, 1990) and Dalianis and Hovy (1996) integrated text planning in generation, where the plans are considered in knowledge, formatted rules, and so forth. However, these approaches are limited to small scale (i.e., few examples) and hand-written rules. Kang et al. (2017); Gardent et al. (2017); Kang et al. (2018b); Wang et al. (2018) used an external knowledge base for micro-planning when generating a corresponding text, while our work focuses on comparing two forms of relations from the text itself.
Moore and Paris (1993); Young and Moore (1994) utilized discourse structures such as rhetorical structure theory (RST) (Mann and Thompson, 1988) for parsing a document. A script (Tomkins, 1978) is another structured representation that describes a typical sequence of events in a particular context. Zhang et al. (2016); Ji and Eisenstein (2014) proposed better discourse parsers using neural networks. The prior works, however, used the discourse representations to describe the structure of the paragraph, while we focus on applicability of the discourse relations to language generation.
Latent relations use implicit information in a document, such as the hierarchical structure of the document: Lin et al. (2015) and Chung et al. (2016) used hierarchical RNNs for modeling a document. Similarly, the hierarchical model can be extended to other variants such as attention (Yang et al., 2016), the encoder-decoder framework (Serban et al., 2017; Sordoni et al., 2015), auto-encoding, and multiscale modeling (Chung et al., 2016). However, the hierarchical recurrence of sentences, which is dependent on topics, is less likely to model the flow of a document.
We further summarize the fundamental differences between the two forms of relations in the Appendix.
FlowNet: Language Modeling with Inter-sentential Relations
We propose language models that incorporate each relation to capture a high-level flow of text.
Discourse-driven FlowNet
As a linguistic relation, we employ RST (Mann and Thompson, 1988) trees to represent discourse connections in the text. For simplicity, we limit usage of the discourse trees by only considering relations between adjacent phrases 2 : relations are inserted between adjacent phrases and represented as a flattened sequence of phrases and relations. If two consecutive RST relations are given, the deeper level of relation is chosen. If the central elementary discourse unit (EDU) or phrase is after its dependent, the relation is excluded. We consider each sequence of the flattened discourse relations as a writing flow. For example, people often write a text by elaborating basic information (Elaboration) and then describing a following statement attributed to the information (Attribution). We view discourse relations as additional labels to predict at the same time as we predict the next words in language modeling. Specifically, we propose to jointly train a model that predicts a sequence of words and a sequence of RST labels by taking advantage of shared representations, following previous sequence labeling problems such as named entity recognition (Collobert et al., 2011) and part-of-speech tagging (Huang et al., 2015). Note that the RST relations are only used during training to obtain better representations for the two tasks, but not at test time.
Figure 1(a) shows our FlowNet using discourse relations. Let a paragraph be a sequence of sentences $D = \{s_1, s_2, \ldots, s_M\}$. This model treats adjacent sentences as pairs for learning the standard seq2seq model. The first objective is to maximize the likelihood of the current sentence given the previous sentence. Hence, we maximize the following:
$$\mathcal{L}_{s2s} = \sum_{j} \log P(w_{ij} \mid w_{i,<j}, s_{i-1}) \quad (1)$$
where $s_i = \{w_{i1}, w_{i2}, \ldots, w_{iT_i}\}$ and $T_i$ is the number of tokens of $s_i$.
To better guide the model with discourse context, we use the shared representations to predict RST relations at the same time.
For each paragraph, we run the pre-trained RST parser (Ji and Eisenstein, 2014) and flatten the parse tree to obtain RST relations for each sentence, $Y_i = (y_1, \ldots, y_{K_i})$, where $K_i$ is the number of discourse relations in $s_i$. We then make a label sequence over the tokens in the sentence by placing each $y$ at the first word of its EDU and filling up the rest with a null relation $o$: $Y_i = (o, \ldots, o, y_1, o, \ldots, y_{K_i}, o, \ldots, o)$. We incorporate a sequence labeling objective by employing a conditional random field (Lafferty et al., 2001) to find the label sequence that maximizes the score function for each sentence $s_i$:
$$S(s_i, Y_i) = \sum_{j=1}^{T_i - 1} W_{y_j, y_{j+1}}^{\top} h_j + b_{y_j, y_{j+1}}$$
where $h_j$, $W$, and $b$ are the hidden representation of $w_{ij}$, the weight matrix, and the bias vector corresponding to the pair of labels $(y_j, y_{j+1})$, respectively. For training, we maximize the conditional likelihood:
$$\mathcal{L}_{CRF} = S(s_i, Y_i) - \log \sum_{y \in Y_x} \exp S(s_i, y) \quad (2)$$
where $Y_x$ represents all possible discourse label sequences. Decoding is done by greedily predicting the output sequence with maximum score. Both training and decoding can be computed using dynamic programming. The final objective is the sum of the two objective functions:
$$\mathcal{L}_{disc} = \mathcal{L}_{s2s} + \alpha \cdot \mathcal{L}_{CRF} \quad (3)$$
where $\alpha$ is a scaling parameter to control the impact of the CRF objective. Its value is chosen empirically by searching on the validation set.
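The PyTorch sketch below illustrates the combined objective in Eq. (3). It is our simplified illustration, not the authors' released code: the hidden states, next-word logits, gold labels, toy sizes, and the value of the scaling weight are placeholder assumptions, and the CRF part implements only the pairwise score defined above together with a forward-algorithm partition function.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
T, H, K, V = 6, 16, 5, 50        # tokens, hidden size, relation labels, vocab (toy sizes)

h = torch.randn(T, H)            # hidden states h_j of the current sentence (assumed given)
lm_logits = torch.randn(T, V)    # next-word logits from the decoder (assumed given)
words = torch.randint(V, (T,))   # gold next words
labels = torch.randint(K, (T,))  # gold flattened RST labels ('o' is one of the K labels)

W = torch.randn(K, K, H)         # pairwise weight W_{a,b} for each label pair (a, b)
b = torch.randn(K, K)            # pairwise bias b_{a,b}

# Pairwise scores: score[j, a, b] = W_{a,b}^T h_j + b_{a,b}, for j = 1..T-1.
pair_scores = torch.einsum('abh,jh->jab', W, h[:-1]) + b       # (T-1, K, K)

# Gold-sequence score S(s_i, Y_i).
gold = pair_scores[torch.arange(T - 1), labels[:-1], labels[1:]].sum()

# Log-partition over all label sequences via the forward algorithm.
fwd = torch.zeros(K)
for j in range(T - 1):
    fwd = torch.logsumexp(fwd[:, None] + pair_scores[j], dim=0)
log_z = torch.logsumexp(fwd, dim=0)

loss_crf = -(gold - log_z)                      # negative conditional log-likelihood
loss_lm = F.cross_entropy(lm_logits, words)     # seq2seq / language-model term
alpha = 0.5                                     # scaling weight (placeholder; tuned on dev set)
loss = loss_lm + alpha * loss_crf
print(float(loss))
```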
Delta-driven FlowNet
In this model, we aim to utilize latent representations to characterize the flow between sentences. Specifically, we define delta, the subtraction of hidden representations of adjacent sentences, as such latent information. Figure 1(b) shows how we hierarchically model different levels of information: words, sentences, and deltas. Each word is encoded using an RNN encoder $g_{word}$. We take the last hidden representation of the words as the sentence embeddings $s_1, \ldots, s_M$. Similar to a hierarchical RNN (Lin et al., 2015), each sentence representation is encoded using another RNN encoder $g_{sent}$. While the discourse flow provides explicit relation symbols, the delta flow calculates a latent relation by subtracting the previous representation $s_{i-1}$ from the current representation $s_i$: 3
$$d(s_{i-1}, s_i) = d_{i-1} = s_i - s_{i-1} \quad (4)$$
Given a sequence of $M-1$ delta relations $d_1, \ldots, d_{M-1}$ for a paragraph of $M$ sentences, we again encode them using another RNN encoder $g_{delta}$. The model takes the word, sentence, and delta information altogether to predict the next ($t$-th) word in the $m$-th sentence:
$$h_t = f(h_{t-1}, x_t, s_{m-1}, d_{m-2}) \quad (5)$$
where $x_t$ is a word representation, $s_{m-1}$ is a sentence representation, and $d_{m-2}$ is the delta information. Note that the sentence representation comes from the previous sentence, and the delta information is calculated from the two previous sentences. If there is no previous information given, the parameters are randomly initialized.
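A minimal PyTorch sketch of the delta computation and one decoder step of Eq. (5) is shown below; it is illustrative only. The module sizes and the specific way the sentence and delta vectors are injected into the decoder (concatenation to the word input) are our assumptions, not necessarily how the released implementation does it.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
E, H = 32, 64                                   # toy embedding / hidden sizes (assumptions)

word_enc = nn.LSTM(E, H, batch_first=True)      # g_word: encodes a sentence's word embeddings

def encode_sentence(word_embs):                 # word_embs: (1, T, E)
    _, (h_n, _) = word_enc(word_embs)
    return h_n[-1]                              # last hidden state as the sentence embedding

# Two previous sentences (toy inputs) -> sentence embeddings and their delta.
s_prev2 = encode_sentence(torch.randn(1, 7, E))
s_prev1 = encode_sentence(torch.randn(1, 5, E))
delta = s_prev1 - s_prev2                       # d_{m-2} = s_{m-1} - s_{m-2}

# One decoder step: h_t = f(h_{t-1}, x_t, s_{m-1}, d_{m-2}).
decoder = nn.LSTMCell(E + 2 * H, H)
h_t = torch.zeros(1, H)
c_t = torch.zeros(1, H)
x_t = torch.randn(1, E)                         # embedding of the previous word (placeholder)
h_t, c_t = decoder(torch.cat([x_t, s_prev1, delta], dim=-1), (h_t, c_t))
print(h_t.shape)                                # (1, 64)
```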
Experiment
Due to the absence of a goal-oriented language generation task, we collect paragraph data and define a new task of generating the partial text of a paragraph given some context.
Data
We collect paragraphs from three different domains: Papers are paragraphs extracted from academic manuscripts in the computer science domain from PeerRead (Kang et al., 2018a), and Fantasy and SciFi are paragraphs of two frequent categories extracted from the BookCorpus (Zhu et al., 2015), where paragraphs are extracted using the line breaks in the dataset.
We only use paragraphs whose lengths are from 4 to 7, in order to measure the performance change according to paragraph length. The dataset is randomly split by 0.9/0.05/0.05 for train, valid, and test set, respectively. Table 1 shows the numbers of paragraphs for each domain. All paragraphs are parsed into RST trees using the state-of-the-art discourse parser by Ji and Eisenstein (2014).
Bridging: Partial Paragraph Generation
We evaluate our models on a partial text generation task: given partial information (e.g., some sentences), produce the rest of the text.
If only the first sentence is given, the generation can be too divergent. The existence of the last sentence makes the generation more coherent and makes it converge to some point.
We evaluate it with one hard and one soft automatic metric, METEOR (M) (Banerjee and Lavie, 2005) and VectorExtrema (VE), computed as the cosine similarity of averaged word embeddings (Pennington et al., 2014), as well as with human performance.
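As a rough illustration of the soft metric as it is described here (cosine similarity of averaged word embeddings), consider the small sketch below; the toy embedding table stands in for the GloVe vectors actually used in the evaluation.

```python
import numpy as np

# Toy embedding table standing in for GloVe vectors (assumption: the paper uses GloVe).
emb = {"devlin": np.array([0.1, 0.9, 0.2]),
       "ordered": np.array([0.4, 0.1, 0.3]),
       "a": np.array([0.0, 0.2, 0.1]),
       "beer": np.array([0.7, 0.3, 0.2]),
       "wine": np.array([0.6, 0.4, 0.1])}

def avg_vec(tokens):
    # Average the embeddings of all tokens that have an entry in the table.
    return np.mean([emb[t] for t in tokens if t in emb], axis=0)

def embedding_cosine(hyp, ref):
    a, b = avg_vec(hyp), avg_vec(ref)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(embedding_cosine(["devlin", "ordered", "a", "beer"],
                       ["devlin", "ordered", "a", "wine"]))
```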
Models and Setup
We compare against baseline seq2seq models that encode the context (the concatenated first and last sentences) and decode the intermediate words: S2S is an attentional seq2seq model (Bahdanau et al., 2014), and HS2S is a hierarchical version of S2S that combines two baselines: HRNN (Lin et al., 2015), which hierarchically models the sequence of words and sentences, and HRED (Serban et al., 2017; Sordoni et al., 2015), which encodes the given context and decodes the words. FlowNet (delta/disc.) is our proposed language model with delta and discourse relations, respectively.
We find the best hyper-parameters on the validation set using grid search. The final parameters are: batch size 32, maximum sentence length 25, word embedding size 300 initialized with GloVe (Pennington et al., 2014), 1 LSTM layer (Hochreiter and Schmidhuber, 1997) with hidden size 512, gradient clipping at 0.25, learning rate 0.2 with decay rate 0.5 using the Adagrad (Duchi et al., 2011) optimizer, and a vocabulary size of 50,000. The total number of distinct discourse relations is 44.
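For reference, the stated final hyper-parameters can be collected into a single configuration as below; the field names are ours rather than from the released code.

```python
# Final hyper-parameters reported above, gathered into one config dict.
config = {
    "batch_size": 32,
    "max_sentence_length": 25,
    "word_embedding_size": 300,      # initialized with GloVe
    "lstm_layers": 1,
    "lstm_hidden_size": 512,
    "gradient_clip": 0.25,
    "learning_rate": 0.2,
    "lr_decay": 0.5,
    "optimizer": "adagrad",
    "vocab_size": 50000,
    "num_discourse_relations": 44,
}
```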
Results
In Table 2, both discourse- and delta-driven FlowNet outperform the baseline models across most of the metrics, except for VectorExtrema on SciFi. As the training set size increases (Papers << SciFi < Fantasy), the improvements gained by FlowNet become larger, probably because the model learns more about the (discourse or latent) relations from the larger data.

Table 3 shows a performance comparison among different delta operations: SUBTRACT, ADD, and MLP, which is a multi-layer perceptron network. All scores are macro-averaged across datasets. While ADD shows good performance on METEOR, SUBTRACT does on the soft metric (i.e., VectorExtrema), indicating that subtraction can help the model capture semantics better than the other functions.

Figure 3 shows how performance changes on Fantasy as the paragraph length increases. Both FlowNet variants achieve larger improvements when generating longer paragraphs; in particular, discourse relations achieve the best performance at lengths 6 and 7.

We also conduct a comparison with human performance (see Figure 4). We randomly choose 100 samples per dataset and per paragraph length and ask an annotator to perform the bridging task on the final 1,000 samples. Humans outperform the models by large margins. FlowNet with discourse relations outperforms FlowNet with latent relations and the other baselines by a large margin. As the paragraph length increases or more data is trained on, discourse relations become more useful.

Table 4 shows an example paragraph with text produced by the models, along with the reference and a human annotation:

FIRST: Satyrs never wear armor, including helmets, Newel began, using his hands expressively.
LAST: Anyhow, as we actors were laying siege, a big chunk of the battlement dislodged from atop the tower.
REF: [M1] "But years ago I was in a play, and the helm was part of my costume. [M2] During the big battle scene, a few of us were assailing a castle. [M3] We had quite a set. [M4] The main tower must have been fifteen feet tall, fashioned from real stone.
HUMAN: [M1] Actually he needed to wear any protectors to prevent him from a big accident. [M2] We planned to make a prank cam to make him wear those always. [M3] "I have a good idea," Newel kept talking continuously. [M4] "Let's play a role like we are under the attack.
S2S: [M1] he's a good man [M2] the UNK, the one who's a man who's a man and the other [M3] and the other, the one who 's a good friend [M4] he's a good man
HS2S: [M1] i'm not sure that," he said [M2] i'm not sure that i'm not sure [M3] i'm not sure that i'm not a fool [M4] "i'm not sure that," he said
FLOWNET (DELTA): [M1] he's a good man [M2] i'm not sure what to do [M3] i'm not sure that i'm not going to be a vampire [M4] he's a good man
FLOWNET (DISC.): [M1] perhaps they were not quite good, but he was not a master, and they were the most powerful [M2] the only way to do not like a little, but i' d been in the world [M3] "you're right," he said "i am not a fool you're here [M4] you're going to be a bit more than the other

Given only the partial context (i.e., the first and last sentences), the bridging task is very challenging even for humans; indeed, the reference sentences and human annotations are semantically very different. Among the latent models, FlowNet (delta) produces a more coherent flow of text compared to S2S and HS2S. Surprisingly, FlowNet (discourse) generates more diverse sentences with some coherence, because each sentence is generated based on a representation conditioned on the predicted RST discourse relation.
Conclusion and Discussion
We explore two forms of inter-sentential relations: linguistic relations such as discourse relations, and a latent representation learned from the text. The proposed models for both relations achieve significant improvements over the baselines on the partial paragraph generation task. Despite the empirical effectiveness of, and the difference between, the linguistic and latent relations, they are not directly aligned for comparison. A potential direction for future study is to directly couple them together and see whether one form contains the other, or vice versa. Another direction is to check their effectiveness on top of the recent pre-trained language models.
Figure 1: FlowNet with linguistic (i.e., discourse) versus latent (i.e., delta) relation. (a) For each word, a form of discourse relation and the next word are jointly predicted using a CRF and a language model, respectively. (b) Decoding $w_i$ is conditioned on the previous word ($w_{i-1}$), the previous sentence ($s_{i-1}$), and the delta between the two previous sentences ($d_{i-2}$). Best viewed in color.

Figure 2: Bridging task: given sentences [1] and [4], guess sentences [2, 3] (red, underlined). Example: [1] Inside the club we moved straight for the bar. [2] Devlin ordered a beer for himself and a glass of my favorite wine for me. [3] I love that I didn't have to tell him what I wanted. [4] He knew me well and always thought about what I wanted or needed, in and out of bed.

Figure 2 shows our bridging task: it requires generating the masked sentences in the middle of a paragraph given the first and the last sentences.

Figure 3: Comparison of paragraph lengths. Best viewed in color.

Figure 4: Comparison (METEOR) with human performance (black bars): S2S (blue), HS2S (red), Flow:delta (yellow), and Flow:disc. (green). Best viewed in color.
Table 1: Number of paragraphs in our dataset.

Table 2: Performance on the bridging task. METEOR and VectorExtrema are used; higher is better.

Table 3: Comparison of different delta functions.

Table 4: An example paragraph and predicted texts in the Fantasy dataset. Given the FIRST and LAST sentences, the models generate the middle sentences (e.g., [M1] → [M2] ...). REF and HUMAN are the reference middle sentences and sentences written by a human annotator, respectively. Please find more examples in the appendix.
The full discourse tree can be incorporated using other types of language models, such as Tai et al. (2015).
Our experiment includes a comparison among other types of linear operations between sentences such as addition or a learnable function.
AcknowledgementsWe also thank Jason Weston, Dan Jurafsky, and anonymous reviewers for their helpful comments.
Douglas E. Appelt. 1982. Planning natural-language utterances to satisfy multiple goals. Technical report, SRI International Menlo Park CA Artificial Intelligence Center.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72.
Donn Byrne. 1979. Teaching writing skills. Longman.
Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. 2016. Hierarchical multiscale recurrent neural networks. CoRR, abs/1609.01704.
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.
Hercules Dalianis and Eduard Hovy. 1996. Aggregation in natural language generation. In Trends in Natural Language Generation: An Artificial Intelligence Perspective, pages 88-105. Springer.
John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159.
Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating training corpora for NLG micro-planning. In 55th Annual Meeting of the Association for Computational Linguistics (ACL).
Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research, 61:65-170.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9:1735-1780.
Eduard H. Hovy. 1985. Integrating text planning and production in generation. In IJCAI.
Eduard H. Hovy. 1990. Pragmatics and natural language generation. Artificial Intelligence, 43(2):153-197.
Eduard H. Hovy. 1991. Approaches to the planning of coherent text. In Natural Language Generation in Artificial Intelligence and Computational Linguistics, pages 83-102. Springer.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991.
Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13-24.
Dongyeop Kang, Waleed Ammar, Bhavana Dalvi, Madeleine van Zuylen, Sebastian Kohlmeier, Eduard Hovy, and Roy Schwartz. 2018a. A dataset of peer reviews (PeerRead): Collection, insights and NLP applications. In Proceedings of NAACL-HLT.
Dongyeop Kang, Varun Gangal, Ang Lu, Zheng Chen, and Eduard Hovy. 2017. Detecting and explaining causes from text for a time series event. In Conference on Empirical Methods on Natural Language Processing.
Dongyeop Kang, Tushar Khot, Ashish Sabharwal, and Eduard Hovy. 2018b. AdvEntuRe: Adversarial training for textual entailment with knowledge-guided examples. In The 56th Annual Meeting of the Association for Computational Linguistics (ACL), Melbourne, Australia.
John D. Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML.
Jiwei Li, Minh-Thang Luong, and Daniel Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. In ACL.
Rui Lin, Shujie Liu, Muyun Yang, Mu Li, Ming Zhou, and Sheng Li. 2015. Hierarchical recurrent neural network for document modeling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 899-907.
Chia-Wei Liu, Ryan Lowe, Iulian V. Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023.
William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse, 8(3):243-281.
Kathleen R. McKeown. 1985. Discourse strategies for generating natural-language text. Artificial Intelligence, 27(1):1-41.
Johanna D. Moore and Cécile L. Paris. 1993. Planning text for advisory dialogues: Capturing intentional and rhetorical information. Computational Linguistics, 19(4):651-694.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge University Press.
Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI, pages 3295-3301.
Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. 2015. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 553-562. ACM.
Judith A. Swan. 2002. The science of scientific writing.
Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075.
Silvan S. Tomkins. 1978. Script theory: Differential magnification of affects. In Nebraska Symposium on Motivation. University of Nebraska Press.
Qingyun Wang, Xiaoman Pan, Lifu Huang, Boliang Zhang, Zhiying Jiang, Heng Ji, and Kevin Knight. 2018. Describing a knowledge base. CoRR, abs/1809.01797.
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
R. Michael Young and Johanna D. Moore. 1994. DPOCL: A principled approach to discourse planning. In Proceedings of the Seventh International Workshop on Natural Language Generation, pages 13-20. Association for Computational Linguistics.
Biao Zhang, Deyi Xiong, Jinsong Su, Qun Liu, Rongrong Ji, Hong Duan, and Min Zhang. 2016. Variational neural discourse relation recognizer. In EMNLP.
Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 19-27.
| [
"https://github.com/dykang/flownet"
] |
[
"How multilingual is Multilingual BERT?",
"How multilingual is Multilingual BERT?"
] | [
"Telmo Pires telmop@google.com \nGoogle Research\n\n",
"Eva Schlinger eschling@google.com \nGoogle Research\n\n",
"Dan Garrette dhgarrette@google.com \nGoogle Research\n\n"
] | [
"Google Research\n",
"Google Research\n",
"Google Research\n"
] | [
"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics"
] | In this paper, we show that Multilingual BERT (M-BERT), released byDevlin et al. (2019)as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annotations in one language are used to fine-tune the model for evaluation in another language. To understand why, we present a large number of probing experiments, showing that transfer is possible even to languages in different scripts, that transfer works best between typologically similar languages, that monolingual corpora can train models for code-switching, and that the model can find translation pairs. From these results, we can conclude that M-BERT does create multilingual representations, but that these representations exhibit systematic deficiencies affecting certain language pairs. | 10.18653/v1/p19-1493 | [
"https://www.aclweb.org/anthology/P19-1493.pdf"
] | 174,798,142 | 1906.01502 | 008a05fd7fef1d77bca8cbb1350fed1dfdaf34d5 |
How multilingual is Multilingual BERT?
July 28 -August 2, 2019
Telmo Pires telmop@google.com
Google Research
Eva Schlinger eschling@google.com
Google Research
Dan Garrette dhgarrette@google.com
Google Research
How multilingual is Multilingual BERT?
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July 28 - August 2, 2019, page 4996
In this paper, we show that Multilingual BERT (M-BERT), released by Devlin et al. (2019) as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annotations in one language are used to fine-tune the model for evaluation in another language. To understand why, we present a large number of probing experiments, showing that transfer is possible even to languages in different scripts, that transfer works best between typologically similar languages, that monolingual corpora can train models for code-switching, and that the model can find translation pairs. From these results, we can conclude that M-BERT does create multilingual representations, but that these representations exhibit systematic deficiencies affecting certain language pairs.
Introduction
Deep, contextualized language models provide powerful, general-purpose linguistic representations that have enabled significant advances among a wide range of natural language processing tasks (Peters et al., 2018b; Devlin et al., 2019). These models can be pre-trained on large corpora of readily available unannotated text, and then fine-tuned for specific tasks on smaller amounts of supervised data, relying on the induced language model structure to facilitate generalization beyond the annotations. Previous work on model probing has shown that these representations are able to encode, among other things, syntactic and named entity information, but they have heretofore focused on what models trained on English capture about English (Peters et al., 2018a; Tenney et al., 2019b,a).
In this paper, we empirically investigate the degree to which these representations generalize across languages. We explore this question using Multilingual BERT (henceforth, M-BERT), released by Devlin et al. (2019) as a single language model pre-trained on the concatenation of monolingual Wikipedia corpora from 104 languages. 1 M-BERT is particularly well suited to this probing study because it enables a very straightforward approach to zero-shot cross-lingual model transfer: we fine-tune the model using task-specific supervised training data from one language, and evaluate that task in a different language, thus allowing us to observe the ways in which the model generalizes information across languages.
Our results show that M-BERT is able to perform cross-lingual generalization surprisingly well. More importantly, we present the results of a number of probing experiments designed to test various hypotheses about how the model is able to perform this transfer. Our experiments show that while high lexical overlap between languages improves transfer, M-BERT is also able to transfer between languages written in different scriptsthus having zero lexical overlap-indicating that it captures multilingual representations. We further show that transfer works best for typologically similar languages, suggesting that while M-BERT's multilingual representation is able to map learned structures onto new vocabularies, it does not seem to learn systematic transformations of those structures to accommodate a target language with different word order.
Models and Data
Like the original English BERT model (henceforth, EN-BERT), M-BERT is a 12 layer transformer (Devlin et al., 2019), but instead of be- ing trained only on monolingual English data with an English-derived vocabulary, it is trained on the Wikipedia pages of 104 languages with a shared word piece vocabulary. It does not use any marker denoting the input language, and does not have any explicit mechanism to encourage translationequivalent pairs to have similar representations.
For NER and POS, we use the same sequence tagging architecture as Devlin et al. (2019). We tokenize the input sentence, feed it to BERT, get the last layer's activations, and pass them through a final layer to make the tag predictions. The whole model is then fine-tuned to minimize the cross entropy loss for the task. When tokenization splits words into multiple pieces, we take the prediction for the first piece as the prediction for the word.
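A minimal sketch of this tagging recipe using the Hugging Face Transformers library is shown below. The checkpoint name, tag set size, and pooling code are illustrative assumptions, and whereas the paper fine-tunes the whole model, this sketch uses frozen features purely for brevity.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative checkpoint and tag set (assumptions, not the paper's exact setup).
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased")
num_tags = 5
tag_head = torch.nn.Linear(bert.config.hidden_size, num_tags)

words = ["Sam", "lives", "in", "Johannesburg"]
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    hidden = bert(**enc).last_hidden_state          # last layer activations, (1, n_pieces, H)
logits = tag_head(hidden)                           # per-word-piece tag logits

# Keep only the logits of the first word piece of each word.
word_ids = enc.word_ids(batch_index=0)
first_piece, seen = [], set()
for pos, wid in enumerate(word_ids):
    if wid is not None and wid not in seen:
        first_piece.append(pos)
        seen.add(wid)
word_logits = logits[0, first_piece]                # (len(words), num_tags)
print(word_logits.argmax(-1))                       # one predicted tag id per word
```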
Named entity recognition experiments
We perform NER experiments on two datasets: the publicly available CoNLL-2002 and -2003 sets, containing Dutch, Spanish, English, and German (Tjong Kim Sang, 2002; Sang and Meulder, 2003); and an in-house dataset with 16 languages, 2 using the same CoNLL categories. Table 1 shows M-BERT zero-shot performance on all language pairs in the CoNLL data.
Part of speech tagging experiments
We perform POS experiments using Universal Dependencies (UD) (Nivre et al., 2016) data for 41 languages. 3 We use the evaluation sets from Zeman et al. (2017).

Table 2: POS accuracy on a subset of UD languages.

Figure 1: Zero-shot NER F1 score versus entity word piece overlap among 16 languages. While performance using EN-BERT depends directly on word piece overlap, M-BERT's performance is largely independent of overlap, indicating that it learns multilingual representations deeper than simple vocabulary memorization.
Vocabulary Memorization
Because M-BERT uses a single, multilingual vocabulary, one form of cross-lingual transfer occurs when word pieces present during fine-tuning also appear in the evaluation languages. In this section, we present experiments probing M-BERT's dependence on this superficial form of generalization: How much does transferability depend on lexical overlap? And is transfer possible to languages written in different scripts (no overlap)?
Effect of vocabulary overlap
If M-BERT's ability to generalize were mostly due to vocabulary memorization, we would expect zero-shot performance on NER to be highly dependent on word piece overlap, since entities are often similar across languages. To measure this effect, we compute $E_{train}$ and $E_{eval}$, the sets of word pieces used in entities in the training and evaluation datasets, respectively, and define overlap as the fraction of common word pieces used in the entities: $overlap = |E_{train} \cap E_{eval}| \, / \, |E_{train} \cup E_{eval}|$. Figure 1 plots NER F1 score versus entity overlap for zero-shot transfer between every language pair in an in-house dataset of 16 languages, for both M-BERT and EN-BERT. 4 We can see that performance using EN-BERT depends directly on word piece overlap: the ability to transfer deteriorates as word piece overlap diminishes, and F1 scores are near zero for languages written in different scripts. M-BERT's performance, on the other hand, is flat for a wide range of overlaps, and even for language pairs with almost no lexical overlap, scores vary between 40% and 70%, showing that M-BERT's pretraining on multiple languages has enabled a representational capacity deeper than simple vocabulary memorization. 5 To further verify that EN-BERT's inability to generalize is due to its lack of a multilingual representation and not an inability of its English-specific word piece vocabulary to represent data in other languages, we evaluate on non-cross-lingual NER and see that it performs comparably to a previous state of the art model (see Table 3).
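A small sketch of this overlap statistic (a Jaccard index over entity word pieces) is given below; the toy word-piece sets are invented purely for illustration, and in practice they would come from tokenizing the entities in each dataset.

```python
def entity_wordpiece_overlap(train_entity_pieces, eval_entity_pieces):
    """overlap = |E_train ∩ E_eval| / |E_train ∪ E_eval| over entity word pieces."""
    e_train, e_eval = set(train_entity_pieces), set(eval_entity_pieces)
    return len(e_train & e_eval) / len(e_train | e_eval)

# Toy word-piece sets standing in for tokenized entities from two datasets.
e_train = {"Le", "##bron", "Paris", "Ang", "##ela"}
e_eval = {"Paris", "Ber", "##lin", "Ang", "##ela"}
print(entity_wordpiece_overlap(e_train, e_eval))    # 3 shared / 7 total ≈ 0.43
```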
Generalization across scripts
M-BERT's ability to transfer between languages that are written in different scripts, and thus have effectively zero lexical overlap, is surprising given that it was trained on separate monolingual corpora and not with a multilingual objective. To probe deeper into how the model is able to perform this generalization, Table 4 shows a sample of POS results for transfer across scripts.
Among the most surprising results, an M-BERT model that has been fine-tuned using only POSlabeled Urdu (written in Arabic script), achieves 91% accuracy on Hindi (written in Devanagari script), even though it has never seen a single POStagged Devanagari word. This provides clear evidence of M-BERT's multilingual representation ability, mapping structures onto new vocabularies based on a shared representation induced solely from monolingual language model training data.
However, cross-script transfer is less accurate for other pairs, such as English and Japanese, indicating that M-BERT's multilingual representation is not able to generalize equally well in all cases. A possible explanation for this, as we will see in section 4.2, is typological similarity: English and Japanese have a different order of subject, verb, and object.
5 Individual language trends are similar to the aggregate plots.
Encoding Linguistic Structure
In the previous section, we showed that M-BERT's ability to generalize cannot be attributed solely to vocabulary memorization, and that it must be learning a deeper multilingual representation. In this section, we present probing experiments that investigate the nature of that representation: How does typological similarity affect M-BERT's ability to generalize? Can M-BERT generalize from monolingual inputs to code-switching text? Can the model generalize to transliterated text without transliterated language model pretraining?
Effect of language similarity
Following Naseem et al. (2012), we compare languages on a subset of the WALS features (Dryer and Haspelmath, 2013) relevant to grammatical ordering.6 Figure 2 plots POS zero-shot accuracy against the number of common WALS features. As expected, performance improves with similarity, showing that it is easier for M-BERT to map linguistic structures when they are more similar, although it still does a decent job for low-similarity languages when compared to EN-BERT.

Generalizing across typological features

Table 5 shows macro-averaged POS accuracies for transfer between languages grouped according to two typological features: subject/object/verb order, and adjective/noun order7 (Dryer and Haspelmath, 2013). The results reported include only zero-shot transfer, i.e. they do not include cases training and testing on the same language. We can see that performance is best when transferring between languages that share word order features, suggesting that while M-BERT's multilingual representation is able to map learned structures onto new vocabularies, it does not seem to learn systematic transformations of those structures to accommodate a target language with different word order.
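For illustration, the language-similarity measure used above, namely the number of shared WALS feature values between two languages, can be computed as in the sketch below; the feature dictionaries are hypothetical examples, not the values used in the experiments.

```python
# Minimal sketch: count how many of the selected WALS ordering features two
# languages share, the similarity measure plotted against zero-shot POS accuracy.
from typing import Dict

def common_wals_features(lang_a: Dict[str, str], lang_b: Dict[str, str]) -> int:
    """Languages are dicts mapping WALS feature IDs (e.g., '81A') to values."""
    shared_features = set(lang_a) & set(lang_b)
    return sum(1 for f in shared_features if lang_a[f] == lang_b[f])

# Hypothetical feature values, for illustration only.
english = {"81A": "SVO", "85A": "Prepositions", "87A": "Adjective-Noun"}
japanese = {"81A": "SOV", "85A": "Postpositions", "87A": "Adjective-Noun"}
print(common_wals_features(english, japanese))  # -> 1
```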
Code switching and transliteration
Code-switching (CS)-the mixing of multiple languages within a single utterance-and transliteration-writing that is not in the language's standard script-present unique test cases for M-BERT, which is pre-trained on monolingual, standard-script corpora. Generalizing to code-switching is similar to other cross-lingual transfer scenarios, but would benefit to an even larger degree from a shared multilingual representation. Likewise, generalizing to transliterated text is similar to other cross-script transfer experiments, but has the additional caveat that M-BERT was not pre-trained on text that looks like the target.
We test M-BERT on the CS Hindi/English UD corpus from Bhat et al. (2018), which provides texts in two formats: transliterated, where Hindi words are written in Latin script, and corrected, where annotators have converted them back to Devanagari script. Table 6 shows the results for models fine-tuned using a combination of monolingual Hindi and English, and using the CS training set (both fine-tuning on the script-corrected version of the corpus as well as the transliterated version).

Table 6: M-BERT's POS accuracy on the code-switched Hindi/English dataset from Bhat et al. (2018), on script-corrected and original (transliterated) tokens, and comparisons to existing work on code-switch POS.

For script-corrected inputs, i.e., when Hindi is written in Devanagari, M-BERT's performance when trained only on monolingual corpora is comparable to performance when training on code-switched data, and it is likely that some of the remaining difference is due to domain mismatch. This provides further evidence that M-BERT uses a representation that is able to incorporate information from multiple languages.
However, M-BERT is not able to effectively transfer to a transliterated target, suggesting that it is the language model pre-training on a particular language that allows transfer to that language. M-BERT is outperformed by previous work in both the monolingual-only and code-switched supervision scenarios. Neither Ball and Garrette (2018) nor Bhat et al. (2018) use contextualized word embeddings, but both incorporate explicit transliteration signals into their approaches.
Multilingual characterization of the feature space
In this section, we study the structure of M-BERT's feature space. If it is multilingual, then the transformation mapping between the same sentence in 2 languages should not depend on the sentence itself, just on the language pair.
Experimental Setup
We sample 5000 pairs of sentences from WMT16 (Bojar et al., 2016) and feed each sentence (separately) to M-BERT with no fine-tuning. We then extract the hidden feature activations at each layer for each of the sentences, and average the representations for the input tokens except [CLS] and [SEP], to get a vector for each sentence at each layer $l$, $v^{(l)}_{\mathrm{LANG}_i}$. For each pair of sentences, e.g. $(v^{(l)}_{\mathrm{EN}_i}, v^{(l)}_{\mathrm{DE}_i})$, we compute the vector pointing from one to the other and average it over all pairs:
$$\bar{v}^{(l)}_{\mathrm{EN}\to\mathrm{DE}} = \frac{1}{M}\sum_i \left( v^{(l)}_{\mathrm{DE}_i} - v^{(l)}_{\mathrm{EN}_i} \right),$$
where $M$ is the number of pairs. Finally, we translate each sentence, $v^{(l)}_{\mathrm{EN}_i}$, by $\bar{v}^{(l)}_{\mathrm{EN}\to\mathrm{DE}}$, find the closest German sentence vector,8 and measure the fraction of times the nearest neighbour is the correct pair, which we call the "nearest neighbor accuracy".
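A minimal sketch of this nearest-neighbor accuracy computation is given below, assuming the per-layer sentence vectors have already been extracted; the random example vectors are placeholders for actual M-BERT activations.

```python
# Minimal sketch: average the EN->DE offset over all pairs, translate each
# English vector by it, and count how often the nearest German vector is the
# correct pair.
import numpy as np

def nearest_neighbor_accuracy(v_en: np.ndarray, v_de: np.ndarray) -> float:
    """v_en, v_de: (M, d) arrays of aligned sentence vectors at one layer."""
    v_mean = (v_de - v_en).mean(axis=0)          # mean translation vector
    translated = v_en + v_mean                   # translate each EN sentence
    # Pairwise L2 distances between translated EN vectors and all DE vectors.
    dists = np.linalg.norm(translated[:, None, :] - v_de[None, :, :], axis=-1)
    nearest = dists.argmin(axis=1)
    return float((nearest == np.arange(len(v_en))).mean())

# Example with random vectors (real vectors would come from M-BERT layers).
rng = np.random.default_rng(0)
en = rng.normal(size=(100, 768))
de = en + rng.normal(scale=0.1, size=(100, 768)) + 1.0  # shifted "translations"
print(nearest_neighbor_accuracy(en, de))
```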
Results
In Figure 3, we plot the nearest neighbor accuracy for EN-DE (solid line). It achieves over 50% accuracy for all but the bottom layers, 9 which seems to imply that the hidden representations, although separated in space, share a common subspace that represents useful linguistic information, in a language-agnostic way. Similar curves are obtained for EN-RU, and UR-HI (in-house dataset), showing this works for multiple languages.
As to the reason why the accuracy goes down in the last few layers, one possible explanation is that since the model was pre-trained for language modeling, it might need more language-specific information to correctly predict the missing word.
Conclusion
In this work, we showed that M-BERT's robust, often surprising, ability to generalize cross-lingually is underpinned by a multilingual representation, without being explicitly trained for it. The model handles transfer across scripts and to code-switching fairly well, but effective transfer to typologically divergent and transliterated targets will likely require the model to incorporate an explicit multilingual training objective, such as that used by Lample and Conneau (2019) or Artetxe and Schwenk (2018).
As to why M-BERT generalizes across languages, we hypothesize that having word pieces used in all languages (numbers, URLs, etc) which have to be mapped to a shared space forces the co-occurring pieces to also be mapped to a shared space, thus spreading the effect to other word pieces, until different languages are close to a shared space.
It is our hope that these kinds of probing experiments will help steer researchers toward the most promising lines of inquiry by encouraging them to focus on the places where current contextualized word representation approaches fall short.
A Model Parameters
All models were fine-tuned with a batch size of 32, and a maximum sequence length of 128 for 3 epochs. We used a learning rate of 3e-5 with learning rate warmup during the first 10% of steps, and linear decay afterwards. We also applied 10% dropout on the last layer. No parameter tuning was performed. We used the BERT-Base, Multilingual Cased checkpoint from https://github.com/google-research/bert.

Table 7: NER results on the CoNLL test sets for EN-BERT. The row is the fine-tuning language, the column the evaluation language. There is a big gap between this model's zero-shot performance and M-BERT's, showing that the pre-training is helping in cross-lingual transfer.

Table 8: POS accuracy on the UD test sets for a subset of European languages using EN-BERT. The row specifies a fine-tuning language, the column the evaluation language. There is a big gap between this model's zero-shot performance and M-BERT's, showing the pre-training is helping learn a useful cross-lingual representation for grammar.
B CoNLL Results for EN-BERT
C Some POS Results for EN-BERT
Figure 2: Zero-shot POS accuracy versus number of common WALS features. Due to their scarcity, we exclude pairs with no common features.

Table 5 data:
(a) Subject/verb/object order:
       SVO     SOV
SVO   81.55   66.52
SOV   63.98   64.22

(b) Adjective/noun order:
       AN      NA
AN    73.29   70.94
NA    75.10   79.64
Figure 3: Accuracy of nearest neighbor translation for EN-DE, EN-RU, and HI-UR.
Table 1: NER F1 results on the CoNLL data.
Table 2 shows M-BERT zero-shot results for four European languages. We see that M-BERT generalizes well across languages, achieving over 80% accuracy for all pairs.

2 Arabic, Bengali, Czech, German, English, Spanish, French, Hindi, Indonesian, Italian, Japanese, Korean, Portuguese, Russian, Turkish, and Chinese.

3 Arabic, Bulgarian, Catalan, Czech, Danish, German, Greek, English, Spanish, Estonian, Basque, Persian, Finnish, French, Galician, Hebrew, Hindi, Croatian, Hungarian, Indonesian, Italian, Japanese, Korean, Latvian, Marathi, Dutch, Norwegian (Bokmaal and Nynorsk), Polish, Portuguese (European and Brazilian), Romanian, Russian, Slovak, Slovenian, Swedish, Tamil, Telugu, Turkish, Urdu, and Chinese.
Table 3: NER F1 results fine-tuning and evaluating on the same language (not zero-shot transfer).
Table 4: POS accuracy on the UD test set for languages with different scripts. Row = fine-tuning, column = evaluation.
Table 5: Macro-average POS accuracies when transferring between SVO/SOV languages or AN/NA languages. Row = fine-tuning, column = evaluation.
Table 7 data (EN-BERT NER on CoNLL test sets; row = fine-tuning language, column = evaluation language):
Fine-tuning \ Eval    EN      DE      NL      ES
EN                   91.07   24.38   40.62   49.99
DE                   55.36   73.32   54.84   50.80
NL                   59.36   27.57   84.23   53.15
ES                   55.09   26.13   48.75   81.84
Table 8 data (EN-BERT POS on UD test sets; row = fine-tuning language, column = evaluation language):
Fine-tuning \ Eval    EN      DE      ES      IT
EN                   96.94   38.31   50.38   46.07
DE                   28.62   92.63   30.23   25.59
ES                   28.78   46.15   94.36   71.50
IT                   52.48   48.08   76.51   96.41
https://github.com/google-research/bert
Results on CoNLL data follow the same trends, but those trends are more apparent with 16 languages than with 4.
8 In terms of L2 distance. 9 Our intuition is that the lower layers have more "token level" information, which is more language dependent, particularly for languages that share few word pieces.
Acknowledgements
We would like to thank Mark Omernick, Livio Baldini Soares, Emily Pitler, Jason Riesa, and Slav Petrov for the valuable discussions and feedback.
6 81A (Order of Subject, Object and Verb), 85A (Order of Adposition and Noun), 86A (Order of Genitive and Noun), 87A (Order of Adjective and Noun), 88A (Order of Demonstrative and Noun), and 89A (Order of Numeral and Noun).

7 SVO languages: Bulgarian, Catalan, Czech, Danish, English, Spanish, Estonian, Finnish, French, Galician, Hebrew, Croatian, Indonesian, Italian, Latvian, Norwegian (Bokmaal and Nynorsk), Polish, Portuguese (European and Brazilian), Romanian, Russian, Slovak, Slovenian, Swedish, and Chinese. SOV languages: Basque, Farsi, Hindi, Japanese, Korean, Marathi, Tamil, Telugu, Turkish, and Urdu.
Mikel Artetxe and Holger Schwenk. 2018. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. arXiv preprint arXiv:1812.10464.

Kelsey Ball and Dan Garrette. 2018. Part-of-speech tagging for code-switched, transliterated texts without explicit language identification. In Proceedings of EMNLP.

Irshad Bhat, Riyaz A. Bhat, Manish Shrivastava, and Dipti Sharma. 2018. Universal dependency parsing for Hindi-English code-switching. In Proceedings of NAACL.

Ondřej Bojar, Yvette Graham, Amir Kamran, and Miloš Stanojević. 2016. Results of the WMT16 metrics shared task. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL.

Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL.

Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.

Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of ACL.

Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan T. McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of LREC.

Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018a. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of EMNLP.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018b. Deep contextualized word representations. In Proceedings of NAACL.

Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In Proceedings of CoNLL.

Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of CoNLL.

Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of ACL.

Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? Probing for sentence structure in contextualized word representations. In Proceedings of ICLR.

Daniel Zeman, Martin Popel, Milan Straka, Jan Hajic, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinkova, Jan Hajic jr., Jaroslava Hlavacova, Václava Kettnerová, Zdenka Uresova, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria de Paiva, Kira Droganova, Héctor Martínez Alonso, Çağrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonca, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. CoNLL 2017 shared task: Multilingual parsing from raw text to universal dependencies. In Proceedings of CoNLL.
| [
"https://github.com/google-research/bert"
] |
[
"UNIREX: A Unified Learning Framework for Language Model Rationale Extraction",
"UNIREX: A Unified Learning Framework for Language Model Rationale Extraction"
] | [
"Aaron Chan chanaaro@usc.edu \nUniversity of Southern California\n\n",
"Maziar Sanjabi maziars@fb.com \nMetaAI\n",
"Lambert Mathias mathiasl@fb.com \nMetaAI\n",
"Liang Tan liangtan@fb.com \nMetaAI\n",
"Shaoliang Nie snie@fb.com \nMetaAI\n",
"Xiaochang Peng xiaochang@fb.com \nMetaAI\n",
"Xiang Ren xiangren@usc.edu \nUniversity of Southern California\n\n",
"Hamed Firooz mhfirooz@fb.com \nMetaAI\n"
] | [
"University of Southern California\n",
"MetaAI",
"MetaAI",
"MetaAI",
"MetaAI",
"MetaAI",
"University of Southern California\n",
"MetaAI"
] | [] | An extractive rationale explains a language model's (LM's) prediction on a given task instance by highlighting the text inputs that most influenced the prediction. Ideally, rationale extraction should be faithful (reflective of LM's actual behavior) and plausible (convincing to humans), without compromising the LM's (i.e., task model's) task performance. Although attribution algorithms and select-predict pipelines are commonly used in rationale extraction, they both rely on certain heuristics that hinder them from satisfying all three desiderata. In light of this, we propose UNIREX, a flexible learning framework which generalizes rationale extractor optimization as follows: (1) specify architecture for a learned rationale extractor; (2) select explainability objectives (i.e., faithfulness and plausibility criteria); and (3) jointly train the task model and rationale extractor on the task using selected objectives. UNIREX enables replacing prior works' heuristic design choices with a generic learned rationale extractor in (1) and optimizing it for all three desiderata in (2)-(3). To facilitate comparison between methods w.r.t. multiple desiderata, we introduce the Normalized Relative Gain (NRG) metric. Across five English text classification datasets, our best UNIREX configuration outperforms the strongest baselines by an average of 32.9% NRG. Plus, we find that UNIREXtrained rationale extractors' faithfulness can even generalize to unseen datasets and tasks. | 10.18653/v1/2022.bigscience-1.5 | [
"https://www.aclanthology.org/2022.bigscience-1.5.pdf"
] | 245,218,726 | 2112.08802 | cff8df2dd6280102908776c929fac7b0642bac8a |
UNIREX: A Unified Learning Framework for Language Model Rationale Extraction
May 27, 2022
Aaron Chan chanaaro@usc.edu
University of Southern California
Maziar Sanjabi maziars@fb.com
MetaAI
Lambert Mathias mathiasl@fb.com
MetaAI
Liang Tan liangtan@fb.com
MetaAI
Shaoliang Nie snie@fb.com
MetaAI
Xiaochang Peng xiaochang@fb.com
MetaAI
Xiang Ren xiangren@usc.edu
University of Southern California
Hamed Firooz mhfirooz@fb.com
MetaAI
UNIREX: A Unified Learning Framework for Language Model Rationale Extraction
May 27, 2022. Proceedings of BigScience Episode #5 - Workshop on Challenges & Perspectives in Creating Large Language Models, pages 51-67
An extractive rationale explains a language model's (LM's) prediction on a given task instance by highlighting the text inputs that most influenced the prediction. Ideally, rationale extraction should be faithful (reflective of LM's actual behavior) and plausible (convincing to humans), without compromising the LM's (i.e., task model's) task performance. Although attribution algorithms and select-predict pipelines are commonly used in rationale extraction, they both rely on certain heuristics that hinder them from satisfying all three desiderata. In light of this, we propose UNIREX, a flexible learning framework which generalizes rationale extractor optimization as follows: (1) specify architecture for a learned rationale extractor; (2) select explainability objectives (i.e., faithfulness and plausibility criteria); and (3) jointly train the task model and rationale extractor on the task using selected objectives. UNIREX enables replacing prior works' heuristic design choices with a generic learned rationale extractor in (1) and optimizing it for all three desiderata in (2)-(3). To facilitate comparison between methods w.r.t. multiple desiderata, we introduce the Normalized Relative Gain (NRG) metric. Across five English text classification datasets, our best UNIREX configuration outperforms the strongest baselines by an average of 32.9% NRG. Plus, we find that UNIREXtrained rationale extractors' faithfulness can even generalize to unseen datasets and tasks.
Introduction
Large neural language models (LMs) have yielded state-of-the-art performance on various natural language processing (NLP) tasks (Devlin et al., 2018; Liu et al., 2019). However, LMs' complex reasoning processes are notoriously opaque (Rudin, 2019), posing concerns about the societal implications of using LMs for high-stakes decision-making (Bender et al., 2021). Thus, explaining LMs' behavior is crucial for promoting trust, ethics, and safety in NLP systems (Doshi-Velez and Kim, 2017; Lipton, 2018). Given a LM's (i.e., task model's) predicted label on a text classification instance, an extractive rationale is a type of explanation that highlights the tokens that most influenced the model to predict that label (Luo et al., 2021). Ideally, rationale extraction should be faithful (Ismail et al., 2021; Jain et al., 2020) and plausible (DeYoung et al., 2019), without hurting the LM's task performance (DeYoung et al., 2019) (Fig. 1).

* Work done while AC was a research intern at Meta AI.

Configuring the rationale extractor and its training can greatly impact these desiderata, yet prior works have commonly adopted two suboptimal heuristics. First, many works rely in some way on attribution algorithms (AAs), which extract rationales via handcrafted functions (Sundararajan et al., 2017; Ismail et al., 2021; Situ et al., 2021). AAs cannot be directly trained and tend to be compute-intensive (Bastings and Filippova, 2020). Also, AAs can be a bottleneck for plausibility, as producing human-like rationales is a complex objective requiring high-capacity rationale extractors (Narang et al., 2020; DeYoung et al., 2019). Second, many works use a specialized select-predict pipeline (SPP), where a predictor module is trained to solve the task using only tokens chosen by a selector module (Jain et al., 2020; Yu et al., 2021; Paranjape et al., 2020). Instead of faithfulness optimization, SPPs heuristically aim for "faithfulness by construction" by treating the selected tokens as a rationale for the predictor's output (which depends only on those tokens). Still, SPPs typically have worse task performance than vanilla LMs, since SPPs hide the full input from the predictor.
To tackle this challenge, we propose the UNIfied Learning Framework for Rationale EXtraction (UNIREX), which generalizes rationale extractor optimization as follows: (1) specify architecture for a learned rationale extractor; (2) select explainability objectives (i.e., faithfulness and plausibility criteria); and (3) jointly train the task model and rationale extractor on the task using selected objectives (Sec. 3). UNIREX enables replacing prior works' heuristic design choices in (1) with a generic learned rationale extractor and optimizing it for all three desiderata in (2)-(3).
UNIREX provides significant flexibility in performing (1)-(3). For (1), any model architecture is applicable, but we study Transformer LM based rationale extractors in this work (Zaheer et al., 2020;DeYoung et al., 2019). We focus on two architectures: (A) Dual LM, where task model and rationale extractor are separate and (B) Shared LM, where task model and rationale extractor share parameters. For (2), any faithfulness and plausibility criteria can be used. Following DeYoung et al. (2019), we focus on comprehensiveness and sufficiency as faithfulness criteria, while using similarity to gold rationales as plausibility criteria. For (3), trade-offs between the three desiderata can be easily managed during rationale extractor optimization by setting arbitrary loss weights for the faithfulness and plausibility objectives. Plus, though computing the faithfulness criteria involves discrete (nondifferentiable) token selection, using Shared LM can approximate end-to-end training and enable both task model and rationale extractor to be optimized w.r.t. all three desiderata (Sec. 3.3).
To evaluate all three desiderata in aggregate, we introduce the Normalized Relative Gain (NRG) metric. Across five English text classification datasets (SST, Movies, CoS-E, MultiRC, and e-SNLI; Carton et al., 2020; DeYoung et al., 2019), our best UNIREX configuration outperforms the strongest baselines by an average of 32.9% NRG (Sec. 4.2), showing that UNIREX can optimize rationale extractors for all three desiderata. In addition, we verify our UNIREX design choices via extensive ablation studies (Sec. 4.3). Furthermore, UNIREX-trained extractors have high generalization power, yielding high plausibility with minimal gold rationale supervision (Sec. 4.4) and high faithfulness on unseen datasets and tasks (Sec. 4.5).
Finally, our user study shows that humans judge UNIREX rationales as more plausible than rationales extracted using other methods (Sec. 4.6).
Problem Formulation
Rationale Extraction Let F task = f task (f enc (·)) be a task model for M -class text classification (Sec. A.1), where f enc is the text encoder and f task is the task output head. Typically, F task has a BERT-style architecture (Devlin et al., 2018), in which f enc is a Transformer (Vaswani et al., 2017) while f task is a linear layer with softmax classifier. Let x i = [x t i ] n t=1 be the n-token input sequence (e.g., a sentence) for task instance i, and F task (x i ) ∈ R M be the logit vector for the output of the task model. Let ŷ i = arg max j F task (x i ) j be the class predicted by F task . Given F task , x i , and ŷ i , the goal of rationale extraction is to output vector s i = [s t i ] n t=1 ∈ R n , such that each s t i ∈ R is an importance score indicating how much token x t i influenced F task to predict class ŷ i . Let F ext be a rationale extractor, such that s i = F ext (F task , x i , ŷ i ). F ext can be a learned or heuristic function. In practice, the final rationale is often obtained by binarizing s i as r i ∈ {0, 1} n , via the top-k% strategy: r t i = 1 if s t i is one of the top-k% scores in s i ; otherwise, r t i = 0 (DeYoung et al., 2019; Jain et al., 2020; Pruthi et al., 2020; Chan et al., 2021). For top-k%, let r (k) i be the "important" (i.e., ones) tokens in r i , when using 0 ≤ k ≤ 100.
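For concreteness, the top-k% binarization strategy can be implemented as in the following sketch (illustrative only; names are not from any released implementation).

```python
# Minimal sketch of top-k% rationale binarization.
import numpy as np

def topk_binarize(scores: np.ndarray, k: float) -> np.ndarray:
    """Return r in {0,1}^n with ones on the top-k% highest importance scores."""
    n = len(scores)
    num_keep = max(1, int(round(n * k / 100.0)))
    keep_idx = np.argsort(-scores)[:num_keep]
    rationale = np.zeros(n, dtype=int)
    rationale[keep_idx] = 1
    return rationale

scores = np.array([0.1, 0.9, 0.3, 0.8, 0.05])
print(topk_binarize(scores, k=40))  # -> [0 1 0 1 0]
```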
Faithfulness means how well a rationale reflects F task 's true reasoning process for predicting ŷ i (Jacovi and Goldberg, 2020). Hence, faithfulness metrics measure how much the r (k) i tokens impact p ŷi (x i ), which denotes F task 's confidence probability for ŷ i when using x i as input (DeYoung et al., 2019; Shrikumar et al., 2017; Hooker et al., 2018; Pruthi et al., 2020). Comprehensiveness (comp) measures the change in p ŷi when r (k) i is removed from the input: comp = p ŷi (x i ) − p ŷi (x i \ r (k) i ). Sufficiency (suff) measures the change in p ŷi when only r (k) i is kept in the input: suff = p ŷi (x i ) − p ŷi (r (k) i ). High faithfulness is signaled by high comp and low suff.
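The comp and suff metrics can be sketched as below, assuming a hypothetical predict_proba function that returns the task model's class probabilities for a (possibly reduced) token sequence.

```python
# Minimal sketch of the comprehensiveness/sufficiency metrics.
def comp_suff(tokens, rationale, predicted_class, predict_proba):
    """rationale: binary mask over tokens (1 = important).
    predict_proba: hypothetical callable returning class probabilities."""
    keep = [t for t, r in zip(tokens, rationale) if r == 1]
    drop = [t for t, r in zip(tokens, rationale) if r == 0]
    p_full = predict_proba(tokens)[predicted_class]
    comp = p_full - predict_proba(drop)[predicted_class]   # remove rationale tokens
    suff = p_full - predict_proba(keep)[predicted_class]   # keep only rationale tokens
    return comp, suff
```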
Plausibility means how convincing a rationale is to humans (Jacovi and Goldberg, 2020). This can be measured by automatically computing the similarity between F ext 's rationales (either s i or r i ) and human-annotated gold rationales (DeYoung et al., 2019), or by asking human annotators to rate whether F ext 's rationales make sense for predictingŷ i (Strout et al., 2019;Doshi-Velez and Kim, 2017). Typically, a gold rationale is a binary vector r * i ∈ {0, 1} n , where ones/zeros indicate important/unimportant tokens (Lei et al., 2016).
Task Performance, w.r.t. rationale extraction, concerns how much F task 's task performance (on test set) drops when F task is trained with explainability objectives (i.e., faithfulness, plausibility) for F ext . As long as F task is trained with non-task losses, F task 's task performance can be affected.
UNIREX
Given task model F task , UNIREX generalizes rationale extractor optimization as follows: (1) choose architecture for a learned rationale extractor F ext ; (2) select explainability objectives (i.e., faithfulness loss L faith and plausibility loss L plaus ); and (3) jointly train F task and F ext using L task (task loss), L faith , and L plaus . UNIREX training consists of two backpropagation paths (Fig. 2). The first path is used to update F task w.r.t. L task and L faith . Whereas L task is computed w.r.t. the task target y i , L faith is computed only using the task input x i and the top-k% important tokens r (k) i (obtained via F ext ), based on some combination of comp and suff (Sec. 2). The second path is used to update F ext w.r.t. L plaus , which encourages importance scores s i to approximate gold rationale r * i . Thus, UNIREX frames rationale extraction as the following optimization problem:
$$\min_{F_{\text{task}},\, F_{\text{ext}}} \;\; \mathcal{L}_{\text{task}}(x_i, y_i; F_{\text{task}}) \;+\; \alpha_f\, \mathcal{L}_{\text{faith}}(x_i, r^{(k)}_i; F_{\text{task}}) \;+\; \alpha_p\, \mathcal{L}_{\text{plaus}}(x_i, r^{*}_i; F_{\text{ext}}), \quad (1)$$
where α f and α p are loss weights. If F task and F ext share parameters, then the shared parameters will be optimized w.r.t. all losses. During inference, for task input x i , we first use F task to predict y i , then use F ext to output a rationale r i for F task 's predictionŷ i . Below, we discuss options for the rationale extractor and explainability objectives.
Rationale Extractor
In UNIREX, F ext is a learned function by default. Learned F ext can be any model that transforms x t i into s t i . Given their success in NLP explainability (DeYoung et al., 2019), we focus on pre-trained Transformer LMs and highlight two architectures: Dual LM (DLM) and Shared LM (SLM) (Fig. 3). For DLM, F task and F ext are two separate Transformer LMs. DLM provides more dedicated capacity for F ext , which can help F ext output plausible rationales. For SLM, F task and F ext are two Transformer LMs sharing encoder f enc , while F ext has its own output head f ext . SLM leverages multitask learning between F task and F ext , which can improve faithfulness since F ext gets more information about F task 's reasoning process. Unlike heuristic F ext (Sec. A.2), learned F ext can be optimized for faithfulness/plausibility, but cannot be used out of the box without training. Learned F ext is preferred if: (A) optimizing for both faithfulness and plausibility, and (B) gold rationales are available for plausibility optimization (Sec. A.3).
Explainability Objectives
After selecting F ext , we specify the explainability objectives, which can be any combination of faithfulness and plausibility criteria. In prior approaches (e.g., AA, SPPs), the rationale extractor is not optimized for both faithfulness and plausibility, but UNIREX makes this possible. For any choice of learned F ext , UNIREX lets us easily "plug and play" different criteria and loss weights, based on our needs and domain knowledge, to find those that best balance the rationale extraction desiderata.
Faithfulness Evaluating rationale faithfulness is still an open problem with many existing metrics, and UNIREX is not tailored for any specific metric. Still, given the prevalence of comp/suff (Sec. 2), we focus on comp/suff based objectives.
Recall that comp measures the importance of tokens in r (k) i as how p ŷi (x i ), F task 's predicted probability for class ŷ i , changes when those tokens are removed from x i . Intuitively, we want p ŷi (x i ) to be higher than p ŷi (x i \ r (k) i ), so higher comp is better. Since comp is defined for a single class' probability rather than the label distribution, we can define the comp loss L comp via cross-entropy loss L CE , as in the following difference criterion for L comp :

$$\mathcal{L}_{\text{comp-diff}} = \mathcal{L}_{\text{CE}}(F_{\text{task}}(x_i), y_i) - \mathcal{L}_{\text{CE}}(F_{\text{task}}(x_i \backslash r^{(k)}_i), y_i) \quad (2)$$
$$\mathcal{L}_{\text{CE}}(F_{\text{task}}(x_i), y_i) = -y_i \log(F_{\text{task}}(x_i)) \quad (3)$$
For training stability, we compute comp loss for target class y i here instead of F task 's predicted class ŷ i , since ŷ i is a moving target during training. Using L comp-diff , it is possible for L CE (F task (x i \ r (k) i ), y i ) to become much larger than L CE (F task (x i ), y i ), leading to arbitrarily negative losses. To avoid this, we can add margin m c to the loss function, giving the margin criterion:

$$\mathcal{L}_{\text{comp-margin}} = \max\!\left(-m_c,\; \mathcal{L}_{\text{CE}}(F_{\text{task}}(x_i), y_i) - \mathcal{L}_{\text{CE}}(F_{\text{task}}(x_i \backslash r^{(k)}_i), y_i)\right) + m_c \quad (4)$$
Recall that suff measures the importance of tokens in r (k) i as how pŷ i (x i ), F task 's predicted probability for classŷ i , changes when they are the only tokens kept in x i . Based on suff's definition, we want pŷ i (r (k) i ) to be higher than pŷ i (x i ), so lower suff is better. For suff loss L suff , we define the difference and margin criteria analogously with margin m s but the opposite sign (since lower suff is better):
$$\mathcal{L}_{\text{suff-diff}} = \mathcal{L}_{\text{CE}}(F_{\text{task}}(r^{(k)}_i), y_i) - \mathcal{L}_{\text{CE}}(F_{\text{task}}(x_i), y_i) \quad (5)$$
$$\mathcal{L}_{\text{suff-margin}} = \max\!\left(-m_s,\; \mathcal{L}_{\text{CE}}(F_{\text{task}}(r^{(k)}_i), y_i) - \mathcal{L}_{\text{CE}}(F_{\text{task}}(x_i), y_i)\right) + m_s \quad (6)$$
In our experiments, we find that the margin-based comp/suff criteria are effective (Sec. 4.3), though others (e.g., KL Div, MAE) can be used too (Sec. A.4.1). Note that r (k) i is computed via top-k% thresholding (Sec. 2), so we also need to specify a set K of threshold values. We separately compute the comp/suff losses for each k ∈ K, then obtain the final comp/suff losses by averaging over all k values via area-over-precision-curve (AOPC) (DeYoung et al., 2019). To reflect this, we denote the comp and suff losses as L comp,K and L suff,K , respectively. Let α f L faith = α c L comp,K + α s L suff,K , where α c and α s are loss weights.
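The following PyTorch-style sketch illustrates the margin-based comp/suff losses averaged over the threshold set K. It approximates token removal by replacing tokens with a mask id and is a simplified illustration under those assumptions, not the paper's released implementation.

```python
# Minimal sketch of the AOPC-averaged, margin-based faithfulness losses.
import torch
import torch.nn.functional as F

def mask_except(x, keep_idx, mask_id):
    """Keep only the keep_idx tokens; replace all others with mask_id."""
    masked = torch.full_like(x, mask_id)
    masked.scatter_(1, keep_idx, x.gather(1, keep_idx))
    return masked

def mask_only(x, keep_idx, mask_id):
    """Replace the keep_idx (rationale) tokens with mask_id, i.e., remove them."""
    masked = x.clone()
    masked.scatter_(1, keep_idx, torch.full_like(keep_idx, mask_id))
    return masked

def faithfulness_loss(task_model, x, y, scores, mask_id,
                      K=(1, 5, 10, 20, 50), m_c=1.0, m_s=1.0,
                      alpha_c=0.5, alpha_s=0.5):
    """x: (B, n) token ids; y: (B,) labels; scores: (B, n) importance scores."""
    ce_full = F.cross_entropy(task_model(x), y)
    comp_losses, suff_losses = [], []
    for k in K:
        num_keep = max(1, int(x.size(1) * k / 100))
        keep_idx = scores.topk(num_keep, dim=1).indices
        ce_drop = F.cross_entropy(task_model(mask_only(x, keep_idx, mask_id)), y)
        ce_keep = F.cross_entropy(task_model(mask_except(x, keep_idx, mask_id)), y)
        comp_losses.append(torch.clamp(ce_full - ce_drop, min=-m_c) + m_c)  # Eq. (4)
        suff_losses.append(torch.clamp(ce_keep - ce_full, min=-m_s) + m_s)  # Eq. (6)
    return alpha_c * torch.stack(comp_losses).mean() + alpha_s * torch.stack(suff_losses).mean()
```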
Plausibility Plausibility is defined as how convincing a rationale is to humans (Jacovi and Goldberg, 2020), i.e., whether humans would agree the rationale supports the model's prediction. While optimizing for plausibility should ideally involve human-in-the-loop feedback, this is prohibitive. Instead, many works consider gold rationales as a cheaper form of plausibility annotation (DeYoung et al., 2019;Narang et al., 2020;Jain et al., 2020). Thus, if gold rationale supervision is available, then we can optimize for plausibility. With gold rationale r * i for input x i , plausibility optimization entails training F ext to predict binary importance label r * ,t i for each token x t i . This is essentially token classification, so one natural choice for L plaus is the token-level binary cross-entropy (BCE) criterion:
$$\mathcal{L}_{\text{plaus-BCE}} = -\sum_t r^{*,t}_i \log\!\left(F_{\text{ext}}(x^t_i)\right) \quad (7)$$
Besides BCE loss, we can also consider other criteria like sequence-level KL divergence and L1 loss. See Sec. A.4.2 for discussion of these and other plausibility criteria.
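A minimal PyTorch-style sketch of the token-level BCE plausibility criterion (Eq. 7) is shown below; the tensor shapes and the padding mask are assumptions for illustration.

```python
# Minimal sketch of the token-level BCE plausibility loss.
import torch
import torch.nn.functional as F

def plausibility_bce_loss(extractor_logits, gold_rationale, token_mask):
    """
    extractor_logits: (B, n) raw importance logits from F_ext
    gold_rationale:   (B, n) binary gold token labels r*
    token_mask:       (B, n) 1 for real tokens, 0 for padding
    """
    loss = F.binary_cross_entropy_with_logits(
        extractor_logits, gold_rationale.float(), reduction="none")
    return (loss * token_mask).sum() / token_mask.sum().clamp(min=1)
```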
Training and Inference
After setting F ext , L faith , and L plaus , we can move on to training F task and F ext . Since top-k% rationale binarization (Sec. 3.2) is not differentiable, by default, we cannot backpropagate L faith through all of F ext 's parameters. Thus, F task is trained via L task and L faith , while F ext is only trained via L plaus . This means F ext 's rationales r i are indirectly optimized for faithfulness by regularizing F task such that its behavior aligns with r i . The exception is if we are using the SLM variant, where encoder f enc is shared by F task and F ext . In this case, f enc is optimized w.r.t. all losses, f task is optimized w.r.t. L task and L faith , and f ext is optimized w.r.t. L plaus . SLM is a simple way to approximate end-to-end training of F task and F ext . In contrast, past SPPs have used more complex methods like reinforcement learning (Lei et al., 2016) and the reparameterization trick (Bastings et al., 2019), whose training instability can hurt task performance (Jain et al., 2020). Now, we summarize the full learning objective. Given that cross-entropy loss L task = L CE (F task (x i ), y i ) is used to train F task to predict y i , the full learning objective is:
$$\mathcal{L} = \mathcal{L}_{\text{task}} + \alpha_f \mathcal{L}_{\text{faith}} + \alpha_p \mathcal{L}_{\text{plaus}} = \mathcal{L}_{\text{task}} + \alpha_c \mathcal{L}_{\text{comp},K} + \alpha_s \mathcal{L}_{\text{suff},K} + \alpha_p \mathcal{L}_{\text{plaus}}. \quad (8)$$
During inference, we use F task to predict y i , then use F ext to output r i for F task 's predicted labelŷ i .
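Putting the pieces together, one UNIREX training step might look like the following sketch, which reuses the faithfulness and plausibility loss sketches above; the loss weights and mask id are placeholders rather than the paper's tuned values.

```python
# Minimal sketch of one UNIREX training step combining the three losses (Eq. 8).
# Assumes faithfulness_loss and plausibility_bce_loss as defined in the earlier sketches.
import torch
import torch.nn.functional as F

def unirex_step(task_model, extractor, batch, optimizer,
                alpha_c=0.5, alpha_s=0.5, alpha_p=1.0, mask_id=0):
    x, y, gold_r, token_mask = batch
    logits = task_model(x)
    scores = extractor(x)                         # (B, n) importance scores
    l_task = F.cross_entropy(logits, y)
    # Top-k% selection is non-differentiable, so the faithfulness loss does not
    # backpropagate through the extractor's scores (detach); with a Shared LM,
    # the shared encoder still receives gradients via the task model path.
    l_faith = faithfulness_loss(task_model, x, y, scores.detach(), mask_id,
                                alpha_c=alpha_c, alpha_s=alpha_s)
    l_plaus = plausibility_bce_loss(scores, gold_r, token_mask)
    loss = l_task + l_faith + alpha_p * l_plaus
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```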
Experiments
We present empirical results demonstrating UNIREX's effectiveness in managing trade-offs between faithfulness, plausibility, and task performance during rationale extractor optimization. To aggregately evaluate multiple desiderata, we introduce the Normalized Relative Gain (NRG) metric, which is based on the ARG metric from Ye et al. (2021). NRG normalizes raw metrics (e.g., F1, sufficiency) to scores between 0 and 1 (higher is better). Given a set of raw metric scores Z = {z 1 , z 2 , ...} (each from a different method), NRG(z i ) captures z i 's value relative to min(Z) and max(Z). If higher values are better for the given metric (e.g., F1), then we have:
NRG(z i ) = (z i − min(Z)) / (max(Z) − min(Z)). If lower values are better (e.g., sufficiency), then we have: NRG(z i ) = (max(Z) − z i ) / (max(Z) − min(Z)). After computing NRG for multiple raw metrics, we can aggregate them w.r.t. desiderata via averaging. Let FNRG, PNRG, and TNRG be the NRG values for faithfulness, plausibility, and task performance, respectively. Finally, we compute the composite NRG as: CNRG = (FNRG + PNRG + TNRG) / 3.

Results Reporting For all results, we report average over three seeds and the five k values.

Baselines The first category of baselines is attribution algorithms (AAs); we use Integrated Gradients (IG) (Sundararajan et al., 2017). We also experiment with IG for L2E (Situ et al., 2021), which distills knowledge from an AA to an LM. The second category is SPPs: FRESH (Jain et al., 2020) and A2R (Yu et al., 2021). For FRESH, we use a strong variant where IG rationales are directly given to the predictor, rather than output by a trained selector. A2R aims to improve SPP task performance by regularizing the predictor with an attention-based predictor that uses the full input. In addition, we introduce FRESH+P and A2R+P, which augment FRESH and A2R, respectively, with plausibility optimization. The third category is AA-based regularization: SGT (Ismail et al., 2021), which uses a sufficiency-based criterion to optimize for faithfulness. We also consider SGT+P, which augments SGT with plausibility optimization.
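The NRG aggregation introduced above can be sketched as follows; the example scores are invented purely for illustration.

```python
# Minimal sketch of the Normalized Relative Gain (NRG) aggregation.
def nrg(values, higher_is_better=True):
    """Normalize one raw metric across methods to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    if higher_is_better:
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]

def composite_nrg(fnrg, pnrg, tnrg):
    """CNRG is the mean of the per-desideratum NRG scores."""
    return (fnrg + pnrg + tnrg) / 3.0

# Example: three methods' sufficiency scores (lower is better).
print(nrg([0.10, 0.25, 0.40], higher_is_better=False))  # -> [1.0, 0.5, 0.0]
```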
Main Results
Fig. 4-6 display the main results. In Fig. 4/5, we compare the CNRG for all methods and datasets, without/with gold rationales. In both plots, we see that UNIREX variants achieve the best CNRG across all datasets, indicating that they are effective in balancing the three desiderata. In particular, UNIREX (DLM-FP) and UNIREX (SLM-FP) have very high CNRG scores, both yielding more than 30% improvement over the strongest baselines. Fig. 6 compares methods w.r.t. desiderata NRG (i.e., FNRG, PNRG, TNRG). Here, the left/right plots show methods without/with gold rationales. Again, we see that UNIREX variants achieve a good NRG balance of faithfulness, plausibility, and task performance. Meanwhile, many baselines (e.g., AA (IG), A2R, SGT+P) do well on some desiderata but very poorly on others.

Figure 6: NRG Comparison by Desiderata. We show FNRG, PNRG, and TNRG for all methods, averaged over all datasets.
Ablation Studies
We present five ablation studies to validate the effectiveness of our UNIREX design choices. The ablation results are displayed in Table 1. In this table, each of the five sections shows results for a different ablation. Thus, all numbers within the same section and column are comparable.
Extractor Type In the Ext Type (F) section, we compare four heuristic rationale extractors, using AA-F. Rand uses random importance scores, Gold directly uses the gold rationales, Inv uses the inverse of the gold rationales, and IG uses IG. All heuristics yield similar task performance, but IG dominates on all faithfulness metrics. This makes sense because IG is computed using F task 's inputs/parameters/outputs, while the others do not have this information. For plausibility, Gold is the best, Inv is the worst, and Rand and IG are about the same, as none of the heuristics are optimized for plausibility. In the Ext Type (FP) section, we compare four learned rationale extractors. By default, attribution algorithms' dimension scores are pooled into token scores via sum pooling. AA-FP (Sum) uses IG with sum pooling, while AA-FP (MLP) replaces the sum pooler with a MLP-based pooler to increase capacity for plausibility optimization. Task performance for all four methods is similar, AA-FP (Sum) dominates on faithfulness, and DLM-FP and SLM-FP dominate on plausibility. AA-FP (MLP) does not perform as well on faithfulness but slightly improves on plausibility compared to AA-FP (Sum).
Comp/Suff Losses The Comp/Suff Loss section compares different combinations of Comp and Suff losses, using SLM-FP. Note that SLM-FP (Comp+Suff) is equivalent to SLM-FP shown in other tables/sections. As expected, SLM-FP (Comp) does best on Comp, but SLM-FP (Comp+Suff) actually does best on Suff. Meanwhile, SLM-FP (Suff) does second-best on Suff but is much worse on Comp. This shows that Comp and Suff are complementary for optimization.
Suff Criterion The Suff Criterion section compares different Suff criteria, using SLM-FP. SLM-FP (KLDiv) uses the KL divergence criterion, SLM-FP (MAE) uses the MAE criterion, and SLM-FP (Margin) uses the margin criterion. SLM-FP (Margin) is equivalent to SLM-FP in other tables/sections. All criteria yield similar performance and plausibility, while Margin is slightly better on faithfulness.
SLM Extractor Head
The SLM Ext Head section compares different extractor heads, using SLM-FP. Linear is the default choice and uses a linear layer. MLP-2048-2 uses an MLP with two 2048-dim hidden layers. MLP-4096-3 uses an MLP with three 4096-dim hidden layers. All three output head types yield similar performance, but decreasing head capacity yields better faithfulness, while increasing head capacity yields better plausibility. This trades off faithfulness and plausibility, although larger heads will be more compute-intensive.
Gold Rationale Data Efficiency
UNIREX supports arbitrary amounts of gold rationale supervision and allows us to account for data efficiency. In Fig. 7, we compare plausibility (in AUPRC) for γ = [0.5, 1, 5, 10, 20, 100] (i.e., % of train instances with gold rationales). We compare AA (IG) and four UNIREX variants (AA-F, AA-FP, DLM-FP, SLM-FP). AA (IG) and AA-F do not use gold rationales and thus have the same AUPRC for all γ. Standard deviation is shown by the error bands. UNIREX (DLM-FP) and UNIREX (SLM-FP) dominate across all γ values, with AUPRC slowly decreasing as γ decreases. Even at γ = 0.5, they can still achieve high AUPRC. This suggests that UNIREX's gold rationale batching procedure (Sec. A.3) is effective for learning from minimal gold rationale supervision and demonstrates how UNIREX enables us to manage this trade-off. See Sec. A.6 for similar results on CoS-E.
Zero-Shot Faithfulness Transfer
In this experiment, we evaluate whether the faithfulness of UNIREX-trained rationale extractors transfers zero-shot to unseen datasets and tasks.
User Study on Plausibility
Gold rationale based plausibility evaluation is noisy because gold rationales are for the target label, not a model's predicted label. Thus, we conduct two five-annotator user studies (Table 3) to get a better plausibility measurement. Given 50 random test instances from SST, we get the rationales for SGT+P, A2R+P, UNIREX (AA-FP), and UNIREX (DLM-FP), plus the gold rationales. For each instance, we threshold all rationales to have the same number of positive tokens as the gold rationale. The first user study is forward simulation (Hase and Bansal, 2020; Jain et al., 2020). Here, the annotator is given an input and a rationale for some model's prediction, then asked what (binary) sentiment label the model most likely predicted. For forward simulation, we also consider a No Rationale baseline, where no tokens are highlighted. For No Rationale and Gold, the target label is the correct choice. Annotators are also asked to rate their confidence (4point Likert scale) in their answer to this question. The second user study involves giving a subjective rating of how plausible the rationale is (Hase and Bansal, 2020). Here, the annotator is given the input, rationale, and model's predicted label, then asked to rate (5-point Likert scale) how aligned the rationale is with the prediction. In both forward simulation and subjective rating, we find that DLM-FP performs best among all non-oracle methods and even beats Gold on accuracy, further supporting that DLM-FP rationales are plausible. As expected, the fact that Gold does not achieve near-100% accuracy shows the discrepancy between evaluating plausibility based on the target label (i.e., gold rationale similarity) and F task 's predicted label (forward simulation). Meanwhile, SGT+P and AA-FP, which had lower AUPRC/TF1 in our automatic evaluation, also do worse in accuracy/alignment. Also, users found SGT+P and AA-FP rationales harder to understand, as shown by their lower confidence scores. Meanwhile, A2R+P had high AUPRC/TF1, but gets very low accuracy/alignment because A2R+P's predicted label often not the target label, leading to misalignment with its gold-like rationale. A2R+P is a great example of how automatic plausibility evaluation can be misleading. For the accuracy, confidence, and alignment questions, we achieved Fleiss' Kappa (Fleiss, 1971) inter-annotator agreement scores of 0.2456 (fair), 0.1282 (slight), and, 0.1561 (slight), respectively. This lack of agreement shows the difficulty of measuring plausibility. Connection to UNIREX Unlike prior works, UNIREX enables both the task model and rationale extractor to be jointly optimized for faithfulness, plausibility, and task performance. As a result, UNIREX-trained rationale extractors achieve a better balance of faithfulness and plausibility, without compromising the task model's performance. Also, by using a learned rationale extractor, which generally only requires one model forward pass, UNIREX does not have the computational expenses that limit many AAs.
References
A Appendix
A.1 Text Classification
Here, we formalize the text classification problem in more detail.
Let D = {X , Y} N i=1 be a dataset, where X = {x i } N i=1 are the text inputs, Y = {y * i } N i=1
are the labels, and N is the number of instances (x i , y * i ) in D. We also assume D can be partitioned into train set D train , dev set D dev , and test set D test . Let F task = f task (f enc (·)) be a task LM, where f enc is the text encoder, and f task is the task output head. Typically, F task has a BERT-style architecture (Devlin et al., 2018), in which f enc is a Transformer (Vaswani et al., 2017) while f task is a linear layer. Below, we define the sequence classification (SST, Movies, MultiRC, e-SNLI) and multi-choice QA (CoS-E) tasks, which are different types of text classification.
Sequence Classification
In sequence classification, x i is a token sequence (e.g., a single sentence, a pair of sentences), while y * i is the target class for x i . Here, we assume a fixed label space Y = {1, ..., M } of size M , where y * i ∈ Y for all i. Thus, f task outputs a vector of size M , such that F task (x i ) = f task (f enc (x i )) =ŷ i ∈ R M is the logit vector used to classify x i . Givenŷ i = [ŷ i,j ] M j=1 , let y i = arg max jŷi,j be the class predicted by F task . The goal of sequence classification is to learn F task such that y * i = y i , for all (x i , y * i ) (Minaee et al., 2021).
Multi-Choice QA Instead of a fixed label space, multi-choice QA has a different (but fixed-size) set of answer choices per instance. For instance i, let q i be the question (e.g., "A friend is greeting me, what would they say?") and
A i = {a i,j } M j=1
be the corresponding answer choices (e.g., {"say hello", "greet", "associate", "socialize", "smile"}), where M is now the number of answer choices. Define x i,j = q i ⊕ a i,j , where ⊕ denotes concatenation. In multi-choice QA, we have
x i = {x i,j } M j=1 , while y * i ∈ A i
is the correct answer for x i . Thus, f task outputs a scalar, such that F task (x i,j ) = f task (f enc (x i,j )) =ŷ i,j ∈ R is the logit for x i,j . Givenŷ i = [ŷ i,j ] M j=1 , let j ′ = arg max jŷi,j , where y i = a i,j ′ is the answer predicted by F task . The goal of multi-choice QA is to learn F task such that y * i = y i , for all (x i , y * i ) (Talmor et al., 2018).
A.2 Heuristic Rationale Extractors
A heuristic F task is an AA, which can be any handcrafted function that calculates an importance score s t i for each input token x t i (Bastings and Filippova, 2020). AAs are typically gradient-based (Sundararajan et al., 2017;Denil et al., 2014;Lundberg and Lee, 2017;Li et al., 2015) or perturbationbased (Li et al., 2016;Poerner et al., 2018;Kádár et al., 2017) methods. Gradient-based methods compute s t i via the gradient of F task 's outputŷ i w.r.t. x t i , via one or more F task backward passes. Perturbation-based methods measure s t i asŷ i 's change when perturbing (e.g., removing) x t i , via multiple F task forward passes.
AAs can be used out of the box without training and are designed to satisfy certain faithfulnessrelated axiomatic properties (Sundararajan et al., 2017;Lundberg and Lee, 2017). However, AAs' lack of learnable parameters means they cannot be optimized for faithfulness/plausibility. Thus, if F task is trained for explainability using AA-based rationales, then only F task is optimized. Also, faithful AAs tend to be compute-intensive, requiring many F task backward/forward passes per instance (Sundararajan et al., 2017;Lundberg and Lee, 2017;Li et al., 2016).
A.3 Gold Rationale Supervision
If a learned rationale extractor is chosen, UNIREX enables users to specify how much gold rationale supervision to use. Ideally, each train instance would be annotated with a gold rationale. In this case, we could directly minimize the plausibility loss for each train instance. However, since gold rationales can be expensive to annotate, UNIREX provides a special batching procedure for training with limited gold rationale supervision.
Given N train = |D train | train instances, let 0 < γ < 100 be the percentage of train instances with gold rationales, N gold = ⌈(γ/100) · N train⌉ ≥ 1 be the number of train instances with gold rationales, b be the desired train batch size, and β > 1 be a scaling factor. Define D gold ⊆ D train as the set of train instances with gold rationales, where |D gold | = N gold . Note that, if all train instances have gold rationales, then D gold = D train and γ = 100.

Each batch is constructed as follows: (1) randomly sample b gold = max(1, b/β) instances from D gold without replacement, then (2) randomly sample b − b gold instances from D train \ D gold without replacement. This results in a batch with b total train instances, b gold with gold rationales and the rest without. Since N gold is generally small, we only sample from D gold without replacement for a given batch, but not a given epoch. Thus, instances from D gold may appear more than once in the same epoch. However, we do sample from D train \ D gold without replacement for each batch and epoch, so every instance in D train \ D gold appears exactly once per epoch.
After constructing the batch, we compute the plausibility loss for the batch as follows:
$$\sum_{i=1}^{b} \mathbb{1}\!\left[(x_i, y^*_i) \in D_{\text{gold}}\right]\, \mathcal{L}_{\text{plaus}}(F_{\text{ext}}(x_i), r^*_i),$$
where L plaus is the plausibility loss for train instance (x i , y * i ). This function zeroes out the plausibility loss for instances without gold rationales, so that plausibility is only being optimized with respect to instances with gold rationales. However, in our gold rationale data efficiency experiments (Sec. 4.4), we show that it is possible to achieve high plausibility via rationale extractors trained on minimal gold rationale supervision.
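A minimal sketch of this batching procedure is given below (assumed data structures, not the released code); the has_gold mask corresponds to the indicator that zeroes out the plausibility loss for instances without gold rationales.

```python
# Minimal sketch of gold-rationale batch construction: each batch mixes
# b_gold instances with gold rationales and b - b_gold instances without them.
import random

def sample_batch(d_gold, d_nogold, b, beta=2):
    b_gold = max(1, b // beta)
    gold_part = random.sample(d_gold, min(b_gold, len(d_gold)))
    rest = random.sample(d_nogold, b - len(gold_part))
    batch = gold_part + rest
    # 1 where the plausibility loss should be applied, 0 otherwise.
    has_gold = [1] * len(gold_part) + [0] * len(rest)
    return batch, has_gold

# Example with toy instance ids.
batch, has_gold = sample_batch(d_gold=list(range(5)), d_nogold=list(range(5, 100)), b=8)
print(batch, has_gold)
```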
A.4 Explainability Objectives
A.4.1 Faithfulness
Sufficiency In addition to the criteria presented in Sec. 3.2, we consider two other sufficiency loss functions. The first is the KL divergence criterion used in (Ismail et al., 2021), which considers the entire label distribution and is defined as L suff-KL = KL(F task (r (k) i ) || F task (x i )). The second is the mean absolute error (MAE) criterion, which is defined as L suff-MAE = |L CE (F task (r (k) i ), y * i ) − L CE (F task (x i ), y * i )|.
Unlike the difference criterion L suff-diff and margin criterion L suff-margin (Sec. 3.2), the MAE criterion assumes that using r (k) i as input should not yield better task performance than using x i as input. In our experiments, we find that L suff-margin is effective, though others (e.g., KL divergence, MAE) can be used too.
A.4.2 Plausibility
Similar to faithfulness, UNIREX places no restrictions on the choice of plausibility objective. As described in Sec. 3.2, given gold rationale r_i* for input x_i, plausibility optimization entails training F_ext to predict the binary importance label r_i^{*,t} for each token x_i^t. This is essentially binary token classification, so one natural choice for L_plaus is the token-level binary cross-entropy (BCE) criterion: L_plaus-BCE = −Σ_t r_i^{*,t} log(F_ext(x_i^t)) (Sec. 3.2). Another option is the sequence-level KL divergence criterion, which is defined as: L_plaus-KL = KL(F_ext(x_i) || r_i*). Additionally, we can directly penalize F_ext(x_i) in the logit space via a linear loss, defined as:
L_plaus-linear = Φ(r_i*) · F_ext(x_i),
where Φ(u) = −2u + 1 maps positive and negative tokens to −1 and +1, respectively. The linear loss directly pushes the logits corresponding to positive/negative tokens to be higher/lower and increases the margin between them. To prevent linear loss values from becoming arbitrarily negative, we can also lower bound the loss with a margin m_p, yielding: L_plaus-linear-margin = max(−m_p, L_plaus-linear) + m_p.
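A small PyTorch sketch of the BCE and linear/linear-margin plausibility losses follows (the sequence-level KL variant is analogous and omitted; names, shapes, and batch averaging are our assumptions, not the authors' code).

```python
import torch
import torch.nn.functional as F

def plaus_bce(ext_logits, gold):
    """Token-level BCE against binary gold rationale labels r*."""
    return F.binary_cross_entropy_with_logits(ext_logits, gold.float())

def plaus_linear(ext_logits, gold, margin=None):
    """Linear loss in logit space: L = Phi(r*) . F_ext(x), with optional margin m_p."""
    phi = -2.0 * gold.float() + 1.0      # positive tokens -> -1, negative tokens -> +1
    loss = (phi * ext_logits).sum(dim=-1).mean()
    if margin is not None:
        # L_plaus-linear-margin = max(-m_p, L_plaus-linear) + m_p
        loss = torch.clamp(loss, min=-margin) + margin
    return loss
```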
A.5 Implementation Details
LM Architecture While many prior works use BERT (Devlin et al., 2018) as the Transformer LM, BERT is limited to sequences of up to 512 tokens, which is problematic since many datasets (e.g., Movies) contain much longer sequences. Meanwhile, BigBird (Zaheer et al., 2020) is a state-of-the-art Transformer LM designed to handle long input sequences with up to 4096 tokens. Thus, we use BigBird-Base, which is initialized with RoBERTa-Base (Liu et al., 2019), in all of our experiments (i.e., both baselines and UNIREX). We obtain the pre-trained BigBird-Base model from the Hugging Face Transformers library (Wolf et al., 2019). Note that UNIREX is agnostic to the choice of LM architecture, so RNNs, CNNs, and other Transformer LMs are also supported by UNIREX. However, we leave exploration of other LM architectures for future work.
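For reference, a BigBird-Base classifier can be loaded from the Hugging Face Transformers library roughly as below; the checkpoint name is the publicly released google/bigbird-roberta-base, which may differ from the authors' exact setup.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "google/bigbird-roberta-base", num_labels=2)

# BigBird's sparse attention supports long inputs (up to 4096 tokens).
inputs = tokenizer("a long movie review ...", truncation=True,
                   max_length=4096, return_tensors="pt")
logits = model(**inputs).logits
```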
Training Building upon Sec. ??, we discuss additional training details here. We find that α_c = 0.5 and α_s = 0.5 are usually best. For the batching factor β (Sec. A.3), we use 2. For model selection, we choose the model with the best dev performance averaged over three seeds. We can also perform model selection based on dev explainability metrics, but we leave this extended tuning for future work. All experiments are implemented using PyTorch-Lightning (Paszke et al., 2019; Falcon and The PyTorch Lightning team, 2019).
A.7 Additional Empirical Results
In this subsection, we present additional results from our experiments. Besides the aggregated results shown in Sec. 4 of the main text, Tables 4-10 contain more detailed results, using both raw and NRG metrics. Specifically, Tables 4-8 show all raw/NRG results for each dataset, Table 9 shows the ablation results for all raw metrics, and Table 10 includes the zero-shot explainability transfer results for UNIREX (SLM-FP). Generally, the computation of NRG should involve globally aggregating the raw metrics for all available methods, as done in the main results. However, for a number of more focused experiments (Tables 9-10), only a subset of the available methods are considered. Thus, to make the faithfulness results in Tables 9-10 easier to digest, we introduce a metric called Comp-Suff Difference (CSD), which locally aggregates comp and suff as: CSD = comp − suff. Since higher comp and lower suff signal higher faithfulness, higher CSD signals higher faithfulness.
Figure 1: Desiderata of Rationale Extraction. Unlike prior works, UNIREX enables optimizing for all three desiderata.
Figure 2: UNIREX Framework. UNIREX enables jointly optimizing the task model (F_task) and rationale extractor (F_ext), w.r.t. faithfulness (L_faith), plausibility (L_plaus), and task performance (L_task).
Figure 3: Rationale Extractor Types.
Datasets We primarily use SST (Socher et al., 2013; Carton et al., 2020), Movies (Zaidan and Eisner, 2008), CoS-E (Rajani et al., 2019), MultiRC (Khashabi et al., 2018), and e-SNLI (Camburu et al., 2018), all of which have gold rationale annotations. The latter four datasets were taken from the ERASER benchmark (DeYoung et al., 2019).
Metrics We use the metrics from the ERASER explainability benchmark (DeYoung et al., 2019). For faithfulness, we use comprehensiveness (Comp) and sufficiency (Suff), for k = [1, 5, 10, 20, 50] (DeYoung et al., 2019). For plausibility, we use area under precision-recall curve (AUPRC) and token F1 (TF1) to measure similarity to gold rationales (DeYoung et al., 2019; Narang et al., 2020). For task performance, we follow (DeYoung et al., 2019) and (Carton et al., 2020) in using accuracy (SST, CoS-E) and macro F1 (Movies, MultiRC, e-SNLI).
Figure 4: Composite NRG Comparison (w/o Plausibility Optimization). Composite NRG (CNRG) is the mean of the three desiderata NRG scores. For each dataset, we use CNRG to compare methods that do not optimize for plausibility.
Figure 5: Composite NRG Comparison (w/ Plausibility Optimization). Composite NRG (CNRG) is the mean of the three desiderata NRG scores. For each dataset, we use CNRG to compare methods that do optimize for plausibility.
We denote each UNIREX configuration with "([rationale extractor]-[explainability objectives])". F, P, and FP denote faithfulness, plausibility, and faithfulness+plausibility, respectively.
Baselines The first category is AAs, which are not trained: AA (Grad) (Simonyan et al., 2013), AA (Input*Grad) (Denil et al., 2014), AA (DeepLIFT) (Lundberg and Lee, 2017), AA (IG) (Sundararajan et al., 2017).
Figure 7: Gold Rationale Data Efficiency on SST.
Faithfulness Many prior works have tried to improve the faithfulness of extractive rationales through the use of AAs (Bastings and Filippova, 2020). Typically, this involves designing gradient-based (Sundararajan et al., 2017; Denil et al., 2014; Lundberg and Lee, 2017; Li et al., 2015) or perturbation-based (Li et al., 2016; Poerner et al., 2018; Kádár et al., 2017) AAs. However, attribution algorithms cannot be optimized and tend to be compute-intensive (often requiring multiple LM forward/backward passes). Recently, Ismail et al. (2021) addressed the optimization issue by regularizing the task model to yield faithful rationales via the AA, while other works (Situ et al., 2021; Schwarzenberg et al., 2021) addressed the compute cost issue by training an LM (requiring only one forward pass) to mimic an AA's behavior. Another line of work aims to produce faithful rationales by construction, via SPPs (Jain et al., 2020; Yu et al., 2021; Paranjape et al., 2020; Bastings et al., 2019; Yu et al., 2019; Lei et al., 2016). Still, SPPs' faithfulness can only guarantee sufficiency, not comprehensiveness (DeYoung et al., 2019). Also, SPPs generally perform worse than vanilla LMs because they hide much of the original text input from the predictor and are hard to train end-to-end.
Plausibility Existing approaches for improving extractive rationale plausibility typically involve supervising LM-based extractors (Bhat et al., 2021) or SPPs (Jain et al., 2020; Paranjape et al., 2020; DeYoung et al., 2019) with gold rationales. However, existing LM-based extractors have not been trained for faithfulness, while SPPs' faithfulness by construction comes at the great cost of task performance. Meanwhile, more existing works focus on improving the plausibility of free-text rationales (Narang et al., 2020; Lakhotia et al., 2020; Camburu et al., 2018), often with task-specific pipelines (Rajani et al., 2019; Kumar and Talukdar, 2020).
A.6 Gold Rationale Data Efficiency
Fig. ?? shows the gold rationale data efficiency results for CoS-E, using the same setup as Sec. ??. Overall, we see that the CoS-E results are quite similar to the SST results. Again, UNIREX (DLM-FP) and UNIREX (SLM-FP) dominate across all γ values, with AUPRC slowly decreasing as γ decreases. Interestingly, UNIREX (AA-FP) yields a noticeable dip in AUPRC for lower γ values. Since AA-FP has limited capacity (via the task model) for plausibility optimization, it is possible that this fluctuation is due to random noise. We leave further analysis of this for future work.
Figure 8: Gold Rationale Data Efficiency on CoS-E.
Zero-shot explainability transfer results (cf. Table 10). Columns: Method | Perf | CSD | Comp | Suff.
… | …81 (±0.74) | -0.070 (±0.061) | 0.145 (±0.023) | 0.215 (±0.038)
UNIREX (AA-F) | 93.19 (±0.40) | 0.360 (±0.055) | 0.405 (±0.031) | 0.045 (±0.024)
UNIREX (DLM-FP) | 93.81 (±0.18) | 0.151 (±0.056) | 0.319 (±0.090) | 0.167 (±0.036)
UNIREX (SLM-FP) | 93.68 (±0.67) | 0.189 (±0.030) | 0.302 (±0.039) | 0.113 (±0.013)
Yelp
Vanilla | 92.50 (±2.07) | -0.156 (±0.028) | 0.067 (±0.004) | 0.222 (±0.031)
UNIREX (AA-F) | 90.75 (±1.30) | -0.138 (±0.120) | 0.096 (±0.026) | 0.233 (±0.096)
UNIREX (DLM-FP) | 92.37 (±0.46) | 0.169 (±0.060) | 0.265 (±0.094) | 0.097 (±0.033)
UNIREX (SLM-FP) | 86.60 (±1.57) | 0.114 (±0.056) | 0.175 (±0.055) | 0.060 (±0.001)
Amazon
Vanilla | 91.13 (±0.28) | -0.120 (±0.038) | 0.096 (±0.008) | 0.217 (±0.033)
UNIREX (AA-F) | 86.60 (±0.95) | -0.111 (±0.161) | 0.100 (±0.042) | 0.210 (±0.122)
UNIREX (DLM-FP) | 89.35 (±2.22) | 0.133 (±0.039) | 0.232 (±0.072) | 0.098 (±0.033)
UNIREX (SLM-FP) | 81.82 (±7.62) | 0.097 (±0.027) | 0.147 (±0.012) | 0.050 (±0.…)
…
… | …51 (±0.99) | -0.125 (±0.068) | 0.104 (±0.007) | 0.229 (±0.064)
UNIREX (AA-F) | 35.69 (±2.30) | -0.028 (±0.084) | 0.076 (±0.008) | 0.104 (±0.076)
UNIREX (DLM-FP) | 35.52 (±1.26) | 0.053 (±0.012) | 0.140 (±0.049) | 0.087 (±0.045)
UNIREX (SLM-FP) | 38.17 (±0.96) | 0.039 (±0.031) | 0.087 (±0.016) | 0.048 (±0.…)
…
… | …63 (±4.72) | -0.058 (±0.075) | 0.154 (±0.001) | 0.212 (±0.074)
UNIREX (AA-F) | 47.99 (±6.33) | 0.026 (±0.080) | 0.087 (±0.022) | 0.061 (±0.071)
UNIREX (DLM-FP) | 31.97 (±2.80) | 0.047 (±0.017) | 0.149 (±0.052) | 0.102 (±0.053)
UNIREX (SLM-FP) | 17.42 (±4.04) | 0.027 (±0.047) | 0.091 (±0.027) | 0.064 (±0.033)
Recently, comprehensiveness and sufficiency have emerged as popular faithfulness metrics (DeYoung et al., 2019). Comprehensiveness (comp) measures the change in the predicted probability p_ŷi when r_i^(k) is removed from the input x_i.
Table 1: UNIREX Ablation Studies on SST.
Ablation | UNIREX Config | Faithfulness: Comp (↑), Suff (↓) | Plausibility: AUPRC (↑) | Performance: Acc (↑)
Ext Type (F) | AA-F (Rand) | 0.171 (±0.040) | 0.327 (±0.050) | 44.92 (±0.00) | 94.05 (±0.35)
Ext Type (F) | AA-F (Gold) | 0.232 (±0.088) | 0.249 (±0.021) | 100.00 (±0.00) | 93.81 (±0.54)
Ext Type (F) | AA-F (Inv) | 0.242 (±0.010) | 0.357 (±0.019) | 20.49 (±0.00) | 93.47 (±1.81)
Ext Type (F) | AA-F (IG) | 0.292 (±0.051) | 0.171 (±0.038) | 48.13 (±1.14) | 92.97 (±0.44)
Ext Type (FP) | AA-FP (Sum) | 0.296 (±0.067) | 0.185 (±0.048) | 47.60 (±2.44) | 93.25 (±0.45)
Ext Type (FP) | AA-FP (MLP) | 0.285 (±0.051) | 0.197 (±0.100) | 54.82 (±1.97) | 93.23 (±0.92)
Ext Type (FP) | DLM-FP | 0.319 (±0.090) | 0.167 (±0.036) | 85.80 (±0.74) | 93.81 (±0.18)
Ext Type (FP) | SLM-FP | 0.302 (±0.039) | 0.113 (±0.013) | 82.55 (±0.84) | 93.68 (±0.67)
Comp/Suff Loss | SLM-FP (Comp) | 0.350 (±0.048) | 0.310 (±0.049) | 82.79 (±0.62) | 93.59 (±0.11)
Comp/Suff Loss | SLM-FP (Suff) | 0.166 (±0.003) | 0.152 (±0.012) | 83.74 (±0.84) | 94.16 (±0.39)
Comp/Suff Loss | SLM-FP (Comp+Suff) | 0.302 (±0.039) | 0.113 (±0.013) | 82.55 (±0.84) | 93.68 (±0.67)
Suff Criterion | SLM-FP (KL Div) | 0.306 (±0.098) | 0.131 (±0.005) | 82.62 (±0.88) | 93.06 (±0.25)
Suff Criterion | SLM-FP (MAE) | 0.278 (±0.058) | 0.143 (±0.008) | 82.66 (±0.61) | 93.78 (±0.13)
Suff Criterion | SLM-FP (Margin) | 0.302 (±0.039) | 0.113 (±0.013) | 82.55 (±0.84) | 93.68 (±0.67)
SLM Ext Head | SLM-FP (Linear) | 0.302 (±0.039) | 0.113 (±0.013) | 82.55 (±0.84) | 93.68 (±0.67)
SLM Ext Head | SLM-FP (MLP-2048-2) | 0.323 (±0.071) | 0.144 (±0.012) | 83.82 (±0.77) | 93.67 (±0.18)
SLM Ext Head | SLM-FP (MLP-4096-3) | 0.295 (±0.057) | 0.154 (±0.027) | 84.53 (±0.61) | 93.19 (±0.79)
Table 2: Zero-Shot Faithfulness Transfer from SST.
In Table 2, we investigate if F_ext's faithfulness, via UNIREX training on some source dataset, can generalize to unseen target datasets/tasks in a zero-shot setting (i.e., no fine-tuning on target datasets). Plausibility is not evaluated here, since these unseen datasets do not have gold rationales. As the source model, we compare various SST-trained models: AA (IG) and UNIREX (AA-F, DLM-FP). First, we evaluate on unseen datasets for a seen task (sentiment analysis (SA)): Yelp (Zhang et al., 2015) and Amazon (McAuley and Leskovec, 2013). Second, we evaluate on unseen datasets for unseen tasks: Stormfront (hate speech detection (HSD), binary F1) (de Gibert et al., 2018), OffenseEval (offensive speech detection (OSD), macro F1) (Zampieri et al., 2019), and SemEval2018 (irony detection (ID), binary F1) (Van Hee et al., 2018).
Table 3: Plausibility User Study on SST.
Aya Abdelsalam Ismail, Hector Corrada Bravo, and Soheil Feizi. 2021. Improving deep learning interpretability by saliency guided training. Advances in Neural Information Processing Systems, 34.
Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2015. Visualizing and understanding neural models in nlp. arXiv preprint arXiv:1506.01066.
Mo Yu, Yang Zhang, Shiyu Chang, and Tommi Jaakkola. 2021. Understanding interlocking dynamics of cooperative rationalization. Advances in Neural Information Processing Systems, 34.
Jasmijn Bastings, Wilker Aziz, and Ivan Titov. 2019.
Interpretable neural predictions with differentiable
binary variables. arXiv preprint arXiv:1905.08160.
Jasmijn Bastings and Katja Filippova. 2020. The ele-
phant in the interpretability room: Why use atten-
tion as explanation when we have saliency methods?
arXiv preprint arXiv:2010.05607.
Emily M Bender, Timnit Gebru, Angelina McMillan-
Major, and Shmargaret Shmitchell. 2021. On the
dangers of stochastic parrots: Can language models
be too big?. In Proceedings of the 2021 ACM Confer-
ence on Fairness, Accountability, and Transparency,
pages 610-623.
Meghana Moorthy Bhat, Alessandro Sordoni, and Sub-
habrata Mukherjee. 2021. Self-training with few-shot
rationalization: Teacher explanations aid student in
few-shot nlu. arXiv preprint arXiv:2109.08259.
Oana-Maria Camburu, Tim Rocktäschel, Thomas
Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natu-
ral language inference with natural language expla-
nations. arXiv preprint arXiv:1812.01193.
Samuel Carton, Anirudh Rathore, and Chenhao Tan.
2020. Evaluating and characterizing human ratio-
nales. arXiv preprint arXiv:2010.04736.
Aaron Chan, Jiashu Xu, Boyuan Long, Soumya Sanyal,
Tanishq Gupta, and Xiang Ren. 2021. Salkg: Learn-
ing from knowledge graph explanations for common-
sense reasoning. Advances in Neural Information
Processing Systems, 34.
Ona de Gibert, Naiara Perez, Aitor García-Pablos,
and Montse Cuadros. 2018. Hate speech dataset
from a white supremacy forum. arXiv preprint
arXiv:1809.04444.
Misha Denil, Alban Demiraj, and Nando De Freitas.
2014. Extraction of salient sentences from labelled
documents. arXiv preprint arXiv:1412.6815.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2018. Bert: Pre-training of deep
bidirectional transformers for language understand-
ing. arXiv preprint arXiv:1810.04805.
Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani,
Eric Lehman, Caiming Xiong, Richard Socher, and
Byron C Wallace. 2019. Eraser: A benchmark to
evaluate rationalized nlp models. arXiv preprint
arXiv:1911.03429.
Finale Doshi-Velez and Been Kim. 2017. Towards a
rigorous science of interpretable machine learning.
arXiv preprint arXiv:1702.08608.
William Falcon and The PyTorch Lightning team. 2019.
PyTorch Lightning.
Joseph L Fleiss. 1971. Measuring nominal scale agree-
ment among many raters. Psychological bulletin,
76(5):378.
Peter Hase and Mohit Bansal. 2020. Evaluating ex-
plainable ai: Which algorithmic explanations help
users predict model behavior?
arXiv preprint
arXiv:2005.01831.
Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans,
and Been Kim. 2018. A benchmark for interpretabil-
ity methods in deep neural networks. arXiv preprint
arXiv:1806.10758.
Alon Jacovi and Yoav Goldberg. 2020. Towards faith-
fully interpretable nlp systems: How should we
define and evaluate faithfulness? arXiv preprint
arXiv:2004.03685.
Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, and
Byron C Wallace. 2020.
Learning to faith-
fully rationalize by construction. arXiv preprint
arXiv:2005.00115.
Akos Kádár, Grzegorz Chrupała, and Afra Alishahi. 2017. Representation of linguistic form and function in recurrent neural networks. Computational Linguistics, 43(4):761-780.
Daniel Khashabi, Snigdha Chaturvedi, Michael Roth,
Shyam Upadhyay, and Dan Roth. 2018. Looking
beyond the surface: A challenge set for reading com-
prehension over multiple sentences. In Proceedings
of the 2018 Conference of the North American Chap-
ter of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long Pa-
pers), pages 252-262.
Sawan Kumar and Partha Talukdar. 2020. Nile: Natu-
ral language inference with faithful natural language
explanations. arXiv preprint arXiv:2005.12116.
Kushal Lakhotia, Bhargavi Paranjape, Asish Ghoshal,
Wen-tau Yih, Yashar Mehdad, and Srinivasan Iyer.
2020. Fid-ex: Improving sequence-to-sequence mod-
els for extractive rationale generation. arXiv preprint
arXiv:2012.15482.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016.
Rationalizing neural predictions. arXiv preprint
arXiv:1606.04155.
Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Un-
derstanding neural networks through representation
erasure. arXiv preprint arXiv:1612.08220.
Zachary C Lipton. 2018. The mythos of model inter-
pretability: In machine learning, the concept of in-
terpretability is both important and slippery. Queue,
16(3):31-57.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man-
dar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining ap-
proach. arXiv preprint arXiv:1907.11692.
Scott M Lundberg and Su-In Lee. 2017. A unified ap-
proach to interpreting model predictions. In Proceed-
ings of the 31st international conference on neural
information processing systems, pages 4768-4777.
Siwen Luo, Hamish Ivison, Caren Han, and Josiah Poon.
2021. Local interpretations for explainable natu-
ral language processing: A survey. arXiv preprint
arXiv:2103.11072.
Julian McAuley and Jure Leskovec. 2013. Hidden fac-
tors and hidden topics: understanding rating dimen-
sions with review text. In Proceedings of the 7th
ACM conference on Recommender systems, pages
165-172.
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Nar-
jes Nikzad, Meysam Chenaghlu, and Jianfeng Gao.
2021. Deep learning-based text classification: A
comprehensive review. ACM Computing Surveys
(CSUR), 54(3):1-40.
Sharan Narang, Colin Raffel, Katherine Lee, Adam
Roberts, Noah Fiedel, and Karishma Malkan. 2020.
Wt5?! training text-to-text models to explain their
predictions. arXiv preprint arXiv:2004.14546.
Bhargavi Paranjape, Mandar Joshi, John Thickstun,
Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020.
An information bottleneck approach for controlling
conciseness in rationale extraction. arXiv preprint
arXiv:2005.00652.
Adam Paszke, Sam Gross, Francisco Massa, Adam
Lerer, James Bradbury, Gregory Chanan, Trevor
Killeen, Zeming Lin, Natalia Gimelshein, Luca
Antiga, et al. 2019. Pytorch: An imperative style,
high-performance deep learning library. Advances
in neural information processing systems, 32:8026-
8037.
Nina Poerner, Benjamin Roth, and Hinrich Schütze.
2018. Evaluating neural network explanation meth-
ods using hybrid documents and morphological
agreement. arXiv preprint arXiv:1801.06422.
Danish Pruthi, Bhuwan Dhingra, Livio Baldini Soares,
Michael Collins, Zachary C Lipton, Graham Neubig,
and William W Cohen. 2020. Evaluating explana-
tions: How much do explanations from the teacher
aid students? arXiv preprint arXiv:2012.00893.
Nazneen Fatema Rajani, Bryan McCann, Caiming
Xiong, and Richard Socher. 2019. Explain your-
self! leveraging language models for commonsense
reasoning. arXiv preprint arXiv:1906.02361.
Cynthia Rudin. 2019. Stop explaining black box ma-
chine learning models for high stakes decisions and
use interpretable models instead. Nature Machine
Intelligence, 1(5):206-215.
Robert Schwarzenberg, Nils Feldhus, and Sebastian
Möller. 2021. Efficient explanations from empirical
explainers. arXiv preprint arXiv:2103.15429.
Avanti Shrikumar, Peyton Greenside, and Anshul Kun-
daje. 2017. Learning important features through
propagating activation differences. In International
Conference on Machine Learning, pages 3145-3153.
PMLR.
Karen Simonyan, Andrea Vedaldi, and Andrew Zis-
serman. 2013. Deep inside convolutional networks:
Visualising image classification models and saliency
maps. arXiv preprint arXiv:1312.6034.
Xuelin Situ, Ingrid Zukerman, Cecile Paris, Sameen
Maruf, and Gholamreza Haffari. 2021. Learning
to explain: Generating stable explanations fast. In
Proceedings of the 59th Annual Meeting of the Asso-
ciation for Computational Linguistics and the 11th
International Joint Conference on Natural Language
Processing (Volume 1: Long Papers), pages 5340-
5355.
Richard Socher, Alex Perelygin, Jean Wu, Jason
Chuang, Christopher D Manning, Andrew Y Ng, and
Christopher Potts. 2013. Recursive deep models for
semantic compositionality over a sentiment treebank.
In Proceedings of the 2013 conference on empiri-
cal methods in natural language processing, pages
1631-1642.
Julia Strout, Ye Zhang, and Raymond J Mooney. 2019.
Do human rationales improve machine explanations?
arXiv preprint arXiv:1905.13714.
Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017.
Axiomatic attribution for deep networks. In Inter-
national Conference on Machine Learning, pages
3319-3328. PMLR.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and
Jonathan Berant. 2018. Commonsenseqa: A question
answering challenge targeting commonsense knowl-
edge. arXiv preprint arXiv:1811.00937.
Cynthia Van Hee, Els Lefever, and Véronique Hoste.
2018. Semeval-2018 task 3: Irony detection in en-
glish tweets. In Proceedings of The 12th Interna-
tional Workshop on Semantic Evaluation, pages 39-
50.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz
Kaiser, and Illia Polosukhin. 2017. Attention is all
you need. In Advances in neural information pro-
cessing systems, pages 5998-6008.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi, Pier-
ric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz,
et al. 2019. Huggingface's transformers: State-of-
the-art natural language processing. arXiv preprint
arXiv:1910.03771.
Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren.
2021. Crossfit: A few-shot learning challenge for
cross-task generalization in nlp. arXiv preprint
arXiv:2104.08835.
Mo Yu, Shiyu Chang, Yang Zhang, and Tommi S
Jaakkola. 2019. Rethinking cooperative rationaliza-
tion: Introspective extraction and complement con-
trol. arXiv preprint arXiv:1910.13294.
Manzil Zaheer, Guru Guruganesh, Kumar Avinava
Dubey, Joshua Ainslie, Chris Alberti, Santiago On-
tanon, Philip Pham, Anirudh Ravula, Qifan Wang,
Li Yang, et al. 2020. Big bird: Transformers for
longer sequences. In NeurIPS.
Omar Zaidan and Jason Eisner. 2008. Modeling an-
notators: A generative approach to learning from
annotator rationales. In Proceedings of the 2008 con-
ference on Empirical methods in natural language
processing, pages 31-40.
Marcos Zampieri, Shervin Malmasi, Preslav Nakov,
Sara Rosenthal, Noura Farra, and Ritesh Kumar.
2019. Semeval-2019 task 6: Identifying and catego-
rizing offensive language in social media (offenseval).
arXiv preprint arXiv:1903.08983.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.
Character-level Convolutional Networks for Text
Classification. arXiv:1509.01626 [cs].
Table 4: Main Results on SST.
Method
Composite
Faithfulness
Plausibility
Performance
NRG (↑)
NRG (↑)
Comp (↑)
Suff (↓)
NRG (↑)
AUPRC (↑)
TF1 (↑)
NRG (↑)
F1 (↑)
AA (Grad)
0.481
0.457
0.184 (±0.023) 0.107 (±0.017)
0.028
13.31 (±0.91)
5.02 (±0.00)
0.957
95.33 (±0.65)
AA (Input*Grad)
0.503
0.359
0.148 (±0.031) 0.137 (±0.019)
0.194
8.68 (±0.37)
37.58 (±0.55)
0.957
95.33 (±0.65)
AA (DeepLIFT)
0.468
0.259
0.122 (±0.029) 0.172 (±0.022)
0.187
9.00 (±0.16)
36.15 (±1.45)
0.957
95.33 (±0.65)
AA (IG)
0.439
0.173
0.134 (±0.016) 0.219 (±0.044)
0.188
8.88 (±0.21)
36.39 (±1.29)
0.957
95.33 (±0.65)
L2E
0.550
0.445
0.000 (±0.007) 0.026 (±0.015)
0.248
16.68 (±10.20) 38.92 (±4.07)
0.957
95.33 (±0.65)
SGT
0.553
0.474
0.124 (±0.053) 0.071 (±0.064)
0.184
10.05 (±1.23)
34.64 (±1.67)
1.000
96.33 (±0.76)
FRESH
0.645
0.732
0.234 (±0.034) 0.000 (±0.000)
0.305
17.02 (±6.22)
48.26 (±5.87)
0.899
94.00 (±1.44)
A2R
0.431
0.764
0.267 (±0.050) 0.000 (±0.000)
0.244
35.44 (±21.69) 19.78 (±25.56)
0.284
79.78 (±7.14)
UNIREX (AA-F)
0.601
0.744
0.505 (±0.134) 0.122 (±0.100)
0.189
9.14 (±2.51)
36.28 (±1.84)
0.870
93.33 (±1.61)
SGT+P
0.586
0.604
0.152 (±0.013) 0.022 (±0.004)
0.183
9.16 (±1.59)
35.33 (±0.41)
0.971
95.66 (±1.16)
FRESH+P
0.491
0.691
0.193 (±0.062) 0.000 (±0.000)
0.710
65.78 (±11.16) 68.70 (±15.78)
0.070
74.84 (±12.22)
A2R+P
0.585
0.764
0.267 (±0.076) 0.000 (±0.000)
0.991
93.53 (±0.93)
88.77 (±1.22)
0.000
73.22 (±0.75)
UNIREX (DLM-P)
0.667
0.024
0.024 (±0.003) 0.238 (±0.004)
1.000
94.32 (±0.12)
89.53 (±1.63)
0.978
95.83 (±0.29)
UNIREX (AA-FP)
0.543
0.514
0.428 (±0.174) 0.195 (±0.105)
0.193
8.53 (±0.46)
37.71 (±3.12)
0.921
94.50 (±1.00)
UNIREX (DLM-FP)
0.744
0.326
0.283 (±0.217) 0.216 (±0.005)
0.991
93.65 (±0.36)
88.68 (±2.29)
0.913
94.33 (±1.61)
UNIREX (SLM-FP)
0.754
0.362
0.313 (±0.059) 0.213 (±0.014)
0.965
91.70 (±1.84)
86.17 (±1.20)
0.935
94.83 (±0.76)
Table 5: Main Results on Movies.
Method
Composite
Faithfulness
Plausibility
Performance
NRG (↑)
NRG (↑)
Comp (↑)
Suff (↓)
NRG (↑)
AUPRC (↑)
TF1 (↑)
NRG (↑)
Acc (↑)
AA (Grad)
0.537
0.504
0.331 (±0.012) 0.352 (±0.007)
0.130
37.33 (±0.62) 22.65 (±0.00)
0.977
63.56 (±1.27)
AA (Input*Grad)
0.573
0.361
0.249 (±0.018) 0.385 (±0.008)
0.383
39.56 (±0.54) 44.43 (±0.40)
0.977
63.56 (±1.27)
AA (DeepLIFT)
0.605
0.346
0.254 (±0.035) 0.403 (±0.042)
0.491
42.82 (±1.83) 51.72 (±1.26)
0.977
63.56 (±1.27)
AA (IG)
0.578
0.327
0.216 (±0.007) 0.378 (±0.010)
0.429
40.07 (±5.47) 48.34 (±3.16)
0.977
63.56 (±1.27)
L2E
0.544
0.493
0.005 (±0.003) 0.010 (±0.008)
0.161
23.56 (±1.09) 37.80 (±1.10)
0.977
63.56 (±1.27)
SGT
0.618
0.367
0.197 (±0.040) 0.324 (±0.015)
0.491
43.68 (±4.68) 51.00 (±3.05)
0.995
64.35 (±0.46)
FRESH
0.302
0.546
0.037 (±0.036) 0.000 (±0.000)
0.261
32.35 (±7.66) 39.37 (±0.70)
0.101
24.81 (±3.46)
A2R
0.277
0.516
0.014 (±0.021) 0.000 (±0.000)
0.282
41.61 (±3.85) 33.12 (±9.06)
0.032
21.77 (±1.31)
UNIREX (AA-F)
0.690
0.538
0.297 (±0.141) 0.286 (±0.084)
0.554
46.97 (±3.41) 53.99 (±1.66)
0.978
63.58 (±0.61)
SGT+P
0.601
0.367
0.201 (±0.032) 0.328 (±0.022)
0.436
41.30 (±6.70) 47.95 (±1.65)
1.000
64.57 (±0.33)
FRESH+P
0.374
0.515
0.013 (±0.021) 0.013 (±0.021)
0.606
53.40 (±12.87) 53.17 (±7.83)
0.000
20.36 (±0.66)
A2R+P
0.488
0.500
0.001 (±0.001) 0.000 (±0.000)
0.951
73.59 (±0.81) 67.63 (±1.54)
0.012
20.91 (±0.48)
UNIREX (DLM-P)
0.751
0.267
0.180 (±0.016) 0.390 (±0.035)
0.997
76.07 (±1.63) 69.76 (±0.27)
0.990
64.13 (±0.46)
UNIREX (AA-FP)
0.685
0.551
0.395 (±0.109) 0.381 (±0.101)
0.537
45.21 (±4.46) 53.91 (±3.23)
0.968
63.14 (±0.33)
UNIREX (DLM-FP)
0.814
0.492
0.293 (±0.043) 0.321 (±0.070)
0.997
76.38 (±0.57) 69.52 (±0.24)
0.953
62.50 (±1.34)
UNIREX (SLM-FP)
0.807
0.494
0.390 (±0.087) 0.424 (±0.110)
0.983
75.12 (±0.41) 69.25 (±0.41)
0.944
62.09 (±2.12)
Table 6: Main Results on CoS-E.
Method
Composite
Faithfulness
Plausibility
Performance
NRG (↑)
NRG (↑)
Comp (↑)
Suff (↓)
NRG (↑)
AUPRC (↑)
TF1 (↑)
NRG (↑)
F1 (↑)
AA (Grad)
0.498
0.462
0.222 (±0.028) 0.120 (±0.018)
0.035
22.27 (±0.17) 13.81 (±0.00)
0.997
69.80 (±0.60)
AA (Input*Grad)
0.506
0.289
0.225 (±0.048) 0.260 (±0.059)
0.231
18.51 (±0.23) 43.45 (±0.05)
0.997
69.80 (±0.60)
AA (DeepLIFT)
0.493
0.249
0.225 (±0.012) 0.292 (±0.014)
0.234
18.80 (±0.19) 43.51 (±0.04)
0.997
69.80 (±0.60)
AA (IG)
0.499
0.280
0.162 (±0.086) 0.222 (±0.086)
0.220
18.71 (±0.40) 41.79 (±1.33)
0.997
69.80 (±0.60)
L2E
0.522
0.366
0.007 (±0.006) 0.042 (±0.024)
0.205
24.48 (±2.71) 32.63 (±6.12)
0.997
69.80 (±0.60)
SGT
0.594
0.564
0.214 (±0.105) 0.033 (±0.077)
0.224
18.60 (±0.42) 42.42 (±0.51)
0.995
69.73 (±0.13)
FRESH
0.675
0.571
0.176 (±0.029) 0.000 (±0.000)
0.617
24.68 (±7.98) 48.02 (±3.04)
0.838
64.47 (±3.41)
A2R
0.217
0.404
-0.010 (±0.029) 0.000 (±0.000)
0.249
18.72 (±0.67) 45.45 (±0.02)
0.000
36.39 (±0.00)
UNIREX (AA-F)
0.711
0.956
0.505 (±0.050) -0.071 (±0.020)
0.236
18.82 (±0.40) 43.68 (±0.38)
0.939
66.17 (±4.58)
SGT+P
0.630
0.665
0.280 (±0.029) 0.283 (±0.039)
0.226
18.63 (±0.52) 42.71 (±0.39)
1.000
69.91 (±0.81)
FRESH+P
0.404
0.413
0.000 (±0.013) 0.000 (±0.000)
0.739
55.87 (±10.13) 63.70 (±9.58)
0.060
38.41 (±5.34)
A2R+P
0.516
0.422
0.011 (±0.024) 0.000 (±0.000)
0.977
70.86 (±1.30) 76.21 (±1.68)
0.150
41.42 (±8.73)
UNIREX (DLM-P)
0.708
0.123
0.127 (±0.010) 0.322 (±0.017)
0.999
71.80 (±0.27) 77.94 (±0.57)
1.000
69.91 (±0.76)
UNIREX (AA-FP)
0.706
1.000
0.545 (±0.045) -0.077 (±0.099)
0.231
19.13 (±0.71) 42.66 (±1.18)
0.888
66.17 (±4.58)
UNIREX (DLM-FP)
0.751
0.327
0.135 (±0.072) 0.165 (±0.029)
0.998
71.89 (±0.41) 77.63 (±0.62)
0.929
67.53 (±1.06)
UNIREX (SLM-FP)
0.784
0.377
0.198 (±0.038) 0.171 (±0.027)
0.997
71.69 (±0.21) 77.79 (±0.09)
0.979
69.20 (±1.58)
Table 7: Main Results on MultiRC.
Method
Composite
Faithfulness
Plausibility
Performance
NRG (↑)
NRG (↑)
Comp (↑)
Suff (↓)
NRG (↑)
AUPRC (↑)
TF1 (↑)
NRG (↑)
F1 (↑)
AA (Grad)
0.587
0.518
0.313 (±0.009) 0.380 (±0.025)
0.244
59.80 (±1.32)
15.27 (±0.00)
0.999
90.78 (±0.27)
AA (Input*Grad)
0.503
0.287
0.205 (±0.005) 0.446 (±0.020)
0.223
32.98 (±1.37)
43.13 (±0.86)
0.999
90.78 (±0.27)
AA (DeepLIFT)
0.508
0.270
0.195 (±0.012) 0.448 (±0.014)
0.254
33.47 (±1.31)
46.44 (±0.04)
0.999
90.78 (±0.27)
AA (IG)
0.596
0.473
0.308 (±0.011) 0.414 (±0.020)
0.317
47.83 (±1.04)
37.87 (±1.39)
0.999
90.78 (±0.27)
L2E
0.606
0.460
0.009 (±0.015) 0.036 (±0.022)
0.358
58.11 (±0.97)
31.35 (±0.27)
0.999
90.78 (±0.27)
SGT
0.595
0.503
0.288 (±0.025) 0.361 (±0.038)
0.298
42.46 (±3.03)
41.70 (±1.78)
0.985
90.23 (±0.16)
FRESH
0.518
0.661
0.120 (±0.075) 0.000 (±0.000)
0.361
38.77 (±6.82)
53.71 (±3.30)
0.530
72.92 (±8.71)
A2R
0.273
0.564
0.053 (±0.048) 0.000 (±0.000)
0.256
48.48 (±11.14) 29.54 (±24.72)
0.000
52.72 (±14.08)
UNIREX (AA-F)
0.622
0.539
0.330 (±0.018) 0.383 (±0.055)
0.340
45.29 (±3.02)
43.69 (±1.98)
0.987
90.31 (±0.19)
SGT+P
0.608
0.524
0.286 (±0.034) 0.339 (±0.032)
0.311
43.03 (±1.69)
42.59 (±1.63)
0.988
90.36 (±0.08)
FRESH+P
0.614
0.695
0.143 (±0.072) 0.000 (±0.000)
0.603
56.21 (±10.47) 64.09 (±5.59)
0.544
73.44 (±12.88)
A2R+P
0.800
0.751
0.182 (±0.097) 0.000 (±0.000)
0.992
87.30 (±0.44)
77.31 (±0.72)
0.656
77.31 (±0.72)
UNIREX (DLM-P)
0.842
0.525
0.311 (±0.011) 0.371 (±0.032)
1.000
87.85 (±0.13)
77.63 (±0.35)
1.000
90.80 (±0.33)
UNIREX (AA-FP)
0.626
0.529
0.341 (±0.008) 0.406 (±0.046)
0.363
44.79 (±0.81)
47.18 (±0.83)
0.985
90.21 (±0.08)
UNIREX (DLM-FP)
0.857
0.588
0.335 (±0.018) 0.346 (±0.023)
0.991
86.99 (±0.40)
77.53 (±0.15)
0.992
90.51 (±0.12)
UNIREX (SLM-FP)
0.864
0.603
0.353 (±0.017) 0.356 (±0.015)
0.994
87.58 (±0.14)
77.22 (±0.28)
0.994
90.59 (±0.09)
Table 8: Main Results on e-SNLI.
Table 10: Zero-Shot Explainability Transfer from SST to Unseen Datasets/Tasks.
Table 9: UNIREX Ablation Studies on SST.
Ablation | Method | Performance: Acc | Faithfulness: CSD, Comp, Suff | Plausibility: AUPRC, TF1
Ext Type (F) | UNIREX (AA-F, Rand) | 94.05 (±0.35) | -0.156 (±-0.156) | 0.171 (±0.040) | 0.327 (±0.050) | 44.92 (±0.00) | 46.15 (±0.00)
Ext Type (F) | UNIREX (AA-F, Gold) | 93.81 (±0.54) | -0.017 (±0.070) | 0.232 (±0.088) | 0.249 (±0.021) | 100.00 (±0.00) | 100.00 (±0.00)
Ext Type (F) | UNIREX (AA-F, Inv) | 93.47 (±1.81) | -0.115 (±0.018) | 0.242 (±0.010) | 0.357 (±0.019) | 20.49 (±0.00) | 0.00 (±0.00)
Ext Type (F) | UNIREX (AA-F, IG) | 93.81 (±0.55) | -0.138 (±0.040) | 0.119 (±0.009) | 0.258 (±0.031) | 49.94 (±1.77) | 50.75 (±0.54)
Ext Type (FP) | UNIREX (AA-FP, Sum) | 93.81 (±0.55) | -0.138 (±0.040) | 0.119 (±0.009) | 0.258 (±0.031) | 49.94 (±1.77) | 50.75 (±0.54)
Ext Type (FP) | UNIREX (AA-FP, MLP) | 93.23 (±0.92) | 0.087 (±0.134) | 0.285 (±0.051) | 0.197 (±0.100) | 54.82 (±1.97) | 49.62 (±0.65)
Comp/Suff Loss | UNIREX (SLM-FP, Comp) | 93.59 (±0.11) | 0.040 (±0.096) | 0.350 (±0.048) | 0.310 (±0.049) | 82.79 (±0.62) | 70.74 (±0.81)
Comp/Suff Loss | UNIREX (SLM-FP, Suff) | 94.16 (±0.39) | 0.014 (±0.010) | 0.166 (±0.003) | 0.152 (±0.012) | 83.74 (±0.84) | 70.94 (±0.86)
Comp/Suff Loss | UNIREX (SLM-FP, Comp+Suff) | 93.68 (±0.67) | 0.189 (±0.030) | 0.302 (±0.039) | 0.113 (±0.013) | 82.55 (±0.84) | 70.65 (±0.44)
Suff Criterion | UNIREX (SLM-FP, KL Div) | 93.06 (±0.25) | 0.174 (±0.100) | 0.306 (±0.098) | 0.131 (±0.005) | 82.62 (±0.88) | 70.43 (±0.65)
Suff Criterion | UNIREX (SLM-FP, MAE) | 93.78 (±0.13) | 0.135 (±0.053) | 0.278 (±0.058) | 0.143 (±0.008) | 82.66 (±0.61) | 70.25 (±0.45)
Suff Criterion | UNIREX (SLM-FP, Margin) | 93.68 (±0.67) | 0.189 (±0.030) | 0.302 (±0.039) | 0.113 (±0.013) | 82.55 (±0.84) | 70.65 (±0.44)
SLM Ext Head | UNIREX (SLM-FP, Linear) | 93.68 (±0.67) | 0.189 (±0.030) | 0.302 (±0.039) | 0.113 (±0.013) | 82.55 (±0.84) | 70.65 (±0.44)
SLM Ext Head | UNIREX (SLM-FP, MLP-2048-2) | 93.67 (±0.18) | 0.179 (±0.060) | 0.323 (±0.071) | 0.144 (±0.012) | 83.82 (±0.77) | 70.93 (±0.87)
SLM Ext Head | UNIREX (SLM-FP, MLP-4096-3) | 93.19 (±0.79) | 0.141 (±0.030) | 0.295 (±0.057) | 0.154 (±0.027) | 84.53 (±0.61) | 71.41 (±0.91)
| [] |
[
"Deep Clustering of Text Representations for Supervision-free Probing of Syntax",
"Deep Clustering of Text Representations for Supervision-free Probing of Syntax"
] | [
"Vikram Gupta vikramgupta@sharechat.co \nShareChat\nIndia\n",
"Haoyue Shi \nToyota Technological Institute at Chicago\nILUSA\n",
"Kevin Gimpel kgimpel@ttic.edu \nToyota Technological Institute at Chicago\nILUSA\n",
"Mrinmaya Sachan mrinmaya.sachan@inf.ethz.ch \nDepartment of Computer Science\nETH Zurich\n\n"
] | [
"ShareChat\nIndia",
"Toyota Technological Institute at Chicago\nILUSA",
"Toyota Technological Institute at Chicago\nILUSA",
"Department of Computer Science\nETH Zurich\n"
] | [] | We explore deep clustering of text representations for unsupervised model interpretation and induction of syntax. As these representations are high-dimensional, out-of-the-box methods like KMeans do not work well. Thus, our approach jointly transforms the representations into a lowerdimensional cluster-friendly space and clusters them. We consider two notions of syntax: part of speech induction (POSI) and constituency labelling (CoLab) in this work. Interestingly, we find that Multilingual BERT (mBERT) contains surprising amount of syntactic knowledge of English; possibly even as much as English BERT (E-BERT). Our model can be used as a supervision-free probe which is arguably a less-biased way of probing. We find that unsupervised probes show benefits from higher layers as compared to supervised probes. We further note that our unsupervised probe utilizes E-BERT and mBERT representations differently, especially for POSI. We validate the efficacy of our probe by demonstrating its capabilities as a unsupervised syntax induction technique. Our probe works well for both syntactic formalisms by simply adapting the input representations. We report competitive performance of our probe on 45-tag English POSI, state-of-the-art performance on 12tag POSI across 10 languages, and competitive results on CoLab. We also perform zero-shot syntax induction on resource impoverished languages and report strong results. | 10.1609/aaai.v36i10.21317 | [
"https://arxiv.org/pdf/2010.12784v2.pdf"
] | 244,800,754 | 2010.12784 | c38184c7ed9d798c83dbb48c8231e5a950a9b420 |
Deep Clustering of Text Representations for Supervision-free Probing of Syntax
Vikram Gupta vikramgupta@sharechat.co
ShareChat
India
Haoyue Shi
Toyota Technological Institute at Chicago
ILUSA
Kevin Gimpel kgimpel@ttic.edu
Toyota Technological Institute at Chicago
ILUSA
Mrinmaya Sachan mrinmaya.sachan@inf.ethz.ch
Department of Computer Science
ETH Zurich
Deep Clustering of Text Representations for Supervision-free Probing of Syntax
We explore deep clustering of text representations for unsupervised model interpretation and induction of syntax. As these representations are high-dimensional, out-of-the-box methods like KMeans do not work well. Thus, our approach jointly transforms the representations into a lowerdimensional cluster-friendly space and clusters them. We consider two notions of syntax: part of speech induction (POSI) and constituency labelling (CoLab) in this work. Interestingly, we find that Multilingual BERT (mBERT) contains surprising amount of syntactic knowledge of English; possibly even as much as English BERT (E-BERT). Our model can be used as a supervision-free probe which is arguably a less-biased way of probing. We find that unsupervised probes show benefits from higher layers as compared to supervised probes. We further note that our unsupervised probe utilizes E-BERT and mBERT representations differently, especially for POSI. We validate the efficacy of our probe by demonstrating its capabilities as a unsupervised syntax induction technique. Our probe works well for both syntactic formalisms by simply adapting the input representations. We report competitive performance of our probe on 45-tag English POSI, state-of-the-art performance on 12tag POSI across 10 languages, and competitive results on CoLab. We also perform zero-shot syntax induction on resource impoverished languages and report strong results.
Introduction
Contextualized text representations (Peters et al. 2018a;Devlin et al. 2019) have been used in many supervised NLP problems such as part-of-speech (POS) tagging (Tsai et al. 2019), syntactic parsing (Kitaev and Klein 2018;Zhou and Zhao 2019;Mrini et al. 2019), and coreference resolution (Lee, He, and Zettlemoyer 2018;Joshi et al. 2019;Wu et al. 2020), often leading to significant improvements. Recent works have shown that these representations encode linguistic information including POS (Belinkov et al. 2017), morphology (Peters et al. 2018a), and syntactic structure (Linzen, Dupoux, and Goldberg 2016;Peters et al. 2018b;Tenney, Das, and Pavlick 2019;Hewitt and Manning 2019).
While there has been a lot of focus on using contextualized representations in supervised settings for either solving NLP problems or interpreting these representations, the efficacy of these representations for unsupervised learning is not well explored 1 . Most of the recent work in "probing" contextual representations has focused on building supervised classifiers and using accuracy to interpret these representations. This has led to a debate as it is not clear if the supervised probe is probing the model or trying to solve the task (Hewitt and Manning 2019; Pimentel et al. 2020).
Thus, in this work, we explore a new clustering-based approach to probe contextualized text representations. Our probe allows for studying text representations with relatively fewer task-specific transformations due to the absence of supervision. Our approach is therefore arguably a less biased way to discover linguistic structure than supervised probes (Hewitt and Manning 2019; Pimentel et al. 2020; Zhou and Srikumar 2021). We focus on two syntactic formalisms: part-of-speech induction (POSI) and constituency labelling (CoLab), and explore the efficacy of contextualized representations towards encoding syntax in an unsupervised manner. We investigate the research question: Do contextualized representations encode enough information for unsupervised syntax induction? How do these perform on POSI, which has been traditionally solved using smaller context windows and morphology, and on span-based CoLab?
For both formalisms, we find that naively clustering text representations does not perform well. We speculate that this is because contextualized text representations are high-dimensional and not very friendly to existing clustering approaches. Thus, we develop a deep clustering approach (Xie, Girshick, and Farhadi 2016; Ghasedi Dizaji et al. 2017; Chang et al. 2017; Yang, Parikh, and Batra 2016; Yang et al. 2017) which transforms these representations into a lower-dimensional, clustering-friendly latent space. This transformation is learnt jointly with the clustering using a combination of reconstruction and clustering objectives. The procedure iteratively refines the transformation and the clustering using an auxiliary target distribution derived from the current soft clustering. As this process is repeated, it gradually improves the transformed representations as well as the clustering. We show a t-SNE visualization of mBERT embeddings and embeddings learned by our deep clustering probe (SyntDEC) in Figure 1. We further explore architectural variations such as pretrained subword embeddings from fastText (Joulin et al. 2017), a continuous bag of words (CBoW) loss (Mikolov et al. 2013), and span representations (Toshniwal et al. 2020) to incorporate task-dependent information into the latent space and observe significant improvements. It is important to note that we do not claim that clustering contextualized representations is the optimal approach for POSI, as representations with short context (Lin et al. 2015; He, Neubig, and Berg-Kirkpatrick 2018) and word-based POSI (Yatbaz, Sert, and Yuret 2012) have shown the best results. Our approach explores the potential of contextualized representations for unsupervised induction of syntax and acts as an unsupervised probe for interpreting these representations. Nevertheless, we report competitive many-to-one (M1) accuracies for POSI on the 45-tag Penn Treebank WSJ dataset as compared to specialized state-of-the-art approaches in the literature (He, Neubig, and Berg-Kirkpatrick 2018) and improve upon the state of the art on the 12-tag universal treebank dataset across multiple languages (Stratos, Collins, and Hsu 2016; Stratos 2019). We further show that our approach can be used in a zero-shot crosslingual setting where a model trained on one language can be used for evaluation in another language. We observe impressive crosslingual POSI performance, showcasing the representational power of mBERT, especially when the languages are related. Our method also achieves competitive results on CoLab on the WSJ test set, outperforming the initial DIORA approach (Drozdov et al. 2019b) and performing comparably to recent DIORA variants (Drozdov et al. 2019a) which incorporate more complex methods such as latent chart parsing and discrete representation learning. In contrast to specialized state-of-the-art methods for syntax induction, our framework is more general as it demonstrates good performance for both CoLab and POSI by simply adapting the input representations.
1 Some recent work such as DIORA (Drozdov et al. 2019b,a) has explored specialized methods for unsupervised discovery and representation of constituents using ELMo (Peters et al. 2018a). Jin et al. (2019) used ELMo with a normalizing flow model, while Cao, Kitaev, and Klein (2020) used RoBERTa (Liu et al. 2019b) for unsupervised constituency parsing.
We further investigate the effectiveness of multilingual BERT (mBERT) (Devlin et al. 2019) for POSI across multiple languages and CoLab in English and see improvement in performance by using mBERT for both tasks even in English. This is in contrast with the supervised experiments where both mBERT and E-BERT perform competitively. In contrast to various supervised probes in the literature (Liu et al. 2019a; Tenney, Das, and Pavlick 2019), our unsupervised probe finds that syntactic information is captured in higher layers on average than what was previously reported (Tenney, Das, and Pavlick 2019). Upon further layer-wise analysis of the two probes, we find that while supervised probes show that all layers of E-BERT contain syntactic information fairly uniformly, middle layers lead to a better performance on the investigated syntactic tasks with our unsupervised probe.
Problem Definition
We consider two syntax induction problems in this work:
1. Part-of-speech induction (POSI): determining part of speech of words in a sentence.
2. Constituency label induction (CoLab): determining the constituency label for a given constituent (span of contiguous tokens). 2
Figure 2 shows an illustration for the two tasks. In order to do well, both tasks require reasoning about the context. This motivates us to use contextualized representations, which have shown an ability to model such information effectively. Letting [m] denote {1, 2, . . . , m}, we model unsupervised syntax induction as the task of learning a mapping function C : X → [m]. For POSI, X is the set of word tokens in the corpus and m is the number of part-of-speech tags. 3 For CoLab, X is the set of constituents across all sentences in the corpus and m is the number of constituent labels. For each element x ∈ X, let c(x) denote the context of x in the sentence containing x. The number m of true clusters is assumed to be known. For CoLab, we also assume gold constituent spans from manually annotated constituency parse trees, focusing only on determining constituent labels, following Drozdov et al. (2019a).
Proposed Method
We address unsupervised syntax induction via clustering, where C defines a clustering of X into m clusters. We define a deep embedded clustering framework and modify it to support common NLP objectives such as continuous bag of words (Mikolov et al. 2013). Our framework jointly transforms the text representations into a lower-dimensional space and learns the clustering parameters in an end-to-end setup.
Deep Clustering
Unlike traditional clustering approaches that work with fixed, and often hand-designed features, deep clustering (Xie, Girshick, and Farhadi 2016; Ghasedi Dizaji et al. 2017; Chang et al. 2017; Yang, Parikh, and Batra 2016; Yang et al. 2017) transforms the data X into a latent feature space Z with a mapping function f_θ : X → Z, where θ are learnable parameters. The dimensionality of Z is typically much smaller than that of X. The datapoints are clustered by simultaneously learning a clustering C̃ : Z → [m]. While C might have been hard to learn directly (due to the high dimensionality of X), learning C̃ may be easier.
Deep Embedded Clustering: We draw on a particular deep clustering approach: Deep Embedded Clustering (DEC; Xie, Girshick, and Farhadi 2016). Our approach consists of two stages: (a) a pretraining stage, and (b) a joint representation learning and clustering stage. In the pretraining stage, a mapping function f_θ is pretrained using a stacked autoencoder (SAE). The SAE learns to reconstruct X through the bottleneck Z, i.e., X → Z → X′, where the encoder maps X to Z and the decoder reconstructs X′ from Z. We use mean squared error (MSE) as the reconstruction loss:
L_rec = ||X − X′||² = Σ_{x∈X} ||x − x′||²
The encoder parameters are used to initialize the mapping function f θ .
In the joint representation learning and clustering stage, we finetune the encoder f_θ trained in the pretraining stage to minimize a clustering loss L_KL. The goal of this step is to learn a latent space that is amenable to clustering. We learn a set of m cluster centers {µ_i ∈ Z}_{i=1}^{m} of the latent space Z and alternate between computing an auxiliary target distribution and minimizing the Kullback-Leibler (KL) divergence. First, a soft cluster assignment is computed for each embedded point. Then, the mapping function f_θ is refined along with the cluster centers by learning from the assignments using an auxiliary target distribution. This process is repeated. The soft assignment is computed via the Student's t-distribution. The probability of assigning data point i to cluster j is denoted q_ij and defined:
q_ij = (1 + ||z_i − µ_j||²/ν)^(−(ν+1)/2) / Σ_{j'} (1 + ||z_i − µ_{j'}||²/ν)^(−(ν+1)/2)
where ν is set to 1 in all experiments. Then, a cluster assignment hardening loss (Xie, Girshick, and Farhadi 2016) is used to make these soft assignment probabilities more peaked. This is done by letting cluster assignment probability distribution q approach a more peaked auxiliary (target) distribution p:
p_ij = (q_ij² / n_j) / Σ_{j'} (q_{ij'}² / n_{j'}),   where n_j = Σ_i q_ij
By squaring the original distribution and then normalizing it, the auxiliary distribution p forces assignments to have more peaked probabilities. This aims to improve cluster purity, put emphasis on data points assigned with high confidence, and to prevent large clusters from distorting the latent space. The divergence between the two probability distributions is formulated as the Kullback-Leibler divergence:
L_KL = Σ_i Σ_j p_ij log(p_ij / q_ij)
The representation learning and clustering model is learned end-to-end.
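The DEC quantities above can be sketched in a few lines of PyTorch; this is an illustrative sketch only, and the authors' implementation may differ in details such as batch averaging or numerical stabilization.

```python
import torch

def soft_assignments(z, mu, nu=1.0):
    """Student's-t soft assignments q_ij for embeddings z (N, d) and centers mu (m, d)."""
    dist_sq = torch.cdist(z, mu) ** 2
    q = (1.0 + dist_sq / nu) ** (-(nu + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    """Auxiliary target p_ij: square q and normalize by the soft cluster sizes n_j."""
    weight = q ** 2 / q.sum(dim=0, keepdim=True)      # q_ij^2 / n_j
    return (weight / weight.sum(dim=1, keepdim=True)).detach()

def kl_clustering_loss(q, p, eps=1e-10):
    """L_KL = sum_i sum_j p_ij log(p_ij / q_ij)."""
    return (p * ((p + eps).log() - (q + eps).log())).sum()
```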
SyntDEC: DEC for Syntax Induction
We further modify DEC for syntax induction:
a) CBoW autoencoders: While DEC uses a conventional autoencoder, i.e., the input and output are the same, we modify it to support the continuous bag of words (CBoW) objective (Mikolov et al. 2013). This helps the low-dimensional representations focus on context words, which are expected to be helpful for POSI. In particular, given a set of tokens c(x) that defines the context for an element x ∈ X, CBoW combines the distributed representations of tokens in c(x) to predict the element x in the middle. See Appendix A for an illustration.
b) Finetuning with reconstruction loss: We found that in the clustering stage, finetuning with respect to the KL divergence loss alone easily leads to trivial solutions where all points map to the same cluster. To address this, we add the reconstruction loss as a regularization term. This is in agreement with subsequent works in deep clustering (Yang et al. 2017). Instead of solely minimizing L_KL, we minimize
L_total = L_KL + λ L_rec        (1)
in the clustering stage, where λ is a hyperparameter denoting the weight of the reconstruction loss.
c) Contextualized representations: We represent linguistic elements x by embeddings extracted from pretrained networks like BERT (Devlin et al. 2019), SpanBERT (Joshi et al. 2020), and multilingual BERT (Devlin et al. 2019).
All of these networks are multi-layer architectures. Thus, we average the embeddings across the various layers. We experimented with different layer combinations but found the average was the best solution for these tasks. We averaged the embeddings of the subword units to compute word embeddings. 4 For CoLab, we represent spans by concatenating the representations of the end points (Toshniwal et al. 2020).
d) Task-specific representations: Previous work in unsupervised syntax induction has shown the value of task-specific features. In particular, a number of morphological features based on prefixes and suffixes and spelling cues like capitalization have been used in unsupervised POSI works (Tseng, Jurafsky, and Manning 2005; Stratos 2019; Yatbaz, Sert, and Yuret 2012). In our POSI experiments, we incorporate these morphological features by using word representations from fastText (Joulin et al. 2017). We use fastText embeddings of the trigram from each word with contextualized representations as input.
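A small sketch of how these input representations could be assembled is given below; it is our own illustrative code under assumed tensor shapes, not the authors' implementation.

```python
import torch

def word_vectors(subword_states, word_to_subwords):
    """Average subword vectors (already averaged across BERT layers) into word vectors.

    subword_states:   (T, d) tensor of subword representations for one sentence.
    word_to_subwords: list of lists of subword indices belonging to each word.
    """
    return torch.stack([subword_states[idx].mean(dim=0) for idx in word_to_subwords])

def span_vector(word_vecs, start, end):
    """CoLab: represent a constituent by concatenating its end-point word vectors."""
    return torch.cat([word_vecs[start], word_vecs[end]], dim=-1)

def posi_vector(word_vec, trigram_vec):
    """POSI: optionally append a fastText embedding of the word's trigram features."""
    return torch.cat([word_vec, trigram_vec], dim=-1)
```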
Experimental Details
Datasets: We evaluate our approach for POSI on two datasets: the 45-tag Penn Treebank Wall Street Journal (WSJ) dataset (Marcus, Santorini, and Marcinkiewicz 1993) and multilingual 12-tag datasets drawn from the universal dependencies project (Nivre et al. 2016). For POSI, we report many-to-one (M1) accuracy and V-measure (VM) (Rosenberg and Hirschberg 2007). For CoLab, we use F1 score following Drozdov et al. (2019a), ignoring spans which have only a single word and spans with the "TOP" label. In addition to F1, we also report M1 accuracy for CoLab to show the clustering performance more naturally and intuitively. Training Details: Similar to Xie, Girshick, and Farhadi (2016), we use greedy layerwise pretraining (Bengio et al. 2007) for initialization. New hidden layers are successively added to the autoencoder, and the layers are trained to denoise the output of the previous layer. After layerwise pretraining, we train the autoencoder end-to-end and leverage the trained SyntDEC encoder (Section 3). K-Means is used to initialize cluster means and assignments. SyntDEC is trained end-to-end with the reconstruction and clustering losses. More details are in the appendix. On the 45-tag WSJ dataset, SAE improves upon the result of KMeans by nearly 3 points, which demonstrates the effectiveness of transforming the mBERT embeddings to lower dimensionality using an autoencoder before clustering. Our method (SyntDEC) further enhances the result and shows that transforming the pretrained mBERT embeddings using a clustering objective helps to extract syntactic information more effectively. When augmenting the mBERT embeddings with morphological features (SyntDEC_Morph), we improve over Stratos (2019) and Tran et al. (2016). We also obtain similar M1 accuracy with higher VM as compared to Yuret, Yatbaz, and Sert (2014). Morphology: We also note that the M1 accuracies of Tran et al. (2016) and Stratos (2019) drop significantly, by nearly 14 points, in the absence of morphological features, while SyntDEC degrades by 2 points. This trend suggests that mBERT representations encode morphology to some extent. Yatbaz, Sert, and Yuret (2012) are not directly comparable to our work as they performed word-based POSI, which attaches the same tag to all the instances of a word, while all the other works in Table 1 perform token-based POSI. They use task-specific hand-engineered rules like the presence of hyphens, apostrophes, etc., which might not translate to multiple languages and tasks. He, Neubig, and Berg-Kirkpatrick (2018) train a POSI-specialized model with a Markov syntax model and short-context word embeddings and report the current SOTA on POSI. In contrast to their method, SyntDEC is fairly task agnostic. 12-Tag Universal Treebanks: In Table 2, we report M1 accuracies on the 12-tag datasets averaged over 5 random runs. Across all languages, we report SOTA results and find an improvement on average over the previous best method (Stratos 2019) from 71.4% to 75.7%. We also note improvements of SyntDEC over SAE (70.9% to 75.7%) across languages, which reiterates the importance of finetuning representations for clustering. Our methods yield larger gains on this coarse-grained 12-tag POSI task as compared to the fine-grained 45-tag POSI task, and we hope to explore the reasons for this in future work. Ablation Studies: Next, we study the impact of our choices on the 45-tag WSJ dataset. Table 3 demonstrates that multilingual BERT (mBERT) is better than English BERT (E-BERT) across settings.
Also, focusing the representations on the local context (CBoW) improves performance with E-BERT, though not with mBERT. In the appendix, we show the impact of using different types of fastText character embeddings and note the best results when we use embeddings of the last trigram of each word. Error Analysis: We compared SyntDEC and KMeans (when both use mBERT) and found that SyntDEC does better on noun phrases and nominal tags. It helps alleviate confusion among fine-grained noun tags (e.g., NN vs. NNS), while also showing better handling of numerals (CD) and personal pronouns (PRP). However, SyntDEC still shows considerable confusion among fine-grained verb categories. For 12-tag experiments, we similarly found that SyntDEC outperforms KMeans for the majority of the tags, especially nouns and verbs, resulting in a gain of more than 20% in 1-to-1 accuracy. We further compare t-SNE visualizations of SyntDEC and mBERT embeddings and observe that SyntDEC embeddings show relatively compact clusters. Detailed results and visualizations are shown in Figure 4 and the appendix.
SyntDEC as an Unsupervised Probe
Next, we leverage SyntDEC as an unsupervised probe to analyse where syntactic information is captured in the pretrained representations. Existing approaches to probing usually rely on supervised training of probes. However, as argued recently by (Zhou and Srikumar 2021), this can be unreliable. Our supervision-free probe arguably gets rid of any bias in interpretations due to the involvement of training data in probing. We compare our unsupervised probe to a reimplementation of the supervised shallow MLP based probe in Tenney, Das, and Pavlick (2019). Similar to their paper, we report Expected Layer under supervised and unsupervised settings for the two tasks in Figure 5. Expected Layer represents the average layer number in terms of incremental performance gains: E_Δ[l] = (Σ_{l=1}^{L} l · Δ^(l)) / (Σ_{l=1}^{L} Δ^(l)), where Δ^(l) is the change in the performance metric when adding layer l to the previous layers. Layers are incrementally added from lower to higher layers. We use the F1 and M1 scores as the performance metric for the supervised and unsupervised experiments, respectively. We observe that: 1. Expected Layer as per the unsupervised probe (blue) is higher than the supervised probe (green) for both tasks and models, showing that unsupervised syntax induction benefits more from higher layers. 2. There are larger differences between the E-BERT and mBERT Expected Layer under unsupervised settings, suggesting that our unsupervised probe utilizes mBERT and E-BERT layers differently than the supervised one. In Figure 6, we further probe the performance of each layer individually by computing the F1 score for the supervised probe and the M1 score for the unsupervised probe (a one-to-one mapping is used to assign labels to clusters). We observe noticeable improvement at Layer 1 for supervised POSI and Layers 1/4/6 for CoLab, which also correlates with their respective Expected Layer values. For unsupervised settings, the improvements are more evenly shared across initial layers. Although F1 and M1 are not directly comparable, supervised performance is competitive even at higher layers while unsupervised performance drops. We present detailed results in the appendix.
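For clarity, Expected Layer can be computed from the per-layer cumulative probe scores roughly as follows; this is a sketch that assumes a zero performance baseline before the first layer, which may differ from the exact setup used in the paper.

```python
import numpy as np

def expected_layer(cumulative_scores):
    """cumulative_scores[l-1] is the probe metric (F1 or M1) when layers 1..l are used."""
    scores = np.asarray(cumulative_scores, dtype=float)
    # delta(l): gain in the metric from adding layer l to the previous layers
    deltas = np.diff(np.concatenate(([0.0], scores)))
    layers = np.arange(1, len(scores) + 1)
    return float((layers * deltas).sum() / deltas.sum())

# e.g., expected_layer([0.40, 0.55, 0.60]) gives the gain-weighted average layer index
```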
7 Crosslingual POSI
Pires, Schlinger, and Garrette (2019) and Wu and Dredze (2019) show that mBERT is effective at zero-shot crosslingual transfer. Inspired by this, we evaluate crosslingual performance on the 12-tag universal treebank (Table 4). The first row shows M1 accuracies when training and evaluating SyntDEC on the same language (monolingual). The second row shows M1 accuracies of the English-trained SyntDEC on other languages (crosslingual). In general, we find that clusters learned on a high-resource language like English can be used for other languages. Similar to He et al. (2019), we use the distances of the languages from English to group languages as nearby or distant. The distance is calculated by accounting for syntactic, genetic, and geographic distances according to the URIEL linguistic database (Littell et al. 2017). Our results highlight the effectiveness of mBERT in crosslingual POSI. Even for Asian languages (ko, id, and ja), which have a higher distance from English, the performance is comparable across settings. For nearby languages, crosslingual SyntDEC performs well and even outperforms the monolingual setting for some languages.
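The following is a minimal sketch of the crosslingual protocol, with vanilla KMeans standing in for the SyntDEC encoder and its latent cluster centres; the embedding matrices are random placeholders rather than real mBERT outputs.

```python
import numpy as np
from sklearn.cluster import KMeans

english_emb = np.random.randn(5000, 768)   # stand-in for mBERT embeddings of English tokens
target_emb = np.random.randn(3000, 768)    # stand-in for mBERT embeddings of another language

# Clusters are learned on English only and reused unchanged for the target language.
clusters = KMeans(n_clusters=12, n_init=10, random_state=0).fit(english_emb)
transferred_tags = clusters.predict(target_emb)   # zero-shot assignment to English-learned clusters
```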
Constituency Labelling (CoLab)
In Table 5, we report the F1 and M1 score of constituency labelling (CoLab) over the WSJ test set. We represent constituents by concatenating the embeddings of the first and last words in the span (where word embeddings are computed by averaging the corresponding subword embeddings). We observe an improvement over DIORA (Drozdov et al. 2019b), a recent unsupervised constituency parsing model, and achieve results competitive with recent variants that improve DIORA with discrete representation learning (Drozdov et al. 2019a). Our model and the DIORA variants use gold constituents for these experiments. We compute F1 metrics for comparison with previous work but also report M1 accuracies. As with POSI, our results suggest that mBERT outperforms both SpanBERT and E-BERT for the CoLab task as well. We also note that SpanBERT performs better than E-BERT, presumably because SpanBERT seeks to learn span representations explicitly. In the Appendix (Table 7), we explore other ways of representing constituents and note that mean/max pooling followed by clustering does not perform well. Compressing and finetuning the mean-pooled representation using SyntDEC (SyntDEC_Mean) is also suboptimal. We hypothesize that mean/max pooling results in a loss of information about word order in the constituent, whereas the concatenation of the first and last words retains this information. Even a stacked autoencoder (SAE) over the concatenation of the first and last token achieves competitive results, but finetuning with SyntDEC improves the F1µ by nearly 4.5%. This demonstrates that for CoLab too, the transformation to lower dimensions and finetuning to clustering-friendly spaces is important for achieving competitive performance.
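A hedged sketch of this span representation is given below: subword vectors are averaged into word vectors, and the first and last word vectors of a gold constituent are concatenated. All tensors are toy data and the helper names are ours, not taken from any released code.

```python
import torch

def word_vectors(subword_emb, word_ids):
    # Average the subword embeddings belonging to each word.
    n_words = max(word_ids) + 1
    sums = torch.zeros(n_words, subword_emb.size(1))
    counts = torch.zeros(n_words, 1)
    for i, w in enumerate(word_ids):
        sums[w] += subword_emb[i]
        counts[w] += 1
    return sums / counts

def span_vector(words, start, end):
    # Concatenate the first and last word of the (gold) constituent span.
    return torch.cat([words[start], words[end]], dim=-1)

subword_emb = torch.randn(12, 768)                       # toy subword embeddings
word_ids = [0, 0, 1, 2, 2, 2, 3, 4, 4, 5, 6, 6]          # subword-to-word alignment
words = word_vectors(subword_emb, word_ids)
print(span_vector(words, 1, 4).shape)                    # torch.Size([1536])
```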
Related Work
Deep Clustering: Unlike previous work where feature extraction and clustering were applied sequentially, deep clustering aims to jointly optimize for both by combining a clustering loss with the feature extraction. A number of deep clustering methods have been proposed which primarily differ in their clustering approach: Yang et al. (2017) use KMeans, Xie, Girshick, and Farhadi (2016) use cluster assignment hardening, Ghasedi Dizaji et al. (2017) add a balanced assignments loss on top of cluster assignment hardening, Huang et al. (2014) introduce a locality-preserving loss and a group sparsity loss on the clustering, Yang, Parikh, and Batra (2016) use agglomerative clustering, and Ji et al. (2017) use subspace clustering. All of these approaches can be used to cluster contextualized representations, and future work may improve upon our results by exploring these approaches. The interplay between deep clustering for syntax and recent advancements in NLP, such as contextualized representations, has not previously been studied. In this paper, we fill this gap.
Unsupervised Syntax Induction: There has been a lot of work on unsupervised induction of syntax, namely, unsupervised constituency parsing (Klein and Manning 2002; Seginer 2007; Kim, Dyer, and Rush 2019) and dependency parsing (Klein and Manning 2004; Smith and Eisner 2006; Gillenwater et al. 2010; Spitkovsky, Alshawi, and Jurafsky 2013; Jiang, Han, and Tu 2016). While most prior work focuses on inducing unlabeled syntactic structures, we focus on inducing constituent labels while assuming the gold syntactic structure is available. This goal has also been pursued in prior work (Drozdov et al. 2019a; Jin and Schuler 2020). Compared to them, we present simpler models to induce syntactic labels directly from pretrained models via dimensionality reduction and clustering. Similar to us, Li and Eisner (2019) also note gains for supervised NLP tasks upon reducing the representation dimension.
Probing Pretrained Representations: Recent analysis work (Liu et al. 2019a; Aljalbout et al. 2018; Jawahar, Sagot, and Seddah 2019, inter alia) has shown that pretrained language models encode syntactic information efficiently. Most of them train a supervised model using pretrained representations and labeled examples, and show that pretrained language models effectively encode part-of-speech and constituency information. In contrast to these works, we propose an unsupervised approach to probing which does not rely on any training data. Zhou and Srikumar (2021) also pursue the same goals by studying the geometry of these representations.
Conclusion
In this work, we explored the problem of clustering text representations for model interpretation and induction of syntax. We observed that off-the-shelf methods like KMeans are sub-optimal, as these representations are high dimensional and thus not directly suitable for clustering. We therefore proposed a deep clustering approach that jointly transforms these representations into a lower-dimensional, cluster-friendly space and clusters them. Upon integrating a small number of task-specific features and using multilingual representations, we find that our approach achieves performance on unsupervised POSI and CoLab comparable to more complex methods in the literature. Finally, we also show that the technique can be used as a supervision-free approach to probe syntax in these representations, and we contrast our unsupervised probe with supervised ones.
B POSI Analysis
45-Tag POSI Analysis
We show a t-SNE visualization of mBERT embeddings and the embeddings learned by our deep clustering model in Figure 1. We note that the clusters formed by SyntDEC are more coherent and dense. In Figure 8, we show the confusion matrices of SyntDEC_Morph and mBERT for the 20 most frequent tags in the 45-tag POSI task by assigning labels to predicted clusters using the optimal 1-to-1 mapping. We observe that SyntDEC_Morph outperforms mBERT for most tags.
12-Tag POSI Analysis
In Figure 9, we show a t-SNE visualization of SyntDEC and mBERT embeddings of tokens from the 12-tag Universal Treebank English dataset. SyntDEC embeddings produce more distinct clusters.
45-Tag POSI Ablation Studies
In Table 6, we study the impact of different character embeddings and achieve the best results when using embeddings of the trailing trigram of each token.
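As an illustration of how such a morphological feature could be attached to a token representation, the sketch below appends the embedding of the trailing character trigram; the `trigram_vectors` table is a hypothetical stand-in for a table of fastText character n-gram embeddings.

```python
import numpy as np

def add_trailing_trigram(token, contextual_vec, trigram_vectors, dim=300):
    # Append the embedding of the token's trailing character trigram as a morphological feature.
    trigram = token[-3:] if len(token) >= 3 else token
    morph = trigram_vectors.get(trigram, np.zeros(dim))   # zero vector if the trigram is unseen
    return np.concatenate([contextual_vec, morph])

trigram_vectors = {"ing": np.random.randn(300)}           # toy character n-gram table
print(add_trailing_trigram("running", np.random.randn(768), trigram_vectors).shape)
```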
CoLab Ablation Studies
In Table 7, we present the ablation results for CoLab. We find that KMeans over max- or mean-pooled span representations of mBERT does not work well. Even deep clustering (SyntDEC_Mean) over the mean of the span representation does not help. SAE and SyntDEC trained over the concatenation of the representations of the span endpoints substantially improve the results.
C Unsupervised Probing
In Table 8 and Table 9, we report the results of adding layers incrementally from lower to higher for POSI on mBERT and E-BERT. We present similar results for CoLab in Table 10 and Table 11. In Table 15 and Table 14, we report the results of individual layers for POSI on mBERT and E-BERT. We present similar results for CoLab in Table 12 and Table 13.
D Hyperparameters
Words are represented by 768-dimensional vectors obtained by taking the mean of the BERT layers. We also tried max and mean pooling but did not notice much improvement. Morphological embeddings extracted from fastText have 300 dimensions. The number of clusters is set equal to the number of ground truth tags for all the experiments. Following previous work (Stratos 2019), we use the 45-tag POSI experiments on English to select the hyperparameters for our framework and use these hyperparameters across all the other languages and tasks. We use a SyntDEC architecture with one encoder layer and use 75 as the size of the latent dimension. Layer-wise and end-to-end training is done for 50 epochs with a batch size of 64, a learning rate of 0.1, and momentum of 0.9 using the SGD optimizer. In the clustering stage, we train SyntDEC for 4000 iterations with a batch size of 256 and a learning rate of 0.001 with 0.9 momentum using SGD. We set the reconstruction error weight λ = 5 for all our experiments. We set the context width to one for CBoW. For out-of-vocabulary words, we use an average over all subword embeddings. For all the experiments, we report results for the last training iteration as we do not have access to the ground truth labels for model selection. For the supervised experiments, we follow the training and architecture details of Tenney, Das, and Pavlick (2019). All our experiments are performed on a 12GB GeForce RTX 2080 Ti GPU and each run takes approximately 3 hours.
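To make the joint objective concrete, below is a minimal PyTorch sketch of a single SyntDEC-style training step combining cluster-assignment hardening (as in DEC) with the reconstruction term weighted by λ. The latent size, λ, and SGD settings follow the values listed above, while the single-layer encoder/decoder widths and the random initialisation of the cluster centres are illustrative simplifications (in practice the centres are initialised with KMeans and the autoencoder is pretrained layer-wise).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyntDECSketch(nn.Module):
    def __init__(self, input_dim=768, latent_dim=75, n_clusters=45):
        super().__init__()
        # One linear encoder/decoder layer, mirroring the single-encoder-layer setup above.
        self.encoder = nn.Linear(input_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, input_dim)
        # Cluster centres live in the latent space; in practice they would be
        # initialised from a KMeans run over the encoded training embeddings.
        self.centers = nn.Parameter(torch.randn(n_clusters, latent_dim))

    def soft_assign(self, z, alpha=1.0):
        # Student's t-kernel similarity between latent points and centres (as in DEC).
        dist_sq = torch.cdist(z, self.centers) ** 2
        q = (1.0 + dist_sq / alpha) ** (-(alpha + 1.0) / 2.0)
        return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    # Sharpened ("hardened") assignments used as the self-training target.
    w = q ** 2 / q.sum(dim=0)
    return (w.t() / w.sum(dim=1)).t()

def train_step(model, x, optimizer, lam=5.0):
    z = model.encoder(x)
    x_hat = model.decoder(z)
    q = model.soft_assign(z)
    p = target_distribution(q).detach()
    # Clustering loss (KL to the hardened targets) + weighted reconstruction loss.
    loss = F.kl_div(q.log(), p, reduction="batchmean") + lam * F.mse_loss(x_hat, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = SyntDECSketch()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
tokens = torch.randn(64, 768)          # stand-in for mBERT token embeddings
print(train_step(model, tokens, optimizer))
```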
Figure 1: t-SNE visualization of mBERT embeddings (clustered using KMeans) and SyntDEC (our probe) embeddings of tokens from Penn Treebank. Colors correspond to ground truth POS tags.
Figure 2: An illustration of POSI and CoLab formalisms.
Figure 3: An illustration of our SyntDEC model.
Figure 4: Comparison of confusion matrices of mBERT and SyntDEC for 12-tag experiments on English.
Figure 5: Expected Layer of POSI and CoLab under unsupervised SyntDEC (blue) and supervised settings (green) with E-BERT and mBERT representations.
Figure 6: Comparison of M1/F1 measure for POSI and CoLab under unsupervised (SyntDEC) and supervised settings with mBERT and E-BERT representations. One-to-one mapping is used to assign labels to clusters.
Figure 7: CBoW variant of SyntDEC. Embeddings of the context tokens are concatenated and used as input to SyntDEC to reconstruct the embedding of the token.
Figure 7 shows an illustration of SyntDEC-CBoW.
Figure 8: Confusion matrices for KMeans over mBERT (left) and SyntDEC_Morph (right) for the 20 most frequent tags (45-tag POSI).
Figure 9: t-SNE visualization of mBERT and SyntDEC embeddings of tokens from the 12-tag Universal Treebank English dataset. Colors correspond to the ground truth POS tags.
Table 1: Many-to-one (M1) accuracy and V-Measure (VM) of POSI on the 45-tag Penn Treebank WSJ dataset for 10 random runs. mBERT is used in all of our experiments (upper part of the table).
Method | M1 | VM
SyntDEC_Morph | 79.5 (±0.9) | 73.9 (±0.7)
SyntDEC | 77.6 (±1.5) | 72.5 (±0.9)
SAE | 75.3 (±1.4) | 69.9 (±0.9)
KMeans | 72.4 (±2.9) | -
Brown et al. (1992) | 65.6 (±NA) | -
Stratos, Collins, and Hsu (2016) | 67.7 (±NA) | -
Berg-Kirkpatrick et al. (2010) | 74.9 (±1.5) | -
Blunsom and Cohn (2011) | 77.5 (±NA) | 69.8
Stratos (2019) | 78.1 (±0.8) | -
Tran et al. (2016) | 79.1 (±NA) | 71.7 (±NA)
Yuret, Yatbaz, and Sert (2014) | 79.5 (±0.3) | 69.1 (±2.7)
Yatbaz, Sert, and Yuret (2012) (word-based) | 80.2 (±0.7) | 72.1 (±0.4)
He, Neubig, and Berg-Kirkpatrick (2018) | 80.8 (±1.3) | 74.1 (±0.7)
Table 3: Comparison of E-BERT and mBERT on the 45-tag POSI task. We report oracle results in this table.
Table 4: POSI M1 for SyntDEC with mBERT on the 12-tag universal treebank in monolingual and crosslingual settings. Monolingual: clusters are learned and evaluated on the same language. Crosslingual: clusters are learned on English and evaluated on all languages.
Table 5: CoLab results on the WSJ test set using the gold parses over five random runs. Our models were trained for 15 epochs and results from the final epoch for each run are recorded. DIORA results are reported from Drozdov et al. (2019a). DIORA_CB and DIORA*_CB are fairly specialized models involving codebook learning (*). We also report E-BERT and ELMo baselines from Drozdov et al. (2019a) (**). We significantly outperform these previously reported E-BERT/ELMo baselines. Our results are not directly comparable to DIORA as it uses the WSJ dev set for tuning and early stopping whereas we do not.
Table 6: Comparison of different orders of character n-gram embeddings for the 45-tag POSI task.
Table 7: Comparison of different methods to represent spans for CoLab. mBERT is used in these experiments.
Note that it is not necessary for constituents to be contiguous, but we only consider contiguous constituents for simplicity. [Footnote 3] X is distinct from the corpus vocabulary; in POSI, we tag each word token in each sentence with a POS tag.
In our preliminary experiments, we also tried other pooling mechanisms such as min/max pooling over subwords, but average performed the best among all of them. [Footnote 5] We use v2.0 in order to compare to Stratos (2019).
Table 11: Comparison of different mBERT layers for the CoLab task. We report oracle M1 accuracy and V-Measure (VM) averaged over 5 random runs.
Layers | M1 | VM
Layer 0 | 61.6 (±0.5) | 59.8 (±0.6)
Layer 0_1 | 61.9 (±0.8) | 59.9 (±0.8)
Layer 0_2 | 66.5 (±1.0) | 64.5 (±1.0)
Layer 0_3 | 67.4 (±2.4) | 65.6 (±1.6)
Layer 0_4 | 68.5 (±2.2) | 65.9 (±1.6)
Layer 0_5 | 69.4 (±2.4) | 66.2 (±1.3)
Layer 0_6 | 70.7 (±1.2) | 67.1 (±1.6)
Layer 0_7 | 72.8 (±1.2) | 68.3 (±0.7)
Layer 0_8 | 72.6 (±0.6) | 68.6 (±0.3)
Layer 0_9 | 72.7 (±0.7) | 68.9 (±0.5)
Layer 0_10 | 72.1 (±1.4) | 67.9 (±0.9)
Layer 0_11 | 72.0 (±1.2) | 67.9 (±0.9)
Layer 0_12 | 72.7 (±1.2) | 68.9 (±0.8)
References
Aljalbout, E.; Golkov, V.; Siddiqui, Y.; Strobel, M.; and Cremers, D. 2018. Clustering with deep learning: Taxonomy and new methods. arXiv preprint arXiv:1801.07648.
Belinkov, Y.; Durrani, N.; Dalvi, F.; Sajjad, H.; and Glass, J. 2017. What do neural machine translation models learn about morphology? In Proc. of ACL.
Bengio, Y.; Lamblin, P.; Popovici, D.; and Larochelle, H. 2007. Greedy layer-wise training of deep networks. In Proc. of NeurIPS.
Berg-Kirkpatrick, T.; Bouchard-Côté, A.; DeNero, J.; and Klein, D. 2010. Painless unsupervised learning with features. In Proc. of NAACL-HLT.
Blunsom, P.; and Cohn, T. 2011. A hierarchical Pitman-Yor process HMM for unsupervised part of speech induction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 865-874.
Brown, P. F.; Della Pietra, V. J.; deSouza, P. V.; Lai, J. C.; and Mercer, R. L. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4): 467-480.
Cao, S.; Kitaev, N.; and Klein, D. 2020. Unsupervised Parsing via Constituency Tests. arXiv preprint arXiv:2010.03146.
Chang, J.; Wang, L.; Meng, G.; Xiang, S.; and Pan, C. 2017. Deep adaptive image clustering. In Proc. of ICCV.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. of NAACL-HLT.
Drozdov, A.; Verga, P.; Chen, Y.-P.; Iyyer, M.; and McCallum, A. 2019a. Unsupervised Labeled Parsing with Deep Inside-Outside Recursive Autoencoders. In Proc. of EMNLP-IJCNLP.
Drozdov, A.; Verga, P.; Yadav, M.; Iyyer, M.; and McCallum, A. 2019b. Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders. In Proc. of NAACL-HLT.
Ghasedi Dizaji, K.; Herandi, A.; Deng, C.; Cai, W.; and Huang, H. 2017. Deep clustering via joint convolutional autoencoder embedding and relative entropy minimization. In Proc. of ICCV.
Gillenwater, J.; Ganchev, K.; Graça, J.; Pereira, F.; and Taskar, B. 2010. Sparsity in Dependency Grammar Induction. In Proc. of ACL.
He, J.; Neubig, G.; and Berg-Kirkpatrick, T. 2018. Unsupervised learning of syntactic structure with invertible neural projections. In Proc. of EMNLP.
He, J.; Zhang, Z.; Berg-Kirkpatrick, T.; and Neubig, G. 2019. Cross-lingual syntactic transfer through unsupervised adaptation of invertible projections. In Proc. of ACL.
Hewitt, J.; and Manning, C. D. 2019. A structural probe for finding syntax in word representations. In Proc. of NAACL-HLT.
Huang, P.; Huang, Y.; Wang, W.; and Wang, L. 2014. Deep embedding network for clustering. In Proc. of International Conference on Pattern Recognition.
Jawahar, G.; Sagot, B.; and Seddah, D. 2019. What Does BERT Learn about the Structure of Language? In Proc. of ACL.
Ji, P.; Zhang, T.; Li, H.; Salzmann, M.; and Reid, I. 2017. Deep subspace clustering networks. In Proc. of NeurIPS.
Jiang, Y.; Han, W.; and Tu, K. 2016. Unsupervised Neural Dependency Parsing. In Proc. of EMNLP.
Jiang, Z.; Zheng, Y.; Tan, H.; Tang, B.; and Zhou, H. 2016. Variational deep embedding: An unsupervised and generative approach to clustering. In Proc. of IJCAI.
Jin, L.; Doshi-Velez, F.; Miller, T.; Schwartz, L.; and Schuler, W. 2019. Unsupervised learning of PCFGs with normalizing flow. In Proc. of ACL.
Jin, L.; and Schuler, W. 2020. The Importance of Category Labels in Grammar Induction with Child-directed Utterances. In Proc. of International Conference on Parsing Technologies.
Johnson, M. 2007. Why doesn't EM find good HMM POS-taggers? In Proc. of EMNLP-CoNLL.
Joshi, M.; Chen, D.; Liu, Y.; Weld, D. S.; Zettlemoyer, L.; and Levy, O. 2020. SpanBERT: Improving Pre-training by Representing and Predicting Spans. TACL, 8: 64-77.
Joshi, M.; Levy, O.; Zettlemoyer, L.; and Weld, D. 2019. BERT for Coreference Resolution: Baselines and Analysis. In Proc. of EMNLP-IJCNLP.
Joulin, A.; Grave, E.; Bojanowski, P.; and Mikolov, T. 2017. Bag of Tricks for Efficient Text Classification. In Proc. of EACL.
Kim, Y.; Dyer, C.; and Rush, A. M. 2019. Compound Probabilistic Context-Free Grammars for Grammar Induction. In Proc. of ACL.
Kitaev, N.; and Klein, D. 2018. Constituency Parsing with a Self-Attentive Encoder. In Proc. of ACL.
Klein, D.; and Manning, C. D. 2002. A Generative Constituent-Context Model for Improved Grammar Induction. In Proc. of ACL.
Klein, D.; and Manning, C. D. 2004. Corpus-Based Induction of Syntactic Structure: Models of Dependency and Constituency. In Proc. of ACL.
Lee, K.; He, L.; and Zettlemoyer, L. 2018. Higher-Order Coreference Resolution with Coarse-to-Fine Inference. In Proc. of NAACL-HLT.
Li, X. L.; and Eisner, J. 2019. Specializing word embeddings (for parsing) by information bottleneck. arXiv preprint arXiv:1910.00163.
Lin, C.-C.; Ammar, W.; Dyer, C.; and Levin, L. 2015. Unsupervised POS Induction with Word Embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Linzen, T.; Dupoux, E.; and Goldberg, Y. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. TACL, 4: 521-535.
Littell, P.; Mortensen, D. R.; Lin, K.; Kairis, K.; Turner, C.; and Levin, L. 2017. URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proc. of EACL.
Liu, N. F.; Gardner, M.; Belinkov, Y.; Peters, M. E.; and Smith, N. A. 2019a. Linguistic Knowledge and Transferability of Contextual Representations. In Proc. of NAACL-HLT.
Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Marcus, M. P.; Santorini, B.; and Marcinkiewicz, M. A. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2): 313-330.
McDonald, R.; Nivre, J.; Quirmbach-Brundage, Y.; Goldberg, Y.; Das, D.; Ganchev, K.; Hall, K.; Petrov, S.; Zhang, H.; Täckström, O.; Bedini, C.; Bertomeu Castelló, N.; and Lee, J. 2013. Universal Dependency Annotation for Multilingual Parsing. In Proc. of ACL.
Mikolov, T.; Chen, K.; Corrado, G.; and Dean, J. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Mrini, K.; Dernoncourt, F.; Bui, T.; Chang, W.; and Nakashole, N. 2019. Rethinking self-attention: An interpretable self-attentive encoder-decoder parser. arXiv preprint arXiv:1911.03875.
Nivre, J.; de Marneffe, M.-C.; Ginter, F.; Goldberg, Y.; Hajič, J.; Manning, C. D.; McDonald, R.; Petrov, S.; Pyysalo, S.; Silveira, N.; Tsarfaty, R.; and Zeman, D. 2016. Universal Dependencies v1: A Multilingual Treebank Collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), 1659-1666. Portorož, Slovenia: European Language Resources Association (ELRA).
Peters, M.; Neumann, M.; Iyyer, M.; Gardner, M.; Clark, C.; Lee, K.; and Zettlemoyer, L. 2018a. Deep Contextualized Word Representations. In Proc. of NAACL-HLT.
Peters, M.; Neumann, M.; Zettlemoyer, L.; and Yih, W.-t. 2018b. Dissecting Contextual Word Embeddings: Architecture and Representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
Pimentel, T.; Valvoda, J.; Hall Maudslay, R.; Zmigrod, R.; Williams, A.; and Cotterell, R. 2020. Information-Theoretic Probing for Linguistic Structure. In Proc. of ACL.
Pires, T.; Schlinger, E.; and Garrette, D. 2019. How Multilingual is Multilingual BERT? In Proc. of ACL.
Rosenberg, A.; and Hirschberg, J. 2007. V-Measure: A Conditional Entropy-Based External Cluster Evaluation Measure. In Proc. of EMNLP-CoNLL, 410-420. Prague, Czech Republic.
Seginer, Y. 2007. Fast Unsupervised Incremental Parsing. In Proc. of ACL.
Smith, N. A.; and Eisner, J. 2006. Annealing Structural Bias in Multilingual Weighted Grammar Induction. In Proc. of COLING-ACL.
Spitkovsky, V. I.; Alshawi, H.; and Jurafsky, D. 2013. Breaking Out of Local Optima with Count Transforms and Model Recombination: A Study in Grammar Induction. In Proc. of EMNLP.
Stratos, K. 2019. Mutual Information Maximization for Simple and Accurate Part-Of-Speech Induction. In Proc. of NAACL-HLT.
Stratos, K.; Collins, M.; and Hsu, D. 2016. Unsupervised Part-Of-Speech Tagging with Anchor Hidden Markov Models. TACL, 4: 245-257.
Tenney, I.; Das, D.; and Pavlick, E. 2019. BERT Rediscovers the Classical NLP Pipeline. In Proc. of ACL.
Tenney, I.; Xia, P.; Chen, B.; Wang, A.; Poliak, A.; McCoy, R. T.; Kim, N.; Van Durme, B.; Bowman, S. R.; Das, D.; et al. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. In Proc. of ICLR.
Toshniwal, S.; Shi, H.; Shi, B.; Gao, L.; Livescu, K.; and Gimpel, K. 2020. A Cross-Task Analysis of Text Span Representations. In Proc. of RepL4NLP.
Tran, K. M.; Bisk, Y.; Vaswani, A.; Marcu, D.; and Knight, K. 2016. Unsupervised Neural Hidden Markov Models. In Proc. of the Workshop on Structured Prediction for NLP.
Tsai, H.; Riesa, J.; Johnson, M.; Arivazhagan, N.; Li, X.; and Archer, A. 2019. Small and Practical BERT Models for Sequence Labeling. In Proc. of EMNLP-IJCNLP.
Tseng, H.; Jurafsky, D.; and Manning, C. 2005. Morphological features help POS tagging of unknown words across language varieties. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing.
Wu, S.; and Dredze, M. 2019. Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT. In Proc. of EMNLP-IJCNLP.
Wu, W.; Wang, F.; Yuan, A.; Wu, F.; and Li, J. 2020. CorefQA: Coreference Resolution as Query-based Span Prediction. In Proc. of ACL.
Xie, J.; Girshick, R.; and Farhadi, A. 2016. Unsupervised deep embedding for clustering analysis. In Proc. of ICML.
Yang, B.; Fu, X.; Sidiropoulos, N. D.; and Hong, M. 2017. Towards k-means-friendly spaces: Simultaneous deep learning and clustering. In Proc. of ICML.
Yang, J.; Parikh, D.; and Batra, D. 2016. Joint unsupervised learning of deep representations and image clusters. In Proc. of CVPR.
Yatbaz, M. A.; Sert, E.; and Yuret, D. 2012. Learning syntactic categories using paradigmatic representations of word context. In Proc. of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.
Yuret, D.; Yatbaz, M. A.; and Sert, E. 2014. Unsupervised instance-based part of speech induction using probable substitutes. In Proc. of COLING 2014.
Zhou, J.; and Zhao, H. 2019. Head-Driven Phrase Structure Grammar Parsing on Penn Treebank. In Proc. of ACL.
Zhou, Y.; and Srikumar, V. 2021. DirectProbe: Studying Representations without Classifiers. In Proc. of NAACL-HLT.
Table 12: Comparison of different E-BERT layers for the CoLab task. We report oracle M1 accuracy and V-Measure (VM) averaged over multiple random runs.
Table 13: Comparison of different mBERT layers for the CoLab task. We report oracle M1 accuracy and V-Measure (VM) averaged over multiple random runs.
Table 14: Comparison of different mBERT layers for the 45-tag POSI task. We report oracle M1 accuracy and V-Measure (VM).
Table 15: Comparison of different E-BERT layers for the 45-tag POSI task. We report oracle M1 accuracy and V-Measure (VM) averaged over 5 random runs.
| [] |
[
"VICARIOUS OFFENSE AND Noise Audit OF OFFENSIVE SPEECH CLASSIFIERS A PREPRINT",
"VICARIOUS OFFENSE AND Noise Audit OF OFFENSIVE SPEECH CLASSIFIERS A PREPRINT",
"VICARIOUS OFFENSE AND Noise Audit OF OFFENSIVE SPEECH CLASSIFIERS A PREPRINT",
"VICARIOUS OFFENSE AND Noise Audit OF OFFENSIVE SPEECH CLASSIFIERS A PREPRINT"
] | [
"Tharindu Cyril Weerasooriya \nRochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n\n",
"Sujan Dutta \nRochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n\n",
"Tharindu Ranasinghe t.ranasinghe@aston.ac.uk \nRochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n\n",
"Marcos Zamperi \nRochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n\n",
"Christopher M Homan \nRochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n\n",
"Ashiqur R Khudabukhsh khudabukhsh@mail.rit.edu \nRochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n\n",
"Tharindu Cyril Weerasooriya \nRochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n\n",
"Sujan Dutta \nRochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n\n",
"Tharindu Ranasinghe t.ranasinghe@aston.ac.uk \nRochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n\n",
"Marcos Zamperi \nRochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n\n",
"Christopher M Homan \nRochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n\n",
"Ashiqur R Khudabukhsh khudabukhsh@mail.rit.edu \nRochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n\n"
] | [
"Rochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n",
"Rochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n",
"Rochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n",
"Rochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n",
"Rochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n",
"Rochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n",
"Rochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n",
"Rochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n",
"Rochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n",
"Rochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n",
"Rochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n",
"Rochester Institute of Technology\nRochester Institute of Technology\nAston University\nGeorge Mason University\nRochester Institute of Technology\nRochester Institute of Technology\n"
] | [] | this paper discusses and contains content that is offensive or disturbing. This paper examines social web content moderation from two key perspectives: automated methods (machine moderators) and human evaluators (human moderators). We conduct a noise audit at an unprecedented scale using nine machine moderators trained on well-known offensive speech data sets evaluated on a corpus sampled from 92 million YouTube comments discussing a multitude of issues relevant to US politics. We introduce a first-of-its-kind data set of vicarious offense. We ask annotators:(1) if they find a given social media post offensive; and (2) how offensive annotators sharing different political beliefs would find the same content. Our experiments with machine moderators reveal that moderation outcomes wildly vary across different machine moderators. Our experiments with human moderators suggest that (1) political leanings considerably affect first-person offense perspective;(2) Republicans are the worst predictors of vicarious offense; (3) predicting vicarious offense for the Republicans is most challenging than predicting vicarious offense for the Independents and the Democrats; and (4) disagreement across political identity groups considerably increases when sensitive issues such as reproductive rights or gun control/rights are discussed. Both experiments suggest that offense, is indeed, highly subjective and raise important questions concerning content moderation practices. | 10.48550/arxiv.2301.12534 | [
"https://export.arxiv.org/pdf/2301.12534v1.pdf"
] | 256,390,399 | 2301.12534 | 43917bd49dfef3ae57d5b08f9086a45ae6e683ea |
VICARIOUS OFFENSE AND Noise Audit OF OFFENSIVE SPEECH CLASSIFIERS A PREPRINT
January 31, 2023
Tharindu Cyril Weerasooriya
Rochester Institute of Technology
Rochester Institute of Technology
Aston University
George Mason University
Rochester Institute of Technology
Rochester Institute of Technology
Sujan Dutta
Rochester Institute of Technology
Rochester Institute of Technology
Aston University
George Mason University
Rochester Institute of Technology
Rochester Institute of Technology
Tharindu Ranasinghe t.ranasinghe@aston.ac.uk
Rochester Institute of Technology
Rochester Institute of Technology
Aston University
George Mason University
Rochester Institute of Technology
Rochester Institute of Technology
Marcos Zamperi
Rochester Institute of Technology
Rochester Institute of Technology
Aston University
George Mason University
Rochester Institute of Technology
Rochester Institute of Technology
Christopher M Homan
Rochester Institute of Technology
Rochester Institute of Technology
Aston University
George Mason University
Rochester Institute of Technology
Rochester Institute of Technology
Ashiqur R Khudabukhsh khudabukhsh@mail.rit.edu
Rochester Institute of Technology
Rochester Institute of Technology
Aston University
George Mason University
Rochester Institute of Technology
Rochester Institute of Technology
VICARIOUS OFFENSE AND Noise Audit OF OFFENSIVE SPEECH CLASSIFIERS A PREPRINT
January 31, 2023Polarization · Machine Translation · US News Networks · Human Annotation
this paper discusses and contains content that is offensive or disturbing. This paper examines social web content moderation from two key perspectives: automated methods (machine moderators) and human evaluators (human moderators). We conduct a noise audit at an unprecedented scale using nine machine moderators trained on well-known offensive speech data sets evaluated on a corpus sampled from 92 million YouTube comments discussing a multitude of issues relevant to US politics. We introduce a first-of-its-kind data set of vicarious offense. We ask annotators:(1) if they find a given social media post offensive; and (2) how offensive annotators sharing different political beliefs would find the same content. Our experiments with machine moderators reveal that moderation outcomes wildly vary across different machine moderators. Our experiments with human moderators suggest that (1) political leanings considerably affect first-person offense perspective;(2) Republicans are the worst predictors of vicarious offense; (3) predicting vicarious offense for the Republicans is most challenging than predicting vicarious offense for the Independents and the Democrats; and (4) disagreement across political identity groups considerably increases when sensitive issues such as reproductive rights or gun control/rights are discussed. Both experiments suggest that offense, is indeed, highly subjective and raise important questions concerning content moderation practices.
Introduction
Offensive speech on web platforms is a persistent problem with wide-ranging impacts [1,2]. Not only can it impact those directly targeted [3,4,5], but it can also create a climate of negative behavior, where such behavior becomes normalized. Offensive speech can have significant economic impacts, as the sponsors of those platforms where offensive speech is particularly prevalent can become associated with the offending language and lose business, or withdraw their support from the platform in which it occurred [6].
Figure 1: Illustrative examples highlighting nuanced inconsistencies between machine moderators and human moderators with different political leanings. For every comment, majority vote is used to aggregate individual machine and human moderators' verdicts. An angry emoji and a green checkbox indicate offensive and notOffensive labels, respectively. These real-world examples are drawn from comments on YouTube news videos of three major US cable news networks: Fox News, CNN, and MSNBC. Each example is annotated by 20 human moderators, with at least six Republicans, Democrats, and Independents. Nine well-known offensive speech data sets are used to create nine machine moderators. The grid summarizes vicarious offense, where annotators belonging to the row political identity are asked to predict the vicarious offense perspectives of the two other political identities mentioned in the columns.
There are many reasons why moderation fails to eliminate the problem. Among them is the reality that people often disagree on what is offensive [7,8]. Disagreement can be caused by a lack of understanding of the reasons why something is offensive, or how the offensive language can impact those targeted [9], or simply by the intrinsic subjectivity [10] of the phenomenon. This is reflected in wildly diverging annotations in publicly available datasets,
which result in diverging performances of machine learning models trained to detect offensive speech. And yet, thus far, there have been only a few studies of the nature of this disagreement, from the perspective of either human or machine moderators [11]. Both camps of moderators are related, as human annotators play a critical role in training machine moderators.
In this paper, we tackle the problem of disagreement in offensive speech classification from both a human and a machine perspective, and explore the relationship between the two. We focus on the particular problem of offensive speech in political discourse because it is a timely and important one. It is also a problem on which we expected to find a relatively strong and clear disagreement signal, as it is a topic that has become especially polarized in recent years [12,13,14,15].
Contributions and Novelties
Our contributions and novelties are the following:
• Noise Audit: While limited literature exists on investigating the generalizability of offensive speech detection systems across data sets [16] and vulnerability to adversarial attacks [17], unseen use cases [18], or geographic biases [19], to the best of our knowledge, no work exists on a comprehensive, in-the-wild evaluation of offensive speech filtering outcomes on large-scale real-world political discussions with documented political dissonance. Much in the spirit of the celebrated book "Noise: A Flaw in Human Judgment" [20] by Kahneman et al., we conduct a comprehensive noise audit of several well-known offensive speech classifiers on a massive political data set consisting of 92,242,547 comments on 241,112 videos hosted by official YouTube handles of three prominent US cable news networks. Our data set spans the time period 2014-2022, which has seen two presidential elections, a raging pandemic, an ongoing war, and a far-from-peaceful transfer of power.
A noise audit, as described in [20], is essentially an audit of outcome variability across multiple (competent) decision systems. The authors of [20] outline several scenarios, such as judicial sentencing or insurance settlements, that indicate widespread noise in diverse real-world settings. In this paper, we seek to understand how content moderation outcomes vary across different offensive speech classifiers (dubbed machine moderators) in political discourse at the granularity of individual social media posts. One key impediment to performing in-the-wild analysis of content moderation systems is a lack of ground truth: it is resource-intensive to annotate a representative data set of social media posts to test content moderation at scale. We bypass this requirement as we study disagreement among machine moderators on a large pool of unlabeled instances; a minimal sketch of one such per-comment disagreement measure is given after this list.
• Vicarious Offense: We introduce the notion of vicarious offense. With the 2022 US midterm election being months away and the current US political climate showing no signs of reconciliation between the politically divergent left and right, we ask a timely and important question: how well does a Democratic-leaning user perceive what content would be deemed offensive by her Republican-leaning counterpart, or vice-versa? Documented evidence indicates that negative views towards the opposite political party have affected outcomes in settings as diverse as allocating scholarship funds [21], mate selection [22], and employment decisions [23]. Is it possible that such negative views also affect our ability to perceive what we dub vicarious offense?
We conduct a detailed annotation study to understand the phenomenon of vicarious offense and release the first-of-its-kind data set where an annotator labels both (1) whether she finds a social media post offensive, and (2) whether she feels a person with different political beliefs would find the same social media post offensive. Unlike most studies on political biases, which proffer a binarized world-view of US politics, in our study we consider all three key players in US democracy: the Democrats, the Republicans, and the Independents. In the era of growing political polarization in the US, where a sectarian us vs. them often dominates the political discourse [24], our study marks one of the early attempts to quantify how well the political opposites understand each other when asked to be in their opposites' shoes. Figure 1 presents a few illustrative examples from this rich data set. Note that, while Republicans feel that the comment Potentially criminal? It WAS fucking criminal and Trump and all associated with the insurrection should be in prison. will invite equal ire by the Independents, the Independents, actually, do not find it offensive. Our data set reveals how little different political groups understand each other and presents startling findings when sensitive issues such as reproductive rights or gun control/rights are in the mix.
• Resource: We will release a data set of both first-person and vicarious offense 2 . Our data set consists of 2,310 YouTube comments drawn from comments on news videos of three major US cable news networks: Fox News, CNN, and MSNBC. Each example is annotated by 20 annotators, and by at least six Republicans, Democrats, and Independents. We present all 20 labels with detailed demographic information (race, gender, age, political identity) of individual annotators. In addition to presenting a standard offensive speech data set with fine-grained labels and rich annotator information, we present the first-of-its-kind data set of vicarious offense introduced above.
• Social: Ensuring civil discourse while discussing politics remains a long-standing concern [1,13,25]. Over the last few years, both sides of the political aisle have accused web platforms of bias [26]. Our study presents critical insights into modern-day moderation challenges when it comes to political content and reveals that humans often disregard style over content when the content aligns with their political views. Our focused analysis of sensitive issues such as reproductive rights and gun control/rights suggests that human political biases may pose significant challenges to moderating sensitive political content.
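As referenced in the Noise Audit contribution above, the per-comment outcome variability underlying the audit can be summarized with a very simple statistic. The sketch below (with a toy verdict matrix) is one illustrative measure and is not necessarily the exact metric used later in the paper.

```python
import numpy as np

def panel_disagreement(verdicts):
    # verdicts: (n_comments, n_moderators) binary matrix (1 = offensive, 0 = notOffensive).
    frac_offensive = verdicts.mean(axis=1)
    # 0 when all moderators agree on a comment, 1 when the panel is split evenly.
    return 2 * np.minimum(frac_offensive, 1 - frac_offensive)

verdicts = np.random.randint(0, 2, size=(5, 9))   # toy verdicts from nine machine moderators
print(panel_disagreement(verdicts))
```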
Our Approach and Paper Road-map
Section 3 investigates how consistent machine moderators are when evaluated at scale. To this end, we evaluate nine classifiers (described in Section 2.1) trained on well-known offensive speech data sets on a political corpus (described in Section 2.2). Once we establish that moderation outcomes wildly vary across machine moderators in the wild, we turn our focus to human moderators. We carefully select a data set for human annotation (described in Section 4) that contains instances that machine moderators (1) near-unanimously marked as not offensive or (2) near-unanimously marked as offensive, as well as instances on which (3) machine moderators exhibited maximal disagreement with no clear consensus. We then summarize our annotator demographics (Section 5.1) and conduct a thorough annotation study, examining (1) the extent to which human moderators are aligned with machine moderators and (2) how political identities affect moderation outcomes. Finally, we explore vicarious offense. We ask annotators to predict out-group labels, i.e., for a Republican annotator, we ask how she thinks a Democrat and an Independent would label a given instance. We then analyze how well annotators belonging to different political identities understand out-group perception of offense.
Design Considerations
Machine Moderators
Research on automatic content moderation has focused on a broad taxonomy of inappropriate content that includes toxic, harmful, abusive, hateful, and offensive language. For calibration purposes, following [27], we transform the labels of instances present in eight well-known data sets into two broad labels: offensive and not offensive. The labels correspond to level A of the popular OLID taxonomy [28] widely used in offensive speech classification. The offensive class represents posts containing any form of non-acceptable language (e.g., profanity, swear words) or a targeted offense including insults and threats.
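A minimal sketch of this label collapsing step is shown below; the mapping entries are illustrative, since the exact label inventories differ across the nine source data sets.

```python
# Hypothetical mapping onto OLID level-A classes; real inventories vary per data set.
LABEL_TO_LEVEL_A = {
    "hateful": "OFF", "abusive": "OFF", "offensive": "OFF", "profane": "OFF",
    "normal": "NOT", "none": "NOT", "neither": "NOT",
}

def to_level_a(source_label):
    # Unseen labels default to NOT here purely for illustration.
    return LABEL_TO_LEVEL_A.get(source_label.lower(), "NOT")

print(to_level_a("Hateful"), to_level_a("normal"))
```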
We investigate nine offensive language identification models. We trained eight of these models on the following offensive speech data sets: (1) AHSD [29]; (2) HASOC [30]; (3) HatEval [3]; (4) HateXplain [31]; (5) OLID [28]; (6) TRAC [32]; (7) OHS [33]; (8) TCC 3 . These data sets are well known in the web-toxicity domain and consist of social media posts obtained from Twitter, Facebook, Gab, Reddit, and YouTube. We trained BERT-LARGE-CASED models on each of the training sets in these data sets with a text classification objective. We employed a batch size of 16, the Adam optimizer with learning rate 2e−5, and linear learning rate warm-up over 10% of the training data. During training, the parameters of the BERT-LARGE-CASED model, as well as the parameters of the subsequent layers, were updated. The models were evaluated during training on an evaluation set containing one-fifth of the rows of the training data. We performed early stopping if the evaluation loss did not improve over three evaluation steps. All models were trained for three epochs. As the ninth and final model, we used the publicly available Detoxify [34], a ROBERTA-BASE model trained on data from the Unintended Bias in Toxicity Classification challenge.
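A minimal sketch of this fine-tuning setup using the Hugging Face Trainer (not the authors' exact code); the CSV path, column names, label encoding, and evaluation interval are assumptions made for illustration.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          EarlyStoppingCallback, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-large-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-large-cased", num_labels=2)

# Assumed layout: train.csv with columns `text` and `label` (0 = notOffensive, 1 = offensive).
data = load_dataset("csv", data_files={"train": "train.csv"})["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128), batched=True)
split = data.train_test_split(test_size=0.2, seed=42)   # hold out one fifth for evaluation

args = TrainingArguments(
    output_dir="offense-clf",
    per_device_train_batch_size=16,      # batch size 16
    learning_rate=2e-5,                  # Adam, lr 2e-5
    warmup_ratio=0.1,                    # linear warm-up over 10% of training
    num_train_epochs=3,                  # three epochs
    evaluation_strategy="steps",
    eval_steps=500,
    save_strategy="steps",
    save_steps=500,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    tokenizer=tokenizer,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],  # stop after 3 non-improving evals
)
trainer.train()
```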
In-the-wild Evaluation Data Set
We consider a data set of more than 92 million comments on 241,112 news videos posted between January 1, 2014 and August 27, 2022 by the official YouTube handles of three prominent US cable news networks: CNN, Fox News, and MSNBC. We select this data set for our noise audit for the following reasons. First, all three YouTube channels have millions of subscribers and substantial user engagement, indicating broad participation from a diverse audience. Second, these mainstream news networks cover a broad range of topics. The data set spans eight years that have seen a wide range of highly debated sociopolitical events, including a raging pandemic, two bitterly fought presidential elections, a global protest for racial justice and equality, the longest government shutdown, four Supreme Court confirmations, the overturning of Roe v. Wade, and a major war, to name a few. Hence, the YouTube comments data set is rich in topical diversity. Third, previous studies have reported substantial partisan and ideological divergence in both content and audience across these news networks [35, 36, 37, 38], and prior work indicates considerable political dissonance in similar data sets [14, 25].
When compared with the years 2014-2018, YouTube engagement has grown for these three channels [14], so uniform sampling would bias the data toward more recent years. In order to ensure that our data set is well-distributed over time, for each channel we sample 200K comments from each of five bins: 2014-2018, 2019, 2020, 2021, and 2022. We club the first five years together into a single bin due to sparse user engagement in the initial years (Figure 7 demonstrates that our data set has sparse user engagement via comments between 2014 and 2018). Overall, our data set comprises three million randomly sampled comments, one million from each of the three YouTube channels, denoted by D cnn, D fox, and D msnbc, respectively.
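The temporal stratification described above can be expressed in a few lines of pandas; the file name and column layout (channel, timestamp, text) are assumptions made for illustration, not the actual collection pipeline.

```python
import pandas as pd

comments = pd.read_json("youtube_comments.jsonl", lines=True)   # assumed columns: channel, timestamp, text
comments["year"] = pd.to_datetime(comments["timestamp"]).dt.year

# 2014-2018 are clubbed into a single bin due to sparse engagement; later years get their own bins.
comments["bin"] = comments["year"].map(lambda y: "2014-2018" if y <= 2018 else str(y))

samples = []
for (_channel, _bin), group in comments.groupby(["channel", "bin"]):
    samples.append(group.sample(n=min(200_000, len(group)), random_state=0))
corpus = pd.concat(samples)   # roughly one million comments per channel, three million overall
```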
Vicarious Offense
We break down our study on how political leanings affect perception of offense into two parts: (1) annotators answer questions relevant to direct, first-person perception of offense; (2) annotators answer about vicarious offense -a novel research direction heretofore never explored to our knowledge. Survey design is described in detail in Section 5.1.
The first part of the annotation study resembles a standard annotation scheme grounded in existing literature [7, 39]. We first collect demographic information that includes race, gender, age, and political identity, and then present example YouTube comments and ask about first-person perception of offense. These questions are drawn from existing well-known annotation protocols for offensive speech (see Appendix for details) [28], where one or more first-person perceptions of offense are collected for each data point.
Consider an annotator who is a registered Independent. For each instance, in the second part of the annotation study, we ask her two additional questions: (1) do you think a Republican will find this post offensive?; and (2) do you think a Democrat will find this post offensive? For each annotator, we adjust these questions to reflect the two political beliefs that the annotator does not hold. A registered Democrat is asked about her vicarious offense perception for a Republican and an Independent, while a registered Republican is asked about her vicarious offense perception for a Democrat and an Independent. Existing literature reveals that an annotator's gender, race, and political leaning may affect how offensive she finds a given piece of content [40, 41, 42, 43, 44]. However, to our knowledge, vicarious offense, i.e., an annotator's perspective on how someone else with a political belief different from her own would feel about the said content, has never been explored heretofore.
Notice that, for any given social media post d, we have nine vantage points of offense perception (see Figure 1): (1) how offensive a Republican finds d; (2) how offensive a Democrat finds d; (3) how offensive an Independent finds d; (4) how offensive a Republican thinks a Democrat would find d; (5) how offensive a Republican thinks an Independent would find d; (6) how offensive a Democrat thinks a Republican would find d; (7) how offensive a Democrat thinks an Independent would find d; (8) how offensive an Independent thinks a Republican would find d; and (9) how offensive an Independent thinks a Democrat would find d. The first three vantage points examine the first-person perspective of offense. Recent lines of work examining how political leanings affect perception of offense limit their analyses to the first two vantage points, primarily considering a binarized view of US politics [44]. However, Figure 2 demonstrates that the oversimplified political dichotomy most papers adhere to [13, 45] disregards the political majority: the Independents.
The next six vantage points present insights into vicarious offense. Our study is the first of its kind to explore how well we can predict offense for others who do not share the same political beliefs. In the era of growing political polarization in the US, our study marks the first attempt to quantify how well the political opposites understand each other when asked to put themselves in each other's shoes.

Noise Audit of Machine Moderators

Figure 3 summarizes the pairwise agreement results (Cohen's κ) on the overall corpus that combines D cnn, D msnbc, and D fox. Results for the individual news networks are qualitatively similar. Our results indicate that no machine moderator pair exhibits substantial agreement (κ ≥ 0.81); only a handful exhibit moderate agreement (0.41 ≤ κ ≤ 0.60); and several pairs exhibit only fair, slight, or no agreement. When we quantify agreement across all machine moderators, the overall Fleiss' κ values across D cnn, D fox, and D msnbc are 0.27, 0.25, and 0.22, respectively.

We next examine the distribution of machine moderators' aggregate verdicts on individual comments. Let Outcome(d, M) take a YouTube comment d and a model M as input and output o ∈ {offensive, notOffensive}. We define offenseScore(d) as the number of machine moderators that mark d as offensive, i.e.,

offenseScore(d) = Σ_{i=1}^{N} I(Outcome(d, M_i) = offensive).

Figure 4 summarizes the distribution of offenseScore in our evaluation data set. We note that the plot exhibits a power-law distribution. A large fraction (nearly half) of the content is not flagged by any of the MMs, whereas a minuscule proportion (0.03%) is flagged as offensive by all. The bin with offenseScore 1 is particularly interesting: it indicates that only one of the nine MMs marks these comments as offensive. Therefore, the moderation fate of every comment in this bin is highly volatile; if any MM other than the one that flags it is deployed, the comment will not be censored. We also observe that 10.1% of the content belongs to bins 4 and 5. These two bins represent the comments on which the MMs have maximal disagreement. To sum up, Figure 4 emphasizes that a large fraction of the social web represents a disputed moderation zone and possibly requires human moderators' assistance. In the following sections, we thus investigate how human moderators fare when they are tasked with the difficult job of determining offense in political discourse.
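For concreteness, offenseScore and the per-bin distribution plotted in Figure 4 can be computed from a matrix of per-model verdicts, as in the sketch below; the 0/1 integer encoding of verdicts is an assumption made for illustration, not the authors' released code.

```python
import numpy as np

def offense_score(verdicts: np.ndarray) -> np.ndarray:
    """verdicts: int array of shape (num_comments, 9); 1 = offensive, 0 = notOffensive."""
    return verdicts.sum(axis=1)

def score_distribution(verdicts: np.ndarray) -> np.ndarray:
    """Fraction of comments flagged by exactly j moderators, for j = 0..9 (the Figure 4 bars)."""
    counts = np.bincount(offense_score(verdicts), minlength=verdicts.shape[1] + 1)
    return counts / counts.sum()
```

Corpus-level agreement such as the Fleiss' κ values quoted above can then, for instance, be computed with statsmodels.stats.inter_rater.fleiss_kappa on the per-comment counts of offensive and notOffensive verdicts.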
Stratified Sampling
In order to compare and contrast machine moderation and human moderation, we first construct a representative set of easy and challenging examples from the machine moderators' perspective. For each corpus D, we conduct a stratified sampling from three subsets: (1) a subset where most MMs agree that the content is not offensive (denoted by D notOffensive ); (2) a subset where most MMs agree the content is offensive (denoted by D offensive ); and (3) a subset in the twilight zone where nearly half of the models agree that the content is offensive with the other half agreeing that it is not (denoted by D debated ). Formally,
• d ∈ D notOffensive if 0 ≤ Σ_{i=1}^{N} I(Outcome(d, M_i) = offensive) ≤ 1
• d ∈ D debated if ⌊N/2⌋ ≤ Σ_{i=1}^{N} I(Outcome(d, M_i) = offensive) ≤ ⌈N/2⌉
• d ∈ D offensive if N − 1 ≤ Σ_{i=1}^{N} I(Outcome(d, M_i) = offensive) ≤ N,
where N denotes the total number of offensive speech classifiers considered (in our case, N = 9), and I is the indicator function.
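A minimal sketch of this stratification, reusing the 0/1 verdict matrix from the sketch above with N = 9; the ⌊N/2⌋ and ⌈N/2⌉ bounds for D debated are an assumption consistent with the bins 4 and 5 mentioned earlier.

```python
import numpy as np

def partition(verdicts: np.ndarray):
    """Return boolean masks for D_notOffensive, D_debated, and D_offensive."""
    s = verdicts.sum(axis=1)                          # offenseScore per comment
    n = verdicts.shape[1]                             # number of machine moderators (9)
    not_offensive = s <= 1
    debated = (s >= n // 2) & (s <= (n + 1) // 2)     # bins 4 and 5 when n = 9
    offensive = s >= n - 1
    return not_offensive, debated, offensive
```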
We have three news outlets, three sub-corpora defined based on MM disagreement, and five time periods, yielding 45 different combinations of news network, temporal bin, and MM disagreement. We weigh each of these combinations equally and sample 1,110 comments (D general). In addition, we sample 600 comments containing the keyword gun (D gun) and 600 more containing the keyword abortion (D abortion) to shed light on human-machine disagreement on hot-button issues. Filtering relevant content by a single, general keyword has been used previously in the computational social science literature [47, 45], and we argue that it is a high-recall approach to obtain discussions relevant to reproductive rights and gun control/rights without biasing the selection toward event-specific keywords (e.g., Uvalde or Roe v. Wade).
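The issue-specific pools can be selected with a simple case-insensitive keyword match before sampling; the data frame and its text column are assumptions carried over from the earlier sampling sketch.

```python
import pandas as pd

def issue_pool(corpus: pd.DataFrame, keyword: str, n: int = 600, seed: int = 0) -> pd.DataFrame:
    """Sample n comments whose text contains the whole-word keyword (case-insensitive)."""
    mask = corpus["text"].str.contains(rf"\b{keyword}\b", case=False, na=False, regex=True)
    return corpus[mask].sample(n=n, random_state=seed)

# e.g., issue_pool(corpus, "gun") and issue_pool(corpus, "abortion")
```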
Human Moderators
Annotation Study Design
We design our survey instrument based on prior human annotation studies of offensive and subjective content [48, 49] 4 . We host the survey on Qualtrics; for attention and completion tracking, we generate a verification code for the annotator to copy from the survey to MTurk. We release our study in batches of 30 data items in total, with 10 items from each news outlet but with varying levels of MM disagreement. Each batch consists of 10 instances each from D offensive, D debated, and D notOffensive. Each instance is designed to be annotated by 20 annotators on MTurk. We set restrictions on MTurk to show the study only to users registered as living in the USA due to the nature of the study. We ask not only whether each item is offensive to the annotator, but also whether someone from an opposite political identity would find it offensive. Our study was reviewed by our Institutional Review Board and was deemed exempt.
Pilot Study and Annotator Demographics
Since MTurk has a documented liberal bias [44], we take a cautious first step and conduct a pilot study with 270 examples (nine batches) to estimate political representation. 117 unique annotators participated in this pilot. As shown in Figure 5, we observe that the annotator pool has a strong Democrat bias (66% Democrat, 23% Republican, and 11% Independent).
Overall, we observe the following annotator demographics: • Political Leaning: 66% (76) registered as Democrats, 23% (27) as Republicans, and 11% (14) as an Independent. See Figure 5.
• Gender: An equal split between female (55% Dem, 30% Rep, and 15% Ind) and male (40% Dem, 35% Rep, and 25% Ind).
• Race: Predominantly White or Caucasian, with limited representations from the Asian, Black or African American, and American Indians communities.
In order to ensure comparable political representation, we set restrictions for the subsequent annotation batches to have at least six annotators from each political identity (18 annotators in total). The remaining two spots are given to the annotators who first accept the jobs regardless of their political identity. We also re-run batches from our pilot study to ensure they all contain at least six annotators from each political identity.
Final Study
Figure 5 shows the distribution of the annotators based on their political leaning for both the pilot and the final study. Adding the political-identity-based restrictions, implemented as a quota on the survey instrument, aided in building a representative dataset for this work. We conducted a total of 37 batches of our survey, with 30 items each, following the same survey structure as the pilot study.
Demographics of the final study annotator pool:
• Political Leaning: 35% (267) registered as Democrats, 35% (266) as Republicans, and 30% (220) as an Independent. See Figure 5.
• Gender: 47% Female, 53% male, and one non-binary annotator.
• Race: Similar to the pilot study, majority of the annotators are White or Caucasian, with limited representations from the Asian, Black or African American, and American Indians communities (in line with [48]).
• Age: The study had annotators from all age groups above 18 years; the majority of the annotators were from the 25-34 age group.
We include detailed demographic analysis in our appendix.
Annotator Compensation:
We compensate the annotator 0.1 USD for each instance. Each batch with 30 instances would thus fetch 3 USD. We allow the annotators to leave a comment on our study at the end. We did not receive any complaint that the pay was low. Rather, many annotators left comments saying that they found our study interesting.
Human Moderators and Vicarious Offense
We now contrast machine moderators and human moderators with different political identities, considering both their first-person and vicarious perspectives of offense. All disagreement analyses are based on aggregate majority labels from the relevant moderator groups. We first analyze D general in depth. Contrastive findings from D abortion and D gun are summarized in Section 5.3.
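As a sketch of how these aggregate comparisons can be derived from per-annotator labels (the data frame columns and the simple majority rule are assumptions for illustration, not the authors' exact aggregation):

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# labels: one row per (comment_id, annotator) with columns
# `party` (e.g., "Democrat", "Republican", "Independent") and `offensive` (bool).
def majority_label(labels: pd.DataFrame, party: str) -> pd.Series:
    sub = labels[labels["party"] == party]
    return sub.groupby("comment_id")["offensive"].agg(lambda x: x.mean() >= 0.5)

def group_kappa(labels: pd.DataFrame, party_a: str, party_b: str) -> float:
    a, b = majority_label(labels, party_a), majority_label(labels, party_b)
    joined = pd.concat([a, b], axis=1, keys=["a", "b"]).dropna()
    return cohen_kappa_score(joined["a"], joined["b"])
```

The same helper applies to the vicarious-offense analyses by treating, say, Democrats' predictions of Republican labels as their own moderator group.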
RQ1:
With which political party are machine moderators most closely aligned? Recent reports indicate increasing scrutiny of big-tech platforms' content moderation policies [50]. The discussions center around two diametrically opposite positions: these platforms are not doing enough, or they need to do more. On one hand, certain camps believe that web platforms unjustly censor believers of a particular political persuasion more than others [26]. On the other hand, different groups believe that poor platform censorship led to some of the recent political crises [51]. Table 1 examines which political party machine moderators are most aligned with. We observe that while all three political identities have comparable agreement with the machines on what is offensive, Republicans align slightly more with the machines on what is not offensive. Existing literature hypothesizes that conservatives may focus more on linguistic purity than liberals when determining toxic content [44]. We note that of all the instances in D general that contain the keyword fuck, 94% were marked as offensive by the Republicans, whereas the Democrats marked 88% of them as offensive.
Consider the following examples, which contain no profanity yet were marked as offensive by the Democrats but not by the Republicans: • More fear-mongering. only .oo6% of people die of covid. Russia has no reason to lie about the death total.
• Diversity====Less White people, White shaming. I see everyone as Americans not by their Skin color, The real Racist say black, white, brown pride.
It is possible that the Democrats found these examples offensive because they did not align with their liberal views.
We notice that some obfuscated profanity escaped the machine moderators while the human moderator groups caught it (e.g., cocksuxxxer, or 3-letter company). We also observe that humans' access to deeper context allows them to respond differently: a dismissive comment about Caitlyn Jenner, a transgender Olympic gold medalist, is unanimously marked as offensive by all groups of human moderators, whereas the machine moderators marked it as notOffensive (see Figure 1).
RQ2: How aligned are different political identities on the perception of offense in first-person? Table 2 summarizes the confusion matrices between human annotators of different political identities. We first observe that for any pair of political identities, the human-human agreement is higher than the best human-machine agreement achieved in our experiment. We next note that while human-human agreement is generally higher than human-machine agreement, the highest human-human Cohen's κ, achieved between the Independents and Democrats (0.43), is still at the lower end and is considered moderate agreement [52]. Within the political identity pairs, the Democrats and the Independents are most aligned in their perception of offense. This result is not surprising: historically, Independents lean more toward the Democrats than the Republicans, as evidenced by the Gallup survey in which 47.7 ± 2.98% of the Independents report leaning Democrat as opposed to 42.3 ± 3.08% reporting leaning Republican [46].
RQ3: Which political identity best predicts vicarious offense? In our vicarious offense study, we request the annotators to predict out-group offense labels. Hence, we have information about say, what Democrats believe Republicans find as offensive. Since we also have first-person perspectives from the Republicans, we can tally this information with the first-person perspective to find out how well annotators understand the political others.
For notational clarity, we indicate the vicarious offense predictor in superscripts: Republicans^Dem means Democrats predicting what Republicans would find offensive. Table 3 indicates that Republicans are the worst predictors of vicarious offense. In both cases of predicting vicarious offense for the Democrats and the Independents, they do worse than the Independents and the Democrats, respectively. We further note that while Independents and Democrats can predict vicarious offense for each other reasonably well, they fare poorly in predicting what Republicans would find offensive. Hence, in terms of vicarious offense, Republicans are the least understood political group while they also struggle the most to understand others.
Finally, Table 4 presents a compelling result suggesting why inter-rater agreement could be misleading.
Issue focused analysis
We observe considerable disagreement among human moderators across different political identities while annotating D general. When we consider sensitive issues, the disagreement worsens. Table 5 contrasts the pairwise disagreement between human-human moderators and human-machine moderators across D general, D abortion, and D gun. We first observe that machine-human agreement is substantially lower on the issue-specific corpora across all political identities. We next note that some of the moderator pairs achieve negative Cohen's κ values on D gun even for first-person offense, indicating considerable disagreement [52]. The pairwise group dynamics behave differently on different issues. While Independents exhibit considerable agreement with Republicans on D abortion, they show little or no agreement on D gun. Interestingly, while neither the Republicans nor the Independents agree much with the Democrats on D abortion, these two groups (Independents and Republicans) are well-aligned on what Democrats would find offensive in D abortion. However, once again, when we tally that with what the Democrats actually find offensive, we see that the agreement on the pairs (Democrats, Democrats^Rep) and (Democrats, Democrats^Ind) is substantially lower.
On Censorship and Trust in Democracy
Beyond first-person and vicarious offense, we ask the annotators a range of questions on censorship and election fairness. We explicitly ask every annotator whether she believes the comment should be allowed on social media. We find that of the comments the individual political groups marked as offensive in D general, the Democrats, Republicans, and Independents believe 23%, 23%, and 17%, respectively, should not be allowed on social media. This implies that in general political discussions, Independents are more tolerant than the two political extremes regarding what should be allowed on social media. On D gun, the Republicans exhibit slightly more intolerance than the Democrats and Independents and want to remove 26% of the offensive content, as opposed to 23% and 14% for the Democrats and the Independents, respectively. However, on D abortion the Democrats exhibit more intolerance, seeking to remove 23% of the offensive content as opposed to 21% for both the Independents and the Republicans. We note that Independents are much more sensitive to reproductive rights than to gun control/rights or general political discussions. Our study thus suggests that content moderation is a highly nuanced topic where different political groups can exhibit different levels of tolerance to offensive content depending on the issue.
What is offensive, and whether an offensive post should be removed from social media, can be subjective, as our study indicates. However, when we ask the annotators about the fairness of the 2016 and 2020 elections, we notice a more worrisome issue: eroding trust in democracy. Figure 6 reveals that 5% and 10% of the annotators believed that the 2016 and 2020 elections, respectively, were not conducted in a fair and democratic manner, with Democrats doubting the fairness of the 2016 election more and Republicans doubting the 2020 election more. This result sums up the deep, divergent political divide between the left and the right in the US, and it asks all the stakeholders (social media platforms, social web users, media, academic and industry researchers, and of course the politicians) to think about how to improve political discourse and bring back trust in democracy.
Discussions
In this paper, we analyze two under-explored aspects of moderating social media discussions of politics: disagreement between machine moderators, and disagreement between human moderators. Our key contributions are (1) a comprehensive noise audit of machine moderators; (2) an offensive speech data set with transparent annotator details; (3) a novel framework of vicarious offense; and (4) a focused analysis of the moderation challenges involved in dealing with sensitive social issues such as reproductive rights and gun control/rights.
Our study raises the following points to ponder. • Revisiting the traditional supervised learning paradigm and the existence of gold-standard labels: The traditional supervised learning paradigm assumes the existence of gold-standard labels. While recent lines of work have investigated disagreement among annotators that stems from the inherent subjectivity of the task [10], our analyses of political discussions on highly sensitive issues reveal that there can be practically no agreement among annotator groups and, depending on whom we ask, we can end up with wildly different gold-standard labels, reminiscent of alternative facts [53]. Our work, in its current form, is primarily descriptive. We address the elephant in the room and quantify the challenges of offensive content moderation in political discourse from both the machine and the human moderators' perspectives. We believe our data set will open the gates for modeling ideas that consider multiple vantage points and yield more robust systems.
• Fine-grained political identities: Contrary to most existing work on US politics that sidesteps the issue of dealing with the political middle, our study considers the Independents thus setting the stage for exploring more nuanced political identities. For example, a human moderator can be fiscally conservative but socially liberal. Understanding both first-person and vicarious perspectives of offense considering more fine-grained political identities merit deeper exploration.
• Issue-focused analysis: In Section 5.3, our study barely scratches the surface of issue-focused analysis. Studies show that there is political disagreement between the left and the right on several other policy issues, including immigration [54], climate change [55], and racism in policing [45], to name a few. We believe our study will open the gates for follow-on research expanding to more issues.
• Style vs content: We observed an important interplay between the style and the content of posts, particularly for polarizing topics and political preference. The analysis of vicarious offense reveals that the topic and targets of a potentially offensive post (e.g., a politician, a political party) seem to be more important to human moderators than to machine moderators, as automatic methods often rely on the presence of profanity to label a post as offensive. This observation is in line with data sets and annotation taxonomies that consider the target of offensive posts as the central aspect of offensive speech identification, such as OLID [28], HASOC [30], and others. The new vicarious offense data set presented in this paper is a valuable resource that can be used for further analysis.
• Beyond US politics and political polarization: Finally, the framework of vicarious offense has a broader appeal. While we apply this to US politics, there is rising political polarization in many other countries [56,57]. It does not also have to be always politics. The vicarious offense framework can also be used to understand religious polarization [58,59,60].
Ethics Statement
Our study was reviewed by our Institutional Review Board and was deemed exempt. Our YouTube data is collected using the publicly available YouTube API. We do not collect or reveal any identifiable information about the annotators. Content moderation can be potentially gruesome and affect the mental health of the moderators [61]. We maintain a small batch size (30 YouTube comments), one-third of which is marked as notOffensive by almost all machine moderators, to minimize the stress on annotators. In fact, many of the annotators left a comment at the end of the study indicating that they enjoyed this task. While our goal is to broaden our understanding of first-person and vicarious offense perception, with the potential to robustify machine moderators, any content filtering system can be tweaked for malicious purposes. For instance, an inverse filter can be made that filters out notOffensive posts while filtering in the offensive ones. Table 6 lists a few instances that machine moderators marked as notOffensive but human moderators belonging to specific political identities marked as offensive. Table 7 lists a few instances that machine moderators marked as offensive but human moderators belonging to specific political identities marked as notOffensive.
Figure 2: Distribution of political identities as reported in historical Gallup surveys [46] over the last 19 years.

Figure 3: Agreement between machine moderators. A cell (i, j) presents the Cohen's κ agreement between machine moderators M_i and M_j. Majority is a machine moderator that takes the majority vote of the nine individual machine moderators.

Figure 4: The j-th bar represents the percentage of overall comments that are flagged by j machine moderators.

Figure 5: Distribution of the annotators based on their political leaning. Blue denotes Democrats, red denotes Republicans, and yellow denotes Independents. The smaller doughnut represents our pilot study with 117 annotators (76 Dem, 27 Rep, and 14 Ind) and the larger doughnut represents the final study with 753 annotators (267 Dem, 266 Rep, and 220 Ind).

Figure 6: Distributions of annotator responses when the study asked whether the 2016 and 2020 presidential elections were conducted in a fair and democratic manner. Percentages are computed on the entire population of annotators who participated in the study.

Figure 7: Temporal trend showing the number of comments made on news videos on the three news networks' official YouTube channels over time.
Table 1: Confusion matrices between machines and humans (cell labels read machine verdict / human verdict).
(a) Machines vs. Independents, Cohen's κ = 0.17: notOffensive/notOffensive 8.72%, notOffensive/offensive 25.98%, offensive/notOffensive 6.58%, offensive/offensive 58.72%.
(b) Machines vs. Democrats, Cohen's κ = 0.19: notOffensive/notOffensive 10.09%, notOffensive/offensive 24.62%, offensive/notOffensive 8.12%, offensive/offensive 57.17%.
(c) Machines vs. Republicans, Cohen's κ = 0.23: notOffensive/notOffensive 11.45%, notOffensive/offensive 23.25%, offensive/notOffensive 7.86%, offensive/offensive 57.44%.

Table 2: Confusion matrices between humans with different political identities (cell labels read first group's verdict / second group's verdict).
(a) Democrats vs. Republicans, Cohen's κ = 0.34: notOffensive/notOffensive 8.64%, notOffensive/offensive 9.57%, offensive/notOffensive 10.68%, offensive/offensive 71.11%.
(b) Republicans vs. Independents, Cohen's κ = 0.39: notOffensive/notOffensive 8.55%, notOffensive/offensive 10.78%, offensive/notOffensive 6.76%, offensive/offensive 73.91%.
(c) Independents vs. Democrats, Cohen's κ = 0.43: notOffensive/notOffensive 8.81%, notOffensive/offensive 6.50%, offensive/notOffensive 9.41%, offensive/offensive 75.28%.

Table 3: Confusion matrices between vicarious offense and ground truth (cell labels read predicting group's verdict / target group's own verdict; Republicans^Dem denotes Democrats' predictions of what Republicans would find offensive, and analogously for the other superscripts).
(a) Republicans^Dem vs. Republicans, Cohen's κ = 0.37: notOffensive/notOffensive 9.83%, notOffensive/offensive 10.51%, offensive/notOffensive 9.49%, offensive/offensive 70.17%.
(b) Democrats^Rep vs. Democrats, Cohen's κ = 0.38: notOffensive/notOffensive 8.12%, notOffensive/offensive 7.09%, offensive/notOffensive 10.09%, offensive/offensive 74.7%.
(c) Independents^Rep vs. Independents, Cohen's κ = 0.37: notOffensive/notOffensive 7.27%, notOffensive/offensive 8.30%, offensive/notOffensive 8.04%, offensive/offensive 76.39%.
(d) Republicans^Ind vs. Republicans, Cohen's κ = 0.35: notOffensive/notOffensive 7.27%, notOffensive/offensive 5.82%, offensive/notOffensive 12.06%, offensive/offensive 74.85%.
(e) Democrats^Ind vs. Democrats, Cohen's κ = 0.46: notOffensive/notOffensive 8.04%, notOffensive/offensive 3.59%, offensive/notOffensive 10.18%, offensive/offensive 78.19%.
(f) Independents^Dem vs. Independents, Cohen's κ = 0.43: notOffensive/notOffensive 7.96%, notOffensive/offensive 7.44%, offensive/notOffensive 7.36%, offensive/offensive 77.24%.
Table 4: Contrasting vicarious offense predictions.

Table 5: Disagreement across different annotation data sets (pairwise Cohen's κ between the two moderator groups in each row).
Moderators | Moderators | D general | D abortion | D gun
Machines | Republicans | 0.23 | 0.04 | 0.07
Machines | Democrats | 0.19 | 0.06 | 0.11
Machines | Independents | 0.17 | 0.02 | -0.02
Republicans | Democrats | 0.34 | 0.05 | -0.01
Democrats | Independents | 0.43 | 0.03 | -0.04
Independents | Republicans | 0.39 | 0.36 | -0.03
Democrats | Democrats^Rep | 0.38 | -0.05 | -0.02
Democrats | Democrats^Ind | 0.46 | 0.00 | -0.03
Republicans | Republicans^Dem | 0.37 | -0.01 | 0.00
Republicans | Republicans^Ind | 0.35 | 0.15 | -0.03
Independents | Independents^Rep | 0.37 | 0.18 | 0.04
Independents | Independents^Dem | 0.43 | -0.04 | -0.04
Republicans^Dem | Republicans^Ind | 0.49 | 0.01 | -0.04
Democrats^Ind | Democrats^Rep | 0.44 | 0.57 | 0.10
Independents^Rep | Independents^Dem | 0.37 | 0.06 | -0.05
Table 6: Illustrative examples highlighting disagreement between machine moderators and human moderators on D abortion. The blue, yellow, and red cells consider Democrats, Independents, and Republicans human moderators, respectively.
Machine moderators: notOffensive | Human moderators: offensive
So a woman wants an abortion, and SHE happens to have an airplane ticket that SHE got. The airline company can be sued? Maybe read the U.S. Constitution. Abortion is not in it, unlike the 2nd Amendment. Please stop showing planned parenthood ads. I don't agree with abortion. republicans are so far right that facts don't matter anymore. it is political theater. 0 dollars of Federal money has been spent on abortion. the war is on abortions. other people trying to dictate how other people should live their lives.

Table 7: Illustrative examples highlighting disagreement between machine moderators and human moderators on D general. The blue and red cells consider Democrats and Republicans human moderators, respectively.
Machine moderators: offensive | Human moderators: notOffensive
Republicunts n Evangelicunts are a scourge to Jesus. Babies snatched from the parents is worse then abortion u shameless bastards. President Pussygrabber is a Star. We can allow him to murder a woman having an abortion on 5th Avenue and Dickbag trump is horrible for our country. He is a lying con man who have Americas dumbest supporters. its ALL Bidens fault with his open borders!!! TRUMP WAS WAY BETTER THAN THIS IDIOT YOU STUPID DEMS NEED TO LOOK AT WHATS HAPPENING TO EUROPE. This really stinks. The democraps really schitf their pants now!!!

8.2 Dataset Details
The dataset will be available soon on https://github.com/Homan-Lab/noise-audit-dataset/
Available at https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge
Anonymized version of this survey can be found on this link https://anonymous.4open.science/r/www23-667D
Annotator Demographics

The overall annotator demographics are described in Figure 8.

Figure 8: Overall demographic distribution of the annotator pool. The colors denote the political identities.

Wordclouds

Figure 9: Word clouds generated from the three datasets used for the human annotation study (one panel shows the dataset without any keyword-based filtering).
Rhetoric, civility, and community: Political debate on computer bulletin boards. W Thomas, Benson, Communication Quarterly. 443Thomas W Benson. Rhetoric, civility, and community: Political debate on computer bulletin boards. Communica- tion Quarterly, 44(3):359-378, 1996.
You can't stay here: The efficacy of reddit's 2015 ban examined through hate speech. Eshwar Chandrasekharan, Umashanthi Pavalanathan, Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein, Eric Gilbert, Proceedings of the ACM on Human-Computer Interaction. 1CSCWEshwar Chandrasekharan, Umashanthi Pavalanathan, Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein, and Eric Gilbert. You can't stay here: The efficacy of reddit's 2015 ban examined through hate speech. Proceedings of the ACM on Human-Computer Interaction, 1(CSCW):1-22, 2017.
Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter. Valerio Basile, Cristina Bosco, Elisabetta Fersini, Nozza Debora, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, Manuela Sanguinetti, 13th International Workshop on Semantic Evaluation. Association for Computational LinguisticsValerio Basile, Cristina Bosco, Elisabetta Fersini, Nozza Debora, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, Manuela Sanguinetti, et al. Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter. In 13th International Workshop on Semantic Evaluation, pages 54-63. Association for Computational Linguistics, 2019.
Fanning the Flames of Hate: Social Media and Hate Crime. Karsten Müller, Carlo Schwarz, Journal of the European Economic Association. 194Karsten Müller and Carlo Schwarz. Fanning the Flames of Hate: Social Media and Hate Crime. Journal of the European Economic Association, 19(4):2131-2167, 10 2020.
Cyber bullying: Bullying in the digital age. Michael A Fauman, American Journal of Psychiatry. 1656Michael A Fauman. Cyber bullying: Bullying in the digital age. American Journal of Psychiatry, 165(6):780-781, 2008.
Apple suspends Parler from App Store. Sarah Perez, Brian Heater, TechCrunchSarah Perez and Brian Heater. Apple suspends Parler from App Store, 2021. TechCrunch.
Are you a racist or am i seeing things? annotator influence on hate speech detection on twitter. Zeerak Waseem, Proceedings of the first workshop on NLP and computational social science. the first workshop on NLP and computational social scienceZeerak Waseem. Are you a racist or am i seeing things? annotator influence on hate speech detection on twitter. In Proceedings of the first workshop on NLP and computational social science, pages 138-142, 2016.
Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis. Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki, Proceedings of NLP4CMC. NLP4CMCBjörn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, and Michael Wojatzki. Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis. In Proceedings of NLP4CMC, 2016.
Voice for the Voiceless: Active Sampling to Detect Comments Supporting the Rohingyas. * Shriphani Palakodety, Ashiqur R Khudabukhsh, * , Jaime G Carbonell, The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI. Shriphani Palakodety * , Ashiqur R. KhudaBukhsh * , and Jaime G. Carbonell. Voice for the Voiceless: Active Sampling to Detect Comments Supporting the Rohingyas. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI, pages 454-462, 2020.
Inherent disagreements in human textual inferences. Ellie Pavlick, Tom Kwiatkowski, Transactions of the Association for Computational Linguistics. 7Ellie Pavlick and Tom Kwiatkowski. Inherent disagreements in human textual inferences. Transactions of the Association for Computational Linguistics, 7:677-694, 2019.
How well do hate speech, toxicity, abusive and offensive language classification models generalize across datasets? Information Processing & Management. Paula Fortuna, Juan Soler-Company, Leo Wanner, 58102524Paula Fortuna, Juan Soler-Company, and Leo Wanner. How well do hate speech, toxicity, abusive and offensive language classification models generalize across datasets? Information Processing & Management, 58(3):102524, 2021.
Affect, not ideologya social identity perspective on polarization. Shanto Iyengar, Gaurav Sood, Yphtach Lelkes, Public opinion quarterly. 763Shanto Iyengar, Gaurav Sood, and Yphtach Lelkes. Affect, not ideologya social identity perspective on polarization. Public opinion quarterly, 76(3):405-431, 2012.
Analyzing polarization in social media: Method and application to tweets on 21 mass shootings. Dorottya Demszky, Nikhil Garg, Rob Voigt, James Zou, Jesse Shapiro, Matthew Gentzkow, Dan Jurafsky, Proceedings of NAACL-HLT 2019. NAACL-HLT 2019ACLDorottya Demszky, Nikhil Garg, Rob Voigt, James Zou, Jesse Shapiro, Matthew Gentzkow, and Dan Jurafsky. Analyzing polarization in social media: Method and application to tweets on 21 mass shootings. In Proceedings of NAACL-HLT 2019, pages 2970-3005. ACL, June 2019.
We Don't Speak the Same Language: Interpreting Polarization through Machine Translation. R Ashiqur, * Khudabukhsh, Rupak Sarkar, * , Mark S Kamlet, Tom M Mitchell, Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021. AAAI Press2021Ashiqur R. KhudaBukhsh * , Rupak Sarkar * , Mark S. Kamlet, and Tom M. Mitchell. We Don't Speak the Same Language: Interpreting Polarization through Machine Translation. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, pages 14893-14901. AAAI Press, 2021.
Everyone's voice matters: Quantifying annotation disagreement using demographic information. Ruyuan Wan, Jaehyung Kim, Dongyeop Kang, Ruyuan Wan, Jaehyung Kim, and Dongyeop Kang. Everyone's voice matters: Quantifying annotation disagreement using demographic information. 2023.
Hate speech detection is not as easy as you may think: A closer look at model validation. Aymé Arango, Jorge Pérez, Barbara Poblete, Proceedings of the 42nd international acm sigir conference on research and development in information retrieval. the 42nd international acm sigir conference on research and development in information retrievalAymé Arango, Jorge Pérez, and Barbara Poblete. Hate speech detection is not as easy as you may think: A closer look at model validation. In Proceedings of the 42nd international acm sigir conference on research and development in information retrieval, pages 45-54, 2019.
All you need is" love" evading hate speech detection. Tommi Gröndahl, Luca Pajola, Mika Juuti, Mauro Conti, N Asokan, Proceedings of the 11th ACM workshop on artificial intelligence and security. the 11th ACM workshop on artificial intelligence and securityTommi Gröndahl, Luca Pajola, Mika Juuti, Mauro Conti, and N Asokan. All you need is" love" evading hate speech detection. In Proceedings of the 11th ACM workshop on artificial intelligence and security, pages 2-12, 2018.
Are chess discussions racist? an adversarial hate speech data set (student abstract). Rupak Sarkar, Ashiqur R Khudabukhsh, Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021. AAAI Press2021Rupak Sarkar and Ashiqur R. KhudaBukhsh. Are chess discussions racist? an adversarial hate speech data set (student abstract). In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, pages 15881-15882. AAAI Press, 2021.
Detecting cross-geographic biases in toxicity modeling on social media. Sayan Ghosh, Dylan K Baker, David Jurgens, Vinodkumar Prabhakaran, Proceedings of the Seventh Workshop on Noisy User-generated Text, W-NUT 2021. the Seventh Workshop on Noisy User-generated Text, W-NUT 2021Association for Computational LinguisticsSayan Ghosh, Dylan K. Baker, David Jurgens, and Vinodkumar Prabhakaran. Detecting cross-geographic biases in toxicity modeling on social media. In Proceedings of the Seventh Workshop on Noisy User-generated Text, W-NUT 2021, Online, November 11, 2021, pages 313-328. Association for Computational Linguistics, 2021.
Noise: A flaw in human judgment. Daniel Kahneman, Olivier Sibony, Cass R Sunstein, 2021Little, BrownDaniel Kahneman, Olivier Sibony, and Cass R Sunstein. Noise: A flaw in human judgment. Little, Brown, 2021.
Fear and loathing across party lines: New evidence on group polarization. Shanto Iyengar, Sean J Westwood, American Journal of Political Science. 593Shanto Iyengar and Sean J Westwood. Fear and loathing across party lines: New evidence on group polarization. American Journal of Political Science, 59(3):690-707, 2015.
Political homophily in social relationships: Evidence from online dating behavior. A Gregory, Neil Huber, Malhotra, The Journal of Politics. 791Gregory A Huber and Neil Malhotra. Political homophily in social relationships: Evidence from online dating behavior. The Journal of Politics, 79(1):269-283, 2017.
Does politics influence hiring? evidence from a randomized experiment. Karen Gift, Thomas Gift, Political Behavior. 373Karen Gift and Thomas Gift. Does politics influence hiring? evidence from a randomized experiment. Political Behavior, 37(3):653-675, 2015.
Political sectarianism in america. J Eli, Finkel, A Christopher, Mina Bail, Cikara, H Peter, Shanto Ditto, Samara Iyengar, Lilliana Klar, Mason, C Mary, Brendan Mcgrath, Nyhan, G David, Rand, Science. 3706516Eli J Finkel, Christopher A Bail, Mina Cikara, Peter H Ditto, Shanto Iyengar, Samara Klar, Lilliana Mason, Mary C McGrath, Brendan Nyhan, David G Rand, et al. Political sectarianism in america. Science, 370(6516):533-536, 2020.
Fringe news networks: Dynamics of US news viewership following the 2020 presidential election. R Ashiqur, Rupak Khudabukhsh, Mark S Sarkar, Tom M Kamlet, Mitchell, WebSci '22: 14th ACM Web Science Conference 2022. ACMAshiqur R. KhudaBukhsh, Rupak Sarkar, Mark S. Kamlet, and Tom M. Mitchell. Fringe news networks: Dynamics of US news viewership following the 2020 presidential election. In WebSci '22: 14th ACM Web Science Conference 2022, pages 269-278. ACM, 2022.
Musk says Twitter is biased against conservatives -facts say otherwise. Paul M Barrett, The Hill2022Paul M. Barrett. Musk says Twitter is biased against conservatives -facts say otherwise, 2022. The Hill.
Multilingual offensive language identification with cross-lingual embeddings. Tharindu Ranasinghe, Marcos Zampieri, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)OnlineAssociation for Computational LinguisticsTharindu Ranasinghe and Marcos Zampieri. Multilingual offensive language identification with cross-lingual embeddings. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5838-5844, Online, November 2020. Association for Computational Linguistics.
Predicting the type and target of offensive posts in social media. Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, Ritesh Kumar, NAACL-HLT. Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. Predicting the type and target of offensive posts in social media. In NAACL-HLT, pages 1415-1420, June 2019.
Automated hate speech detection and the problem of offensive language. Thomas Davidson, Dana Warmsley, Michael Macy, Ingmar Weber, Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM '17. the 11th International AAAI Conference on Web and Social Media, ICWSM '17Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. Automated hate speech detection and the problem of offensive language. In Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM '17, pages 512-515, 2017.
Overview of the hasoc track at fire 2020: Hate speech and offensive language identification in tamil, malayalam, hindi, english and german. Thomas Mandl, Sandip Modha, Anand Kumar, M , Bharathi Raja Chakravarthi, Forum for Information Retrieval Evaluation. Thomas Mandl, Sandip Modha, Anand Kumar M, and Bharathi Raja Chakravarthi. Overview of the hasoc track at fire 2020: Hate speech and offensive language identification in tamil, malayalam, hindi, english and german. In Forum for Information Retrieval Evaluation, pages 29-32, 2020.
Hatexplain: A benchmark dataset for explainable hate speech detection. Binny Mathew, Punyajoy Saha, Chris Seid Muhie Yimam, Pawan Biemann, Animesh Goyal, Mukherjee, Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event. AAAI Press2021Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. Hatexplain: A benchmark dataset for explainable hate speech detection. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 14867-14875. AAAI Press, 2021.
Evaluating aggression identification in social media. Ritesh Kumar, Atul Kr, Shervin Ojha, Marcos Malmasi, Zampieri, Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying. the Second Workshop on Trolling, Aggression and CyberbullyingMarseille, FranceEuropean Language Resources Association (ELRARitesh Kumar, Atul Kr. Ojha, Shervin Malmasi, and Marcos Zampieri. Evaluating aggression identification in social media. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, pages 1-5, Marseille, France, May 2020. European Language Resources Association (ELRA).
A benchmark dataset for learning to intervene in online hate speech. Jing Qian, Anna Bethke, Yinyin Liu, Elizabeth Belding, William Yang Wang, EMNLP-IJCNLP. Jing Qian, Anna Bethke, Yinyin Liu, Elizabeth Belding, and William Yang Wang. A benchmark dataset for learning to intervene in online hate speech. In EMNLP-IJCNLP, pages 4755-4764, 2019.
. Laura Hanu, Unitary Team, Detoxify, Github, 2020Laura Hanu and Unitary team. Detoxify. Github. https://github.com/unitaryai/detoxify, 2020.
How MSNBC Became Fox's Liberal Evil Twin. Alessandra Stanley, 1Alessandra Stanley. How MSNBC Became Fox's Liberal Evil Twin, 2012. Online; accessed 01-September-2020.
Weapons of mass distortion: The coming meltdown of the liberal media. Brent Bozell, National Review. L Brent Bozell. Weapons of mass distortion: The coming meltdown of the liberal media. National Review, 2004.
Selective exposure to cable news and immigration in the us: The relationship between fox news, cnn, and attitudes toward mexican immigrants. Homero Gil De Zúñiga, Teresa Correa, Sebastian Valenzuela, Journal of Broadcasting & Electronic Media. 564Homero Gil de Zúñiga, Teresa Correa, and Sebastian Valenzuela. Selective exposure to cable news and immigration in the us: The relationship between fox news, cnn, and attitudes toward mexican immigrants. Journal of Broadcasting & Electronic Media, 56(4):597-615, 2012.
Agenda setting in the partisan tv news context: Attribute agenda setting and polarized evaluation of presidential candidates among viewers of nbc, cnn, and fox news. Ki Deuk Hyun, Soo Jung Moon, Journalism & Mass Communication Quarterly. 933Ki Deuk Hyun and Soo Jung Moon. Agenda setting in the partisan tv news context: Attribute agenda setting and polarized evaluation of presidential candidates among viewers of nbc, cnn, and fox news. Journalism & Mass Communication Quarterly, 93(3):509-529, 2016.
Racial bias in hate speech and abusive language detection datasets. Thomas Davidson, Debasmita Bhattacharya, Ingmar Weber, Proceedings of the Third Workshop on Abusive Language Online. the Third Workshop on Abusive Language OnlineThomas Davidson, Debasmita Bhattacharya, and Ingmar Weber. Racial bias in hate speech and abusive language detection datasets. In Proceedings of the Third Workshop on Abusive Language Online, pages 25-35, 2019.
Hate speech and constitutional protection: Priming values of equality and freedom. Gloria Cowan, Miriam Resendez, Elizabeth Marshall, Ryan Quist, Journal of Social Issues. 582Gloria Cowan, Miriam Resendez, Elizabeth Marshall, and Ryan Quist. Hate speech and constitutional protection: Priming values of equality and freedom. Journal of Social Issues, 58(2):247-263, 2002.
Whites see racism as a zero-sum game that they are now losing. I Michael, Norton, R Samuel, Sommers, Perspectives on Psychological science. 63Michael I Norton and Samuel R Sommers. Whites see racism as a zero-sum game that they are now losing. Perspectives on Psychological science, 6(3):215-218, 2011.
Group-based differences in perceptions of racism: What counts, to whom, and why? Social and personality psychology compass. R Evelyn, Mary C Carter, Murphy, 9Evelyn R Carter and Mary C Murphy. Group-based differences in perceptions of racism: What counts, to whom, and why? Social and personality psychology compass, 9(6):269-280, 2015.
Vinodkumar Prabhakaran, Aida Mostafazadeh Davani, and Mark Diaz. On releasing annotator-level labels and information in datasets. In Proceedings of the Joint 15th Linguistic Annotation Workshop (LAW) and 3rd Designing Meaning Representations (DMR) Workshop, pages 133-138. Association for Computational Linguistics, November 2021.
Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, Noah A Smith, Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational LinguisticsMaarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, and Noah A. Smith. Annotators with attitudes: How annotator beliefs and identities bias toxic language detection. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5884-5906. Association for Computational Linguistics, 2022.
A murder and protests, the capitol riot, and the chauvin trial: Estimating disparate news media stance. Sujan Dutta, Beibei Li, Daniel S Nagin, Ashiqur R Khudabukhsh, Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22. the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22AI for GoodSujan Dutta, Beibei Li, Daniel S. Nagin, and Ashiqur R. KhudaBukhsh. A murder and protests, the capitol riot, and the chauvin trial: Estimating disparate news media stance. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pages 5059-5065, 2022. AI for Good.
Gallup. Party Affiliation, 2022. Gallup. Gallup. Party Affiliation, 2022. Gallup.
Corpus-level evaluation for event QA: the indiapoliceevents corpus covering the 2002 gujarat violence. Andrew Halterman, Katherine A Keith, Muhammad Sheikh, Brendan O' Sarwar, Connor, ACL/IJCNLP 2021, Findings of ACL. Andrew Halterman, Katherine A. Keith, Sheikh Muhammad Sarwar, and Brendan O'Connor. Corpus-level evaluation for event QA: the indiapoliceevents corpus covering the 2002 gujarat violence. In ACL/IJCNLP 2021, Findings of ACL, pages 4240-4253, 2021.
| [
"https://github.com/Homan-Lab/noise-audit-dataset/",
"https://github.com/unitaryai/detoxify,"
] |
[
"CPTAM: Constituency Parse Tree Aggregation Method",
"CPTAM: Constituency Parse Tree Aggregation Method"
] | [
"Adithya Kulkarni \nIowa State University\nAmesIowaUSA\n",
"Nasim Sabetpour \nIowa State University\nAmesIowaUSA\n",
"Alexey Markin amarkin@iastate.edu \nIowa State University\nAmesIowaUSA\n",
"Oliver Eulenstein \nIowa State University\nAmesIowaUSA\n",
"Qi Li \nIowa State University\nAmesIowaUSA\n"
] | [
"Iowa State University\nAmesIowaUSA",
"Iowa State University\nAmesIowaUSA",
"Iowa State University\nAmesIowaUSA",
"Iowa State University\nAmesIowaUSA",
"Iowa State University\nAmesIowaUSA"
] | [] | Diverse Natural Language Processing tasks employ constituency parsing to understand the syntactic structure of a sentence according to a phrase structure grammar. Many state-of-the-art constituency parsers are proposed, but they may provide different results for the same sentences, especially for corpora outside their training domains. This paper adopts the truth discovery idea to aggregate constituency parse trees from different parsers by estimating their reliability in the absence of ground truth. Our goal is to consistently obtain high-quality aggregated constituency parse trees. We formulate the constituency parse tree aggregation problem in two steps, structure aggregation and constituent label aggregation. Specifically, we propose the first truth discovery solution for tree structures by minimizing the weighted sum of Robinson-Foulds (RF ) distances, a classic symmetric distance metric between two trees. Extensive experiments are conducted on benchmark datasets in different languages and domains. The experimental results show that our method, CPTAM, outperforms the state-of-the-art aggregation baselines. We also demonstrate that the weights estimated by CPTAM can adequately evaluate constituency parsers in the absence of ground truth. | 10.1137/1.9781611977172.71 | [
"https://arxiv.org/pdf/2201.07905v1.pdf"
] | 246,063,474 | 2201.07905 | dbcb32565a1d6352497c198e2e42f34b643be8e7 |
CPTAM: Constituency Parse Tree Aggregation Method
19 Jan 2022
Adithya Kulkarni
Iowa State University
AmesIowaUSA
Nasim Sabetpour
Iowa State University
AmesIowaUSA
Alexey Markin amarkin@iastate.edu
Iowa State University
AmesIowaUSA
Oliver Eulenstein
Iowa State University
AmesIowaUSA
Qi Li
Iowa State University
AmesIowaUSA
CPTAM: Constituency Parse Tree Aggregation Method
19 Jan 2022
Constituency parse tree · Truth discovery · Optimization
Diverse Natural Language Processing tasks employ constituency parsing to understand the syntactic structure of a sentence according to a phrase structure grammar. Many state-of-the-art constituency parsers are proposed, but they may provide different results for the same sentences, especially for corpora outside their training domains. This paper adopts the truth discovery idea to aggregate constituency parse trees from different parsers by estimating their reliability in the absence of ground truth. Our goal is to consistently obtain high-quality aggregated constituency parse trees. We formulate the constituency parse tree aggregation problem in two steps, structure aggregation and constituent label aggregation. Specifically, we propose the first truth discovery solution for tree structures by minimizing the weighted sum of Robinson-Foulds (RF ) distances, a classic symmetric distance metric between two trees. Extensive experiments are conducted on benchmark datasets in different languages and domains. The experimental results show that our method, CPTAM, outperforms the state-of-the-art aggregation baselines. We also demonstrate that the weights estimated by CPTAM can adequately evaluate constituency parsers in the absence of ground truth.
Introduction
The constituency parse trees (CPTs) display the syntactic structure of a sentence using context-free grammar. CPTs divide the input sentence into phrase structures that belong to a specific grammar category. The available state-of-the-art constituency parsers use different parsing techniques. They are leveraged in various NLP applications like Question Answering, Information Extraction, and word-processing systems. However, due to multiple limitations, the state-of-the-art constituency parsers may make errors, and different constituency parsers may give different results for the same sentence. The conflicts among parsers can confuse users about which parser to use for downstream tasks, as the performance of different parsers can vary significantly across domains and languages. No parser can consistently achieve the best results on all datasets, and it is costly and impractical for users to obtain ground truth parsing results. Table 1 shows the percentage of agreement among the structure of the parsers' outputs on six benchmark datasets, including Penn Treebank-3 [46], OntoNotes (English and Chinese) [32], Genia [31], French Treebank [1], and TIGER Corpus [6]. We execute four parsers, including Berkeley [19], CoreNLP [27], AllenNLP [14], and Hanlp [17], for the English datasets, and three parsers, namely Berkeley, CoreNLP, and Hanlp, for the non-English datasets. On the Penn Treebank-3 dataset, it can be observed that all the parsers agree on only 1.32% of the sentences. A similar observation can be made for the other datasets.
* The first two authors contributed equally to this work.
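To make the notion of structural agreement behind Table 1 concrete, the following minimal Python sketch (our own illustration, not code from the paper; all names are hypothetical) counts the fraction of sentences on which all parsers produce an identical unlabeled bracketing, where a bracketing is represented as a set of clusters (sets of token positions).

```python
# Illustrative sketch (not from the paper): fraction of sentences on which
# all parsers produce identical unlabeled bracketings (cluster sets).
from typing import Dict, FrozenSet, List

Cluster = FrozenSet[int]          # a cluster = set of token positions
Bracketing = FrozenSet[Cluster]   # a tree structure = set of clusters

def agreement_rate(parser_outputs: Dict[str, List[Bracketing]]) -> float:
    """parser_outputs maps a parser name to one bracketing per sentence."""
    per_sentence = zip(*parser_outputs.values())  # group outputs by sentence
    agree = sum(1 for brackets in per_sentence if len(set(brackets)) == 1)
    n_sentences = len(next(iter(parser_outputs.values())))
    return agree / n_sentences

# Toy example with two "parsers" and two sentences.
toy = {
    "parser_a": [frozenset({frozenset({0, 1}), frozenset({0, 1, 2})}),
                 frozenset({frozenset({0, 1, 2})})],
    "parser_b": [frozenset({frozenset({0, 1}), frozenset({0, 1, 2})}),
                 frozenset({frozenset({1, 2}), frozenset({0, 1, 2})})],
}
print(agreement_rate(toy))  # 0.5
```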
To overcome these challenges, we aim to construct a CPT that performs consistently well to represent the constituency grammar of the sentence in the absence of ground truth. Intuitively, such CPTs can be constructed by aggregating the parsing results from the state-of-the-art parsers to keep the common structure among the parsers' output and resolve their conflicts. Therefore, we propose to aggregate CPTs through the truth discovery idea.
Truth discovery has been proposed to conduct weighted aggregation for various applications, where the weights reflect the source reliabilities and are inferred from the data itself without the knowledge of the ground truth [23]. Truth discovery algorithms [22,36,37,48] witness successful applications on the aggregation of categorical and numerical data types. However, the aggregation of the tree data type has never been investigated in this domain.
There are tree aggregation methods proposed in the phylogenetic domain [2,28,43,15] and ensemble methods for parsers [44]. The issue with studies in the phylogenetic domain is that many assumptions are not applicable for CPTs, and none of them consider the constituent label aggregation. The ensemble methods use ground truth to evaluate the quality of the weak learners, whereas, for our task, the aggregation needs to be conducted in the absence of ground truth.
Table 1: Percentage of the sentences that different parsers agree on the tree structure

In this paper, we adopt the truth discovery framework to aggregate CPTs from different input parsers by estimating the parsers' reliability without ground truth. We formulate the constituency parse tree aggregation problem in two steps, structure aggregation and constituent label aggregation. In the structure aggregation step, the key challenges are measuring the distance between trees and constructing the aggregated tree that minimizes that distance. We adopt the Robinson-Foulds (RF) distance, a symmetric difference metric for trees [35], to measure the distance between the aggregated tree and the input CPTs. In practice, we propose an efficient algorithm that can construct the optimal aggregated tree in near-linear time, and we provide theoretical proofs. We adopt the same truth discovery framework in the constituent label aggregation step. Extensive empirical studies demonstrate that the proposed Constituency Parse Tree Aggregation Model (CPTAM) can consistently obtain high-quality aggregated CPTs across different languages and domains. Specifically, we apply the most widely used constituency parsers as the input parsers on six corpora from English, Chinese, French, and German, covering both general and biomedical domains. Our experimental results validate that there is no single parser that can achieve the best results across the corpora. CPTAM consistently obtains high-quality results and significantly outperforms the aggregation baselines. We further examine the estimated weights for the parsers and illustrate that the weight estimation correctly reflects the quality of each parser in the absence of ground truth.
In summary, our main contributions are:
• We identify the pitfalls and challenges in data with tree structures for the task of truth discovery.
• We adopt Robinson-Foulds (RF ) distance to measure the differences among data with tree structures.
• We construct the best aggregation trees by solving an optimization problem and derive the theoretical proofs for the correctness and the efficiency of the algorithm.
• We test the proposed algorithm on real-world datasets, and the results clearly demonstrate the advantages of the approach in finding the accurate tree structures from the multi-sourced input.
Related Works
We summarize the related works in three categories as below.
Truth Discovery
Truth discovery aims to resolve the conflicts from multiple sources [23]. One line of work applies probabilistic methods to model the workers' labeling behavior in crowdsourcing platforms [12,26,21]. Another line of work formulates optimization frameworks that seek to minimize the weighted distance between the source and aggregated results and estimate the source reliability [22,49]. Recent truth discovery methods consider different applications such as aggregation of sequential labels [37,42,29] and aggregation of time series data [47,24,41].
Most of the available truth discovery methods mainly focus on the numerical and categorical data [23], and none of them consider tree structure. Furthermore, the distance measurements introduced in previous works do not support the tree structure. However, the problem of how to aggregate information from trees into one representative tree has been of great importance for various applications [34].
Phylogenetic Tree Aggregation Problem
The tree aggregation problem has been studied in the phylogenetic domain, where trees are branching diagrams showing the evolutionary relationships among biological species or other taxa [13]. The taxa can be described through different types of data (e.g., morphological or biomolecular). Since the inference of phylogenetic trees is an immensely complex problem, practitioners often perform many tree estimation runs with the same or different phylogenetic inference methods. The estimated trees are aggregated using consensus tree techniques [5,8].
A variety of methods have been developed for phylogenetic tree aggregation [2,8]. Some methods conduct aggregation through simple heuristics where the aggregated tree only contains branches with a certain percentage of agreement, such as the majority rule consensus [28], strict consensus [7], semi-strict consensus [15], and the greedy consensus [8]. Further, supertree and median tree approaches have been extensively explored to compute fully binary aggregated trees [4]. Such methods typically seek an output tree that minimizes the overall distance to the input trees. Since the mentioned methods are introduced in the phylogenetic domain, they do not consider the characteristics of parse trees. Classic ensemble learning methods also do not fit our needs since these methods ensemble the classification decisions instead of constructing an aggregation tree.
There are multiple ensemble models for the parsing of syntactic dependencies in the literature, aiming to construct aggregation trees [44,20]. These parsing tree ensemble methods are commonly categorized into two groups. The first group aggregates the base parsers at training time [30,3,44]. The second group aggregates the independently trained models at the prediction time [38,16,20]. One of the common approaches in these ensemble methods is to find the maximum spanning tree (MST) for the directed weighted graph to obtain the optimal dependency structure. Unlike our proposed task, all these methods rely on the ground truth to estimate the parsers' reliability.
Preliminaries
This section briefly overviews the optimization-based problem in truth discovery that we adopt for CPT aggregation. The basic idea is that the inferred truth is likely to be correct if a reliable source provides it. Therefore, the goal is to minimize the overall distance of the aggregated truth to a reliable source [22]. Based on this principle, the optimization framework is defined as follows:
$$\min_{\mathcal{X}^{*}, \mathcal{W}} f(\mathcal{X}^{*}, \mathcal{W}) = \sum_{k=1}^{K} w_k \sum_{i=1}^{N} \sum_{m=1}^{M} d_m(v_{im}^{*}, v_{im}^{k}) \quad \text{s.t.} \quad \delta(\mathcal{W}) = 1, \ \mathcal{W} \in \mathcal{S}, \tag{3.1}$$
where X * and W correspond to the set of truths and the source weight, respectively, and w k refers to the reliability degree of the k-th source. The function dm(·, ·) measures the distance between the sources' observations v k im and the aggregated truths v * im . The regularization function δ(W) is defined to guarantee the weights are always non-zero and positive.
To optimize the objective function Eq. (3.1), the block coordinate descent algorithm is applied by iteratively updating the aggregated truths and source weights, conducting the following two steps.
Source Weight Update. To update the source weight in the model, the values for the truths are considered fixed, and the source weights are computed, which jointly minimizes the objective function as shown in Eq. (3.2).
$$\mathcal{W} \leftarrow \operatorname*{argmin}_{\mathcal{W}} f(\mathcal{X}^{*}, \mathcal{W}) \quad \text{s.t.} \quad \delta(\mathcal{W}) = \sum_{k=1}^{K} \exp(-w_k). \tag{3.2}$$
This function regularizes the value of w k by constraining the sum of exp(−w k ).
Truth Update. At this step, the weight of each source w k is fixed, and the truth is updated for each entry to minimize the difference between the truth and the sources' observations, where sources are weighted by their reliability degrees.
$$v_{im}^{*} \leftarrow \operatorname*{argmin}_{v} \sum_{k=1}^{K} w_k \cdot d_m(v, v_{im}^{k}). \tag{3.3}$$
By deriving the truth using Eq. (3.3) for every instance, the collection of truths $\mathcal{X}^{*}$ which minimizes $f(\mathcal{X}^{*}, \mathcal{W})$ with fixed $\mathcal{W}$ is obtained.

Table 2: Summary of Notations

| Notation | Definition |
| $n$ | number of sentences, indexed by $i$ |
| $p$ | number of parsers, indexed by $k$ |
| $S_i$ | the $i$-th sentence in the dataset |
| $\mathcal{W}$ | set of input CPTs' weights |
| $w^S_k$ | the weight of the $k$-th parser w.r.t. the clusters |
| $w^l_k$ | the weight of the $k$-th parser w.r.t. the labels |
| $T_{ik}$ | the $k$-th input CPT for the $i$-th sentence |
| $C_i$ | set of all unique clusters from input trees for the $i$-th sentence |
| $T^{S*}_i$ | aggregated tree for the $i$-th sentence w.r.t. the tree structure |
| $T^{*}_i$ | aggregated tree for the $i$-th sentence w.r.t. the labels |
| $L_{Clu(T)}$ | clusters' labels in tree $T$ |
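A minimal Python sketch of this block coordinate descent scheme for categorical observations is shown below. This is our own illustration under simplifying assumptions, not the authors' implementation; the weight formula here mirrors the negative-log normalization that is used later in Eq. (4.6).

```python
# Illustrative truth-discovery sketch for categorical claims (not the paper's code).
import math
from collections import Counter

def truth_discovery(observations, n_iters=10):
    """observations[k][i] is source k's answer for item i (all sources answer all items)."""
    n_sources, n_items = len(observations), len(observations[0])
    weights = [1.0] * n_sources
    truths = list(observations[0])            # arbitrary initialization
    for _ in range(n_iters):
        # Truth update (Eq. 3.3): weighted vote per item.
        for i in range(n_items):
            votes = Counter()
            for k in range(n_sources):
                votes[observations[k][i]] += weights[k]
            truths[i] = votes.most_common(1)[0][0]
        # Weight update: sources far from the current truths get low weight.
        # In practice a smoothing term keeps all weights strictly positive.
        dists = [sum(obs[i] != truths[i] for i in range(n_items)) + 1e-6
                 for obs in observations]
        max_d = max(dists)
        weights = [-math.log(d / max_d) + 1e-6 for d in dists]
    return truths, weights

answers = [["a", "b", "c"], ["a", "b", "b"], ["a", "c", "c"]]
truths, weights = truth_discovery(answers)
print(truths)   # ['a', 'b', 'c'] for these toy answers
```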
Constituency Parse Tree Aggregation Model (CPTAM)
In this section, we first formally define the problem. Then, we propose our solution in two steps. In the first step, we focus on tree structure aggregation to resolve the conflict between input trees and obtain the aggregated tree structure T S * i of the CPTs. In the second step, the corresponding POS tags and constituent labels are obtained through the label aggregation. It is worth mentioning that both tree structures and tree labels are essential for adequately parsing a sentence.
Problem Definition
We define the CPT aggregation problem using the notations summarized in Table 2. Suppose there is a corpus that consists of n sentences indexed by i (i ∈ [1, n]), and p different parsers indexed by k (k ∈ [1, p]) produce CPTs for each sentence in the corpus. We use T ik to denote the k-th input CPT for the i-th sentence (Si). Each input constituency parser has two weight parameters w S k and w l k to reflect the parser's reliability with respect to structure and constituent labels, respectively. We use different weight parameters for structure and constituent labels to account for the scenarios where a parser can successfully identify the phrase structure but assign incorrect labels. A higher weight implies that the parser is of higher reliability. The CPT aggregation problem seeks an aggregated tree for a sentence (T * i ) given the input CPTs (T ik ), and estimates the qualities of parsers in the absence of ground truth.
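For concreteness, an input CPT can be encoded for aggregation purposes simply by its cluster set plus a label per cluster; the dataclass below is our own illustrative encoding, not the authors' data structure.

```python
# Illustrative encoding of a CPT for aggregation purposes (hypothetical names):
# each cluster is the set of token positions it spans, and labels map clusters
# to constituent labels / POS tags.
from dataclasses import dataclass, field
from typing import Dict, FrozenSet, List, Set

@dataclass
class CPT:
    tokens: List[str]
    clusters: Set[FrozenSet[int]] = field(default_factory=set)
    labels: Dict[FrozenSet[int], str] = field(default_factory=dict)

# "(S (NP Dogs) (VP bark))" over tokens ["Dogs", "bark"]
tree = CPT(tokens=["Dogs", "bark"])
tree.clusters = {frozenset({0}), frozenset({1}), frozenset({0, 1})}
tree.labels = {frozenset({0}): "NP", frozenset({1}): "VP", frozenset({0, 1}): "S"}
```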
Tree Structure Aggregation
We formulate our framework utilizing the truth discovery framework presented in Eq. (3.1). In the tree structure aggregation step, our goal is to minimize the overall weighted distance of the aggregated CPT (T S * i ) to the reliable input CPT (T S ik ) considering the structure only. Various distance measurements can be plugged in the optimization function shown in Eq. (3.1). We adopt RF distance defined in Eq. (4.4).
Robinson-Foulds (RF ) distance is a symmetric difference metric to calculate the distance between leaf-labeled trees [35] in terms of clusters, where a cluster refers to a maximal set of leaves with a common ancestor in a rooted tree (T ) [40]. For any two trees T1 and T2 that share the same leaf set, the RF distance is defined in Eq. (4.4):
$$RF(T_1, T_2) = |Clu(T_1)\,\Delta\,Clu(T_2)|, \tag{4.4}$$
where the operation ∆ computes the symmetric difference between two sets (i.e., A∆B = (A\B) (B\A)), function | · | computes the cardinality of the set, and Clu(T ) refers to the cluster set of tree T . Different from Tree Edit Distance (TED) [39], which takes O(y 3 ) time [11] to calculate, where y refers to the number of tokens in the sentence, RF distance can be calculated in O(|Clu(T1)| + |Clu(T2)|) time [9]. Applying the truth discovery framework (Eq. (3.1)), we formulate the CPT aggregation problem with respect to the tree structure as shown in Eq. (4.5). Each parser has a weight parameter w S k to reflect the reliability of that parser in terms of the structure, and
$\mathcal{W}^S = \{w^S_1, w^S_2, \ldots, w^S_p\}$ refers to the set of all parsers' weights in terms of the structure. The higher the weight, the more reliable the parser. The aggregated tree $T^{S*}$ is the one that minimizes the overall weighted RF distances:

$$\min_{T^{S*}, \mathcal{W}^S} f(T^{S*}, \mathcal{W}^S) = \sum_{k=1}^{p} w^S_k \sum_{i=1}^{n} RF(T^{S*}_i, T^S_{ik}). \tag{4.5}$$
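Computing the RF distance of Eq. (4.4) between two trees represented as cluster sets is a set symmetric difference; the small helper below is our own illustration, not code from the paper.

```python
# Illustrative RF distance (Eq. 4.4) between two trees given as cluster sets.
def rf_distance(clusters_a, clusters_b):
    """Size of the symmetric difference of the two cluster sets."""
    return len(set(clusters_a) ^ set(clusters_b))

t1 = {frozenset({0}), frozenset({1, 2}), frozenset({0, 1, 2})}
t2 = {frozenset({0, 1}), frozenset({2}), frozenset({0, 1, 2})}
print(rf_distance(t1, t2))  # 4
```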
We follow the block coordinate descent method introduced in Section 3. To update the weights of the input constituency parsers in the objective function Eq. (4.5), T S * i is fixed, and w S k is updated as follows:
$$w^S_k = -\log\left(\frac{\sum_{i} RF(T^{S*}_i, T^S_{ik})}{\max_{k'} \sum_{i} RF(T^{S*}_i, T^S_{ik'})}\right). \tag{4.6}$$
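As a side illustration, the structure-weight update of Eq. (4.6) can be computed directly from each parser's total RF distance to the current aggregated trees. The epsilon term below is our own addition to avoid taking the logarithm of zero.

```python
# Illustrative structure-weight update (Eq. 4.6): a parser's weight is the
# negative log of its total RF distance, normalized by the worst parser's
# total distance (the worst parser thus gets a weight close to zero).
import math

def update_structure_weights(total_rf_per_parser, eps=1e-9):
    max_dist = max(total_rf_per_parser)
    return [-math.log((d + eps) / (max_dist + eps)) for d in total_rf_per_parser]

print(update_structure_weights([120.0, 300.0, 80.0]))
```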
This means that the weight of a parser is inversely proportional to the maximum sum of the distance between its input trees (we use T S ik to refer to the input CPT with respect to the structure) and the aggregated trees. Next, we update the aggregated parse tree for each sentence to minimize the difference between the aggregated parse tree and the input CPTs by treating the weights as fixed. The aggregated tree is updated following Eq. (3.3) as shown in Eq. (4.7):
$$T^{S*}_i \leftarrow \operatorname*{argmin}_{T^{S*}_i} \sum_{k=1}^{p} w^S_k \sum_{i=1}^{n} RF(T^{S*}_i, T^S_{ik}). \tag{4.7}$$
We propose an optimal solution for Eq. (4.7).
The Optimal Solution
We present an optimal solution to obtain an aggregated tree by solving the optimization problem in Eq. (4.7). Our proposed approach constructs the aggregated tree by adding clusters with weighted support greater than or equal to 50%, where support refers to the aggregated weight of CPTs containing that cluster. To establish the solution, we first demonstrate some properties of an optimal aggregated tree.
LEMMA 4.1. The cluster set $Clu(T^{S*}_i)$ in Eq. (4.7) satisfies the constraint $Clu(T^{S*}_i) \subseteq C_i$, where $C_i = \cup_{k=1}^{p} Clu(T^S_{ik})$.
Proof. We can prove this lemma by contradiction. Suppose $Clu(T^{S*}_i)$ is the optimal solution to Eq. (4.7) and there exists a cluster $c \neq \emptyset$ such that $c \in Clu(T^{S*}_i)$ but $c \notin C_i$. Therefore, $c \notin T^S_{ik}, \forall k$. Let $Clu(T'^{S*}_i) = Clu(T^{S*}_i) - c$. Then, based on the definition of RF distance, we have $\sum_{k=1}^{p} w^S_k \sum_{i=1}^{n} RF(T^{S*}_i, T^S_{ik}) > \sum_{k=1}^{p} w^S_k \sum_{i=1}^{n} RF(T'^{S*}_i, T^S_{ik})$, which contradicts the assumption that $Clu(T^{S*}_i)$ is the optimal solution.
This property suggests that the search space of the solution to Eq. (4.7) is Ci. That is, all clusters in the aggregated tree must be present in at least one of the input CPTs.
LEMMA 4.2. For any cluster $c$, if $\sum_{k=1}^{p} w^S_k \mathbb{1}(c \in T^S_{ik}) > 0.5 \sum_{k=1}^{p} w^S_k$, then $c \in Clu(T^{S*}_i)$; and if $\sum_{k=1}^{p} w^S_k \mathbb{1}(c \in T^S_{ik}) < 0.5 \sum_{k=1}^{p} w^S_k$, then $c \notin Clu(T^{S*}_i)$, where $\mathbb{1}(\cdot)$ is the indicator function.
Proof. The proof is similar to the proof for Lemma 4.1. We can prove the two statements by contradiction.
Therefore, for the optimal solution, the clusters that have more than 50% weighted support from all the input CPTs should be included in the aggregated tree.
LEMMA 4.3. For any clusters $c_1$ and $c_2$, if $\sum_{k=1}^{p} w^S_k \mathbb{1}(c_1 \in T^S_{ik}) > 0.5 \sum_{k=1}^{p} w^S_k$ and $\sum_{k=1}^{p} w^S_k \mathbb{1}(c_2 \in T^S_{ik}) > 0.5 \sum_{k=1}^{p} w^S_k$, then $c_1$ and $c_2$ must be compatible.
Proof. Note that for any constituency parse tree $T^S_{ik}$, its clusters must be compatible. Therefore, for a cluster $c$, all clusters incompatible with $c$ can only occur in trees in which $c$ does not occur.
If $\sum_{k=1}^{p} w^S_k \mathbb{1}(c \in T^S_{ik}) > 0.5 \sum_{k=1}^{p} w^S_k$, then for every $c'$ not compatible with $c$, $\sum_{k=1}^{p} w^S_k \mathbb{1}(c' \in T^S_{ik}) < 0.5 \sum_{k=1}^{p} w^S_k$, and based on Lemma 4.2, $c' \notin Clu(T^{S*}_i)$.

There is a special case when $\sum_{k=1}^{p} w^S_k \mathbb{1}(c \in T^S_{ik}) = 0.5 \sum_{k=1}^{p} w^S_k$. To consider this situation, we add the compatibility constraint as follows:

$$c_1 \cap c_2 = \emptyset, \ \text{or} \ c_1 \subset c_2, \ \text{or} \ c_2 \subset c_1, \quad \forall c_1, c_2 \in Clu(T^{S*}_i). \tag{4.8}$$
This constraint ensures that the aggregated tree follows the syntactic structure requirement of constituency parsing. Therefore, all the clusters in the aggregated tree should be compatible, which means any two of them should either be disjoint or one should be a proper subset of the other.
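The compatibility constraint of Eq. (4.8) reduces to a simple set test; the helper below is our own illustration.

```python
# Illustrative compatibility test for two clusters (Eq. 4.8): they must be
# disjoint or one must contain the other.
def compatible(c1: frozenset, c2: frozenset) -> bool:
    return c1.isdisjoint(c2) or c1 <= c2 or c2 <= c1

print(compatible(frozenset({0, 1}), frozenset({2})))        # True (disjoint)
print(compatible(frozenset({0, 1}), frozenset({0, 1, 2})))  # True (nested)
print(compatible(frozenset({0, 1}), frozenset({1, 2})))     # False (crossing)
```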
In the cases where $\sum_{k=1}^{p} w^S_k \mathbb{1}(c \in T^S_{ik}) = 0.5 \sum_{k=1}^{p} w^S_k$, we propose to find the maximum number of compatible clusters to add into the aggregated tree $Clu(T^{S*}_i)$. Although adding these clusters into the constructed aggregation tree does not affect the resulting total RF distance, we favor aggregated trees with more compatible clusters since they retain as many details from the input trees as possible. We conduct the following steps. First, we form a set $C'_i$ that includes all clusters such that $\sum_{k=1}^{p} w^S_k \mathbb{1}(c \in T^S_{ik}) = 0.5 \sum_{k=1}^{p} w^S_k$. Then we construct the incompatibility graph by treating the clusters as nodes and adding an edge if two clusters are not compatible. Finding the maximum number of compatible clusters is then equivalent to the maximum independent set problem [45]. This strongly NP-hard problem can be addressed by existing methods [25,18].

Based on the properties of the optimal solution, we construct the aggregated tree $T^{S*}_i$ as follows. We compute the weighted support for each cluster $c$ in $C_i$. If $\sum_{k=1}^{p} w^S_k \mathbb{1}(c \in T^S_{ik}) > 0.5 \sum_{k=1}^{p} w^S_k$, then $c$ is added to the aggregated tree $Clu(T^{S*}_i)$. If $\sum_{k=1}^{p} w^S_k \mathbb{1}(c \in T^S_{ik}) = 0.5 \sum_{k=1}^{p} w^S_k$, then we find the maximum number of compatible clusters $C^m_i$ by solving the maximum independent set problem. We then add these clusters to the aggregated tree $Clu(T^{S*}_i)$. Finally, we re-order the clusters in $Clu(T^{S*}_i)$ to form $T^{S*}_i$. The pseudo-code of our proposed algorithm is given in Algorithm (1).
Algorithm 1 Optimal solution to Eq. (4.7)
Input: The set of unique clusters in all input CPTs for the i-th sentence ($C_i$), weights ($w^S_k$).
Output: Aggregated CPT ($T^{S*}_i$).
  $Clu(T^{S*}_i) = \emptyset$; $C'_i = \emptyset$;
  for $c$ in $C_i$ do
    if $\sum_{k=1}^{p} w^S_k \mathbb{1}(c \in T^S_{ik}) > 0.5 \sum_{k=1}^{p} w^S_k$ then $Clu(T^{S*}_i) = Clu(T^{S*}_i) \cup \{c\}$
    if $\sum_{k=1}^{p} w^S_k \mathbb{1}(c \in T^S_{ik}) = 0.5 \sum_{k=1}^{p} w^S_k$ then $C'_i = C'_i \cup \{c\}$;
  Construct incompatibility graph $g$ for $C'_i$;
  $C^m_i$ = Maximum-Independent-Set($g$);
  $Clu(T^{S*}_i) = Clu(T^{S*}_i) \cup C^m_i$;
  return $T^{S*}_i$

THEOREM 4.1. The aggregated tree $T^{S*}_i$ calculated by Algorithm (1) is the optimal solution to the following problem:

$$T^{S*}_i \leftarrow \operatorname*{argmin}_{T^{S*}_i} \sum_{k=1}^{p} w^S_k \sum_{i=1}^{n} RF(T^{S*}_i, T^S_{ik})$$

such that $c_1 \cap c_2 = \emptyset$, or $c_1 \subset c_2$, or $c_2 \subset c_1$, $\forall c_1, c_2 \in Clu(T^{S*}_i)$.
Proof. In Algorithm (1), we consider all the clusters with $\sum_{k=1}^{p} w^S_k \mathbb{1}(c \in T^S_{ik}) > 0.5 \sum_{k=1}^{p} w^S_k$ and add them to $T^{S*}_i$. From Lemma (4.3), we show that all of these clusters are compatible, and from Lemma (4.2), we show that adding these clusters minimizes the RF distance. Adding all these clusters results in the minimum RF distance, implying that the objective function is minimized. Applying a maximum independent set algorithm on the incompatibility graph provides the maximum number of compatible clusters among those with $\sum_{k=1}^{p} w^S_k \mathbb{1}(c \in T^S_{ik}) = 0.5 \sum_{k=1}^{p} w^S_k$. Adding all these clusters to $T^{S*}_i$ results in the maximum set of compatible clusters. Thus the solution is optimal.

Time Complexity

LEMMA 4.4. The incompatibility graph constructed for clusters with weighted support equal to 50% is bipartite.

Proof. Let $C_1, \ldots, C_k$ be the clusters with 50% support from the input constituency parse trees. The set of all constituency parse trees with respect to structure is denoted by $\mathcal{T}^S = \{T^S_{i1}, T^S_{i2}, T^S_{i3}, \ldots, T^S_{ip}\}$.
Assume that a cluster $C_i$ is supported by trees $\mathcal{T}^S_i \subset \mathcal{T}^S$. If $C_i$ is not compatible with $C_j$, then it implies that $\mathcal{T}^S_i = \mathcal{T}^S \setminus \mathcal{T}^S_j$. Otherwise, $\mathcal{T}^S_i$ would have a non-empty intersection with $\mathcal{T}^S_j$, which would imply that there is a tree $T^S_s$ that supports both $C_i$ and $C_j$, contradicting the assumption that $C_i$ and $C_j$ are incompatible.
We prove Lemma 4.4 by contradiction. Let's assume that the incompatibility graph $G$ for clusters $C_1, \ldots, C_k$ is not bipartite. It means that $G$ contains an odd cycle. Without loss of generality, assume that this cycle is $(C_1, C_2, \ldots, C_{2p+1})$. That is, $C_2$ is not compatible with $C_1$, $C_3$ is not compatible with $C_2$, and so on. Then, by our previous observation, $C_2$ must be supported by $\mathcal{T}^S \setminus \mathcal{T}^S_1$, $C_3$ must be supported by $\mathcal{T}^S_1$, and so on. That is, for odd $i \leq 2p+1$, $C_i$ must be supported by $\mathcal{T}^S_1$. Then $C_{2p+1}$ and $C_1$ are supported by the same set of trees, which means that $C_1$ and $C_{2p+1}$ must be compatible. This is a contradiction (i.e., $(C_1, C_2, \ldots, C_{2p+1})$ could not be a cycle).
The existing methods [18] solve the maximum independent set problem for a bipartite graph with time complexity $O(z^{2.5} + \text{output size})$, where $z$ refers to the number of nodes in the incompatibility graph. As the expected output is the list of compatible clusters, the output size is in the order of $O(z)$. The for loop that iterates over the cluster set $C_i$ runs in $O(|C_i|)$ time. Therefore, the overall run time of Algorithm (1) is $O(|C_i| + z^{2.5} + z)$. However, in practice, $z$ is very small compared to $|C_i|$ because it only contains clusters with support exactly equal to 50%. Thus, Algorithm (1) has, in practice, a near-linear run time in $|C_i|$.
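To make the structure aggregation step concrete, the following Python sketch implements the majority-support part of Algorithm 1. It is our own illustration, not the authors' code, and the exact-50% tie case (which requires the maximum independent set on the incompatibility graph) is omitted for brevity.

```python
# Illustrative sketch of the structure-aggregation step (Algorithm 1):
# keep clusters whose weighted support exceeds half of the total parser weight.
def aggregate_structure(input_cluster_sets, weights):
    """input_cluster_sets: one set of clusters per parser; weights: parser weights."""
    total = sum(weights)
    candidates = set().union(*input_cluster_sets)
    aggregated = set()
    for c in candidates:
        support = sum(w for cs, w in zip(input_cluster_sets, weights) if c in cs)
        if support > 0.5 * total:
            aggregated.add(c)
    return aggregated

trees = [
    {frozenset({0, 1}), frozenset({0, 1, 2})},
    {frozenset({0, 1}), frozenset({0, 1, 2})},
    {frozenset({1, 2}), frozenset({0, 1, 2})},
]
print(aggregate_structure(trees, weights=[1.0, 1.0, 1.0]))
# keeps {0, 1} and {0, 1, 2}; the minority cluster {1, 2} is dropped
```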
Constituent Label Aggregation
After obtaining the aggregated structures, we aggregate the corresponding labels provided by the parsers. In the constituent label aggregation step, we aim to minimize the objective function in Eq. (4.9) with respect to $L_{Clu(T^{S*}_i)}$ and $\mathcal{W}^l$, where $L_{Clu(T^{S*}_i)}$ refers to the labels associated with the aggregated structure, and $\mathcal{W}^l = \{w^l_1, w^l_2, \ldots, w^l_p\}$ refers to the set of all parsers' weights with respect to the constituent labels:

$$\min_{T^{*}, \mathcal{W}^l} f(T^{*}, \mathcal{W}^l) = \sum_{k=1}^{p} w^l_k \sum_{i=1}^{n} d\big(L_{Clu(T^{S*}_i)}, L_{Clu(T^S_{ik})}\big), \tag{4.9}$$
where L Clu(T S ik ) refers to the constituent labels provided by parsers for the obtained clusters in T S * i . Accordingly, we show the weight update by taking differentiation with respect to W l in Eq. (4.10):
$$w^l_k = -\log\left(\frac{\sum_{i} d\big(L_{Clu(T^{S*}_i)}, L_{Clu(T^S_{ik})}\big)}{\max_{k'} \sum_{i} d\big(L_{Clu(T^{S*}_i)}, L_{Clu(T^S_{ik'})}\big)}\right), \tag{4.10}$$

where $d$ refers to the zero-one loss function. Similarly, the constituent label aggregation update is shown in Eq. (4.11):

$$T^{*}_i \leftarrow \operatorname*{argmin}_{L_{Clu(T^{S*}_i)}} \sum_{k=1}^{p} w^l_k \sum_{i=1}^{n} d\big(L_{Clu(T^{S*}_i)}, L_{Clu(T^S_{ik})}\big). \tag{4.11}$$

Experiments

In this section, we conduct experiments on various datasets with different languages from different domains. We start with the datasets in Section 5.1. The baseline methods and evaluations are discussed in Sections 5.2 and 5.3, respectively. We demonstrate the main experimental results in Section 5.4 and ablation studies in Section 5.5.

Datasets

We use six benchmark datasets from different domains and different languages for evaluation.

Penn Treebank-3 2 selected 2,499 stories from a three-year Wall Street Journal collection in English for syntactic annotation.

OntoNotes 3 consists of a large corpus comprising various genres of text (e.g., news, weblogs, Usenet newsgroups, broadcast, and talk shows) with structural information in three languages (English, Chinese, and Arabic). The Arabic portion of the dataset is not included in our experiments since the parsers' tokenization does not align with the ground truth.
Genia 4 is constructed from research abstracts in the molecular biology domain. Approximately 2500 abstracts are annotated from the MEDLINE database.
TIGER Corpus 5 consists of approximately 40,000 sentences from the German newspaper "Frankfurter Rundschau". The corpus was annotated with part-of-speech and syntactic structures in the project TIGER (DFG).
French Treebank 6 consists of approximately 22000 sentences from the articles of French newspaper "Le Monde". Table 3 summarizes the statistics of the datasets.
Baselines
We compare CPTAM with two categories of baselines. The first category of baselines is the individual state-of-the-art input constituency parsers, including CoreNLP [27], Berkeley 7 [19], AllenNLP 8 [14], and HanLP [17]. We have chosen these parsers as they are among the most starred NLP libraries on GitHub, demonstrating their wide application in industry and academia.

Footnotes:
1 Our implementation code is available at https://github.com/kulkarniadithya/CPTAM
2 https://catalog.ldc.upenn.edu/LDC99T42
3 https://catalog.ldc.upenn.edu/LDC2013T19
4 https://github.com/allenai/genia-dependency-trees/tree/master/original_data
5 https://www.ims.uni-stuttgart.de/documents/ressourcen/korpora/tiger-corpus/download/start.html
6 http://ftb.llf-paris.fr/telecharger.php?langue=en
7 We use the pretrained model provided by spaCy.
8 This parser can parse sentences in English only.
The second category of baselines is the tree aggregation methods 9, including:
• Majority Rule Consensus (MRC) [28]. It constructs aggregation trees containing clusters with support greater than 50%.
• Greedy Consensus (GC) [8]. The aggregated trees are constructed progressively to have all the clusters whose support is above a threshold (30% for OntoNotes Chinese,TIGER Corpus, and French Treebank, and 20% for the other datasets) and compatible with the constructed tree. With these thresholds, this baseline essentially constructs aggregation trees with all compatible clusters from input trees.
• Strict Consensus (SC) [8]. It constructs aggregation trees containing clusters with support of 100%.
These methods only consider the aggregation of tree structures but not labels. Therefore, we apply Majority Voting (MV) to aggregate the labels after the tree aggregation step, where the label with the highest frequency is chosen for each cluster. We also compare with CPTAM-W, which is CPTAM without weight estimation. CPTAM-W considers clusters with support greater than or equal to 50%; thus, it is more aggressive compared to MRC, which considers clusters with support greater than 50% only, and more conservative compared to GC, which includes all compatible clusters.
Evaluation Measurements
The performance is evaluated by different standard metrics in the experiments. To evaluate the performance based on the real-life usage of constituency parsers, we also include the POS tags of individual tokens as part of the parsing results. Therefore, the following evaluation metric is stricter than Evalb, the standard metric for evaluating phrase structure. We report Precision, Recall, and F1 as follows:
$$\text{Precision}(P) = \frac{\#\text{Correct Constituents}}{\#\text{Constituents in parser output}} \tag{5.12}$$

$$\text{Recall}(R) = \frac{\#\text{Correct Constituents}}{\#\text{Constituents in gold standard}} \tag{5.13}$$

$$F1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}. \tag{5.14}$$
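These metrics can be computed directly over sets of labeled constituents; the small function below is our own illustration, using a hypothetical (span, label) encoding for constituents.

```python
# Illustrative computation of Eqs. (5.12)-(5.14) over labeled constituents,
# where each constituent is a (span, label) pair.
def prf1(predicted: set, gold: set):
    correct = len(predicted & gold)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if correct else 0.0
    return precision, recall, f1

pred = {((0, 2), "NP"), ((3, 5), "VP"), ((0, 5), "S")}
gold = {((0, 2), "NP"), ((2, 5), "VP"), ((0, 5), "S")}
print(prf1(pred, gold))  # (0.666..., 0.666..., 0.666...)
```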
Accordingly, the same metrics Precision ($P^S$), Recall ($R^S$), and F1 ($F1^S$) are defined to evaluate the performance considering only the tree structure.

Table 5: The comparison between the rankings of parsers' performance with the rankings of estimated weights (columns $F1^S$, $w^S$, $Acc^l$, $w^l$ for Penn Treebank-3, OntoNotes (English), Genia, OntoNotes (Chinese), French Treebank, and TIGER Corpus)

Experimental Results

The CPT aggregation performance comparison is reported in Table 4. For the datasets whose ground-truth labels are different from the labels provided by the parsers, we do not consider them for constituent label aggregation 10. Among the input parsers, it is clear that none of them consistently performs the best on all datasets. Specifically, Hanlp performs poorly on English but the best on Chinese. This may be because Hanlp targets the Chinese language even though it is software for multiple languages. Allennlp performs the best on Penn Treebank-3 and OntoNotes (English) among the four parsers but does not perform well on the Genia dataset in the biomedical domain.
The proposed CPTAM significantly outperforms all the state-of-the-art aggregation methods in terms of Precision, Recall, and F1 score, demonstrating the power of the proposed method in resolving the conflicts between input CPTs. Comparing CPTAM-W and CPTAM, CPTAM further improves in all metrics, indicating the necessity and effectiveness of the weight estimation in the truth discovery process.
Compared with individual parsers, CPTAM performs consistently well in representing the constituency grammar of the sentence on all datasets. CPTAM performs the best on two out of four datasets and remains competitive on the other two. In contrast, AllenNLP and Hanlp are each the best on only one of the four datasets, and CoreNLP and Berkeley are not the best on any dataset. This shows that the proposed CPTAM can consistently obtain high-quality aggregated CPTs over different languages and domains.
Further, we study the accuracy of the weight estimations of CPTAM. We compare the rankings given by the estimated weights with the rankings of parsers' real quality, and the results are shown in Table 5. To evaluate the weights estimated for the structure aggregation, we compute the rank of parsers' quality by their structure F1 scores (F 1 S ) compared with the ground truth and by the weight estimation w S k computed in Eq. (4.6), where the numbers indicate the rank. Similarly, for the label aggregation, we compute the rank of parsers' quality by their label accuracy (Acc l ) and by the weight estimation w l k computed in Eq. (4.10). It is clear that the parsers' quality varies across different languages and domains. The ranks of parsers are exactly the same between their real quality and the estimated weights. It illustrates that the weight calculated by the proposed CPTAM properly estimates parsers' quality in the absence of ground truth. These experiments also suggest that parser users can first apply CPTAM on the sampled corpus to estimate the quality of individual parsers on the given corpus and then use the best parser to achieve high-quality parsing results and high efficiency.
Ablation Study
To gain insights into our framework, we investigate the effectiveness of the tree structure aggregation step as it is the foundation of CPTAM. To evaluate the performance on the structure, the RF distance (RF dist.) is calculated between the parser output and ground truth. We also calculate Precision (P S ), Recall (R S ), and F1 score (F 1 S ) considering the tree structure only. The ablation study results are shown in Table 6 and Table 7. Table 6 and Table 7 illustrate a strong correlation between the RF distance and F1 score on all datasets. The lower the RF distance, the higher the F1 score. This correlation indicates that RF distance is a proper measurement for the quality of constituency parse trees. CPTAM outperforms all aggregation baseline approaches on all datasets. It consistently identifies proper clusters in the tree by correctly estimating the parsers' quality. As a result, CPTAM outperforms or stays competitive compared to the best parser on all datasets as well.
Conclusion
This paper adopts the truth discovery idea to aggregate CPTs from different parsers by estimating the parsers' reliability in the absence of ground truth. We aggregate the input CPTs in two steps: tree structure aggregation and constituent label aggregation. The block coordinate descent method is applied to obtain solutions through an iterative process, and an optimal solution is derived to construct aggregation trees that can minimize the weighted RF distance. We further provide theoretical analysis to show that the proposed approach gives the optimal solution. The proposed solution has near-linear run time in practice for the tree structure aggregation step. Our experimental results illustrate that the proposed solution CPTAM outperforms the state-of-the-art aggregation baselines and consistently obtains high-quality aggregated CPTs for various datasets in the absence of ground truth. We further illustrate that our adopted weight update correctly estimates parsers' quality. Empirically, the importance of the tree structure aggregation step is demonstrated in the ablation study. Overall, we present the effectiveness of the proposed CPTAM across different languages and domains.
Table 3: Statistics of Datasets
Table 4: CPT aggregation performance comparison on Penn Treebank-3, OntoNotes (English, Chinese), and Genia datasets.
Table 6: The tree structure aggregation performance comparison on Penn Treebank-3, OntoNotes (English), and Genia datasets

Table 7: The tree structure aggregation performance comparison on OntoNotes (Chinese), French Treebank and TIGER Corpus datasets

|          | OntoNotes (Chinese) |       |       |       | French Treebank |       |       |       | TIGER Corpus |       |       |       |
|          | RF dist. | P^S   | R^S   | F1^S  | RF dist. | P^S   | R^S   | F1^S  | RF dist. | P^S   | R^S   | F1^S  |
| CoreNLP  | 108817   | 96.47 | 96.72 | 96.59 | 212550   | 91.06 | 91.56 | 91.31 | 740059   | 65.67 | 80.02 | 72.14 |
| Berkeley | 406708   | 92.90 | 84.16 | 88.31 | 344070   | 85.65 | 77.07 | 81.13 | 220183   | 93.97 | 85.36 | 89.46 |
| HanLP    | 144618   | 95.73 | 95.78 | 95.75 | 1196428  | 44.90 | 37.82 | 41.06 | 818677   | 66.22 | 62.36 | 64.23 |
| MRC      | 200008   | 95.41 | 94.04 | 94.72 | 231755   | 90.06 | 88.05 | 89.04 | 229139   | 92.95 | 82.79 | 87.58 |
| GC       | 206438   | 93.93 | 95.06 | 94.49 | 308335   | 83.80 | 89.08 | 86.35 | 315613   | 87.66 | 84.38 | 85.99 |
| SC       | 213173   | 95.83 | 92.84 | 94.31 | 314548   | 90.97 | 79.30 | 84.74 | 293991   | 93.07 | 80.23 | 86.17 |
| CPTAM-W  | 198769   | 95.34 | 94.20 | 94.77 | 229344   | 90.8  | 88.35 | 89.56 | 228845   | 92.83 | 83.97 | 88.18 |
| CPTAM    | 114733   | 96.39 | 96.49 | 96.44 | 212697   | 91.05 | 91.54 | 91.29 | 220183   | 93.97 | 85.36 | 89.46 |
Taking CoreNLP output as an example in TIGER corpus, a chunk of the sentence is tagged as (NUR (S (NOUN Konzernchefs) (VERB lehnen) while the ground truth for the same span is (NN-SB Konzernchefs) (VVFIN-HD lehnen)
Acknowledgement

The work was supported in part by the National Science Foundation under Grant NSF IIS-2007941. Any opinions, findings, and conclusions, or recommendations expressed in this document are those of the author(s) and should not be interpreted as the views of any U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes, notwithstanding any copyright notation hereon.
[1] A Abeillé, L Clément, and L Liégeois. Un corpus arboré pour le français: Le french treebank. Traitement Automatique des Langues, 60(3):19-43, 2019.
[2] Edward N Adams III. Consensus techniques and the comparison of taxonomic trees. Systematic Biology, 21(4):390-397, 1972.
[3] Giuseppe Attardi and Felice Dell'Orletta. Reverse revision and linear tree combination for dependency parsing. In Proc. of NAACL, pages 261-264, 2009.
[4] Olaf RP Bininda-Emonds. Phylogenetic supertrees: Combining information to reveal the tree of life, volume 4. Springer, 2004.
[5] Olaf RP Bininda-Emonds, John L Gittleman, and Mike A Steel. The (super) tree of life: Procedures, problems, and prospects. Annual Review of Ecology and Systematics, 33(1):265-289, 2002.
[6] S Brants, S Dipper, P Eisenberg, S Hansen-Schirra, E König, Wolfgang Lezius, Christian Rohrer, George Smith, and Hans Uszkoreit. Tiger: Linguistic interpretation of a german corpus. Research on Language and Computation, 2(4):597-620, 2004.
[7] Kåre Bremer. Combinable component consensus. Cladistics, 6(4):369-372, 1990.
[8] D Bryant. A classification of consensus methods for phylogenetics. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 61:163-184, 2003.
[9] William HE Day. Optimal algorithms for comparing trees with labeled leaves. Journal of Classification, 2(1):7-28, 1985.
[10] Glenn De'Ath. Boosted trees for ecological modeling and prediction. Ecology, 88(1):243-251, 2007.
[11] Erik D Demaine, Shay Mozes, Benjamin Rossman, and Oren Weimann. An optimal decomposition algorithm for tree edit distance. ACM Transactions on Algorithms (TALG), 6(1):1-19, 2009.
[12] Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proc. of KDD, pages 601-610, 2014.
[13] J Felsenstein. Inferring phylogenies, volume 2. Sinauer Associates, Sunderland, MA, 2004.
[14] Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. AllenNLP: A deep semantic natural language processing platform. In Proc. of Workshop for NLP Open Source Software (NLP-OSS), pages 1-6, 2018.
[15] Pablo A Goloboff and Diego Pol. Semi-strict supertrees. Cladistics, 18(5):514-525, 2002.
[16] Johan Hall, Jens Nilsson, and Joakim Nivre. Single malt or blended? a study in multilingual parser optimization. Trends in Parsing Technology, pages 19-33, 2010.
[17] Han He. HanLP: Han Language Processing, 2020.
[18] T Kashiwabara, S Masuda, K Nakajima, and T Fujisawa. Generation of maximum independent sets of a bipartite graph and maximum cliques of a circular-arc graph. Journal of Algorithms, 13(1):161-174, 1992.
[19] N Kitaev and D Klein. Constituency parsing with a self-attentive encoder. In Proc. of ACL, pages 2676-2686, 2018.
[20] A Kuncoro, M Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. Distilling an ensemble of greedy dependency parsers into one MST parser. In Proc. of EMNLP, pages 1744-1753, 2016.
[21] Qi Li, Yaliang Li, Jing Gao, Lu Su, Bo Zhao, Murat Demirbas, Wei Fan, and Jiawei Han. A confidence-aware approach for truth discovery on long-tail data. PVLDB, 8(4), 2014.
[22] Qi Li, Yaliang Li, Jing Gao, Bo Zhao, Wei Fan, and Jiawei Han. Resolving conflicts in heterogeneous data by truth discovery and source reliability estimation. In Proc. of SIGMOD, pages 1187-1198, 2014.
[23] Yaliang Li, Jing Gao, Chuishi Meng, Qi Li, Lu Su, Bo Zhao, Wei Fan, and Jiawei Han. A survey on truth discovery. ACM SIGKDD Explorations Newsletter, 17(2):1-16, 2016.
[24] Yaliang Li, Qi Li, Jing Gao, Lu Su, Bo Zhao, Wei Fan, and Jiawei Han. On the discovery of evolving truth. In Proc. of KDD, pages 675-684, 2015.
[25] J Liu. Maximal independent sets in bipartite graphs. Journal of Graph Theory, 17(4):495-507, 1993.
[26] Fenglong Ma, Yaliang Li, Qi Li, Minghui Qiu, Jing Gao, Shi Zhi, Lu Su, Bo Zhao, Heng Ji, and Jiawei Han. Faitcrowd: Fine grained truth discovery for crowdsourced data aggregation. In Proc. of KDD, pages 745-754, 2015.
[27] Christopher D Manning, M Surdeanu, J Bauer, Jenny R Finkel, S Bethard, and D McClosky. The stanford corenlp natural language processing toolkit. In Proc. of ACL: System Demonstrations, pages 55-60, 2014.
[28] T. Margush and F.R. McMorris. Consensus n-trees. Bulletin of Mathematical Biology, 43(2):239-244, 1981.
[29] An T Nguyen, Byron C Wallace, J Li, A Nenkova, and M Lease. Aggregating and predicting sequence labels from crowd annotations. In Proc. of ACL, page 299, 2017.
[30] Joakim Nivre and Ryan McDonald. Integrating graph-based and transition-based dependency parsers. In Proc. of ACL, pages 950-958, 2008.
[31] T Ohta, Y Tateisi, J Kim, H Mima, and J Tsujii. The genia corpus: An annotated research abstract corpus in molecular biology domain. In Proc. of the Human Language Technology Conference, pages 73-77, 2002.
[32] Sameer S Pradhan and N Xue. Ontonotes: The 90% solution. In Proc. of NAACL, pages 11-12, 2009.
[33] P Probst, Marvin N Wright, and A Boulesteix. Hyperparameters and tuning strategies for random forest. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9(3):e1301, 2019.
[34] Wei Ren, Randal W Beard, and Ella M Atkins. A survey of consensus problems in multi-agent coordination. In Proc. of the American Control Conference, pages 1859-1864, 2005.
[35] David F Robinson and Leslie R Foulds. Comparison of phylogenetic trees. Mathematical Biosciences, 53(1-2):131-147, 1981.
[36] Nasim Sabetpour, Adithya Kulkarni, and Qi Li. Optsla: An optimization-based approach for sequential label aggregation. In Proc. of EMNLP: Findings, pages 1335-1340, 2020.
[37] Nasim Sabetpour, Adithya Kulkarni, Sihong Xie, and Qi Li. Truth discovery in sequence labels from crowds. In Proc. of ICDM, 2021.
[38] Kenji Sagae and Alon Lavie. Parser combination by reparsing. In Proc. of NAACL, pages 129-132, 2006.
[39] Stefan Schwarz, Mateusz Pawlik, and Nikolaus Augsten. A new perspective on the tree edit distance. In SISAP, pages 156-170. Springer, 2017.
[40] C Semple, M Steel, et al. Phylogenetics, volume 24. Oxford University Press on Demand, 2003.
[41] Zhi Shi, Fan Yang, Zheyi Zhu, Qi Li, Zhaoran Wang, and Jiawei Han. Dynamic truth discovery on numerical data. In Proc. of ICDM, pages 817-826, 2018.
[42] Edwin D. Simpson and Iryna Gurevych. A Bayesian approach for sequence tagging with crowds. In Proc. of EMNLP-IJCNLP, pages 1093-1104, 2019.
[43] Robert R Sokal and F James Rohlf. Taxonomic congruence in the leptopodomorpha re-examined. Systematic Zoology, 30(3):309-325, 1981.
[44] Mihai Surdeanu and Christopher D Manning. Ensemble models for dependency parsing: Cheap and good? In Proc. of NAACL, pages 649-652, 2010.
[45] Robert Endre Tarjan and Anthony E Trojanowski. Finding a maximum independent set. SIAM Journal on Computing, 6(3):537-546, 1977.
[46] A Taylor, M Marcus, and B Santorini. The penn treebank: An overview. In Treebanks, pages 5-22. Springer, 2003.
[47] L Yao, L Su, Q Li, Y Li, F Ma, J Gao, and A Zhang. Online truth discovery on time series data. In Proc. of SDM, pages 162-170. SIAM, 2018.
[48] Xiaoxin Yin, Jiawei Han, and S Yu Philip. Truth discovery with multiple conflicting information providers on the web. IEEE Transactions on Knowledge and Data Engineering, 20(6):796-808, 2008.
[49] D. Zhou, J. C. Platt, S. Basu, and Y. Mao. Learning from the wisdom of crowds by minimax entropy. In NeurIPS, pages 2204-2212, 2012.
| [
"https://github.com/kulkarniadithya/CPTAM",
"https://github.com/allenai/genia-dependency-trees/tree/master/original_data"
] |
[
"SIRE: Separate Intra-and Inter-sentential Reasoning for Document-level Relation Extraction",
"SIRE: Separate Intra-and Inter-sentential Reasoning for Document-level Relation Extraction"
] | [
"Shuang Zeng zengs@pku.edu.cn \nThe MOE Key Laboratory of Computational Linguistics\nPeking University\nChina\n\nSchool of Software and Microelectronics\nPeking University\nChina\n",
"Yuting Wu \nThe MOE Key Laboratory of Computational Linguistics\nPeking University\nChina\n\nWangxuan Institute of Computer Technology\nPeking University\nChina\n",
"Baobao Chang \nThe MOE Key Laboratory of Computational Linguistics\nPeking University\nChina\n"
] | [
"The MOE Key Laboratory of Computational Linguistics\nPeking University\nChina",
"School of Software and Microelectronics\nPeking University\nChina",
"The MOE Key Laboratory of Computational Linguistics\nPeking University\nChina",
"Wangxuan Institute of Computer Technology\nPeking University\nChina",
"The MOE Key Laboratory of Computational Linguistics\nPeking University\nChina"
] | [] | Document-level relation extraction has attracted much attention in recent years. It is usually formulated as a classification problem that predicts relations for all entity pairs in the document. However, previous works indiscriminately represent intra-and inter-sentential relations in the same way, confounding the different patterns for predicting them. Besides, they create a document graph and use paths between entities on the graph as clues for logical reasoning. However, not all entity pairs can be connected with a path and have the correct logical reasoning paths in their graph. Thus many cases of logical reasoning cannot be covered. This paper proposes an effective architecture, SIRE, to represent intra-and inter-sentential relations in different ways. We design a new and straightforward form of logical reasoning module that can cover more logical reasoning chains. Experiments on the public datasets show SIRE outperforms the previous state-of-the-art methods. Further analysis shows that our predictions are reliable and explainable. Our code is available at https: | 10.18653/v1/2021.findings-acl.47 | [
"https://arxiv.org/pdf/2106.01709v1.pdf"
] | 235,313,883 | 2106.01709 | 0df00857af8c2d2c17b368a8008b965243322924 |
SIRE: Separate Intra-and Inter-sentential Reasoning for Document-level Relation Extraction
Shuang Zeng zengs@pku.edu.cn
The MOE Key Laboratory of Computational Linguistics
Peking University
China
School of Software and Microelectronics
Peking University
China
Yuting Wu
The MOE Key Laboratory of Computational Linguistics
Peking University
China
Wangxuan Institute of Computer Technology
Peking University
China
Baobao Chang
The MOE Key Laboratory of Computational Linguistics
Peking University
China
SIRE: Separate Intra-and Inter-sentential Reasoning for Document-level Relation Extraction
Document-level relation extraction has attracted much attention in recent years. It is usually formulated as a classification problem that predicts relations for all entity pairs in the document. However, previous works indiscriminately represent intra-and inter-sentential relations in the same way, confounding the different patterns for predicting them. Besides, they create a document graph and use paths between entities on the graph as clues for logical reasoning. However, not all entity pairs can be connected with a path and have the correct logical reasoning paths in their graph. Thus many cases of logical reasoning cannot be covered. This paper proposes an effective architecture, SIRE, to represent intra-and inter-sentential relations in different ways. We design a new and straightforward form of logical reasoning module that can cover more logical reasoning chains. Experiments on the public datasets show SIRE outperforms the previous state-of-the-art methods. Further analysis shows that our predictions are reliable and explainable. Our code is available at https:
Introduction
Relation Extraction (RE) is an important way of obtaining knowledge facts from natural language text. Many recent advancements (Sahu et al., 2019; Yao et al., 2019b; Nan et al., 2020) manage to tackle document-level relation extraction (doc-level RE), which extracts semantic relations among entities across multiple sentences. Due to its strong correlation with real-world scenarios, doc-level RE has attracted much attention in the field of information extraction.
The doc-level RE task is usually formulated as a classification problem that predicts possible relations for all entity pairs, using the information from the entire document. It has two different kinds of relations: intra-sentential relations and inter-sentential relations. We show examples of these two kinds of relations in Figure 1. When two entities have mentions co-occurring in the same sentence, they may express intra-sentential relations. Otherwise, they may express inter-sentential relations.
Figure 1: Two examples from DocRED (Yao et al., 2019b) illustrating intra- and inter-sentential relations. Sentence numbers, entity mentions, and supporting evidence involved in these relation instances are colored. Other mentions are underlined for clarity.
Previous methods do not explicitly distinguish these two kinds of relations in the design of the model and use the same method to represent them. However, from the perspective of linguistics, intra-sentential relations and inter-sentential relations are expressed in different patterns. For two intra-sentential entities, their relations are usually expressed through local patterns within their co-occurred sentences. As shown in the first example in Figure 1, (Polar Music, country of origin, Swedish) and (Wembley Arena, located in, London) can be inferred based solely on the sentences they reside in, i.e., sentences 1 and 6, respectively. Unlike intra-sentential relations, inter-sentential relations tend to be expressed through global interactions across multiple related sentences, also called supporting evidence. Moreover, cross-sentence relations usually require complex reasoning skills, e.g., logical reasoning. As shown in the second example in Figure 1, (São Paulo, continent, South America) can be inferred from the other two relation facts expressed in the document: (São Paulo, country, Brazil) and (Brazil, continent, South America). So the different patterns of intra- and inter-sentential relations show that it would be better for a model to treat them differently. However, previous works usually use the information from the whole document to represent all relations, e.g., all 13 sentences for predicting (Polar Music, country of origin, Swedish) in the first example in Figure 1. We argue that this brings useless noise from unrelated sentences that misguides the learning of relational patterns.
Besides, previous methods treat logical reasoning as a representation learning problem. They construct a document graph from the input document using entities as nodes, and the paths between two entities on these graphs, usually passing through other entities, are regarded as clues for logical reasoning. However, since not all entity pairs can be connected with a path or have the correct logical reasoning paths available on the graph, many cases of logical reasoning cannot be covered. Their methods are therefore somewhat limited, and we should consider a new form of logical reasoning to better model and cover all possible reasoning chains.
In this paper, we propose a novel architecture called Separate Intra- and inter-sentential REasoning (SIRE) for doc-level RE. Unlike previous works on this task, we introduce two different methods to represent intra- and inter-sentential relations, respectively. For an intra-sentential relation, we utilize a sentence-level encoder to represent it in every co-occurred sentence, and we obtain the final representation by aggregating the relational representations from all co-occurred sentences. This encourages intra-sentential entity pairs to focus on the local patterns in their co-occurred sentences. For an inter-sentential relation, we utilize a document-level encoder and the mention-level graph proposed by Zeng et al. (2020) to capture the document information and the interactions among entity mentions, document, and local context. Then, we apply an evidence selector to encourage inter-sentential entity pairs to selectively focus on the sentences that may signal their cross-sentence relations, i.e., finding supporting evidence. Finally, we develop a new form of logical reasoning module in which one relation instance is modeled by attentively fusing the representations of other relation instances in all possible logical chains. This form of logical reasoning can cover all possible cases of logical reasoning in the document.
Our contributions can be summarized as follows:
• We propose an effective architecture called SIRE that utilizes two different methods to represent intra-sentential and inter-sentential relations for doc-level RE.
• We come up with a new and straightforward form of logical reasoning module to cover all cases of logical reasoning chains.
We evaluate our SIRE on three public doc-level RE datasets. Experiments show SIRE outperforms the previous state-of-the-art models. Further analysis shows SIRE could produce more reliable and explainable predictions which further proves the significance of the separate encoding.
Intra-and Inter-sentential Relation Representation Module
As discussed in Sec. 1, for two intra-sentential entities, their relations are usually determined by the local patterns in their co-occurred sentences, while for two inter-sentential entities, their relations are usually expressed across multiple related sentences that can be regarded as the supporting evidence for their relations. So in this module, we utilize two different methods to represent intra-sentential and inter-sentential relations separately (see Figure 2 for the overall architecture, which uses the logical reasoning chain e_A → e_B → e_C for illustration).
Our methods encourage intra-sentential entity pairs to focus on their co-occurred sentences as much as possible and encourage inter-sentential entity pairs to selectively focus on the sentences that may express their cross-sentence relations. We use three parts to represent the relation between two entities: head entity representation, tail entity representation and context representation.
Intra-sentential Relation Representation Module
Encoding. We use a sentence-level encoder to capture the context information for intra-sentential relations and produce a contextualized word embedding for each word. Formally, we convert the i-th sentence S_i containing n_i words [w^{S_i}_1, ..., w^{S_i}_{n_i}] into vectorized word representations. For each word w in S_i, we first concatenate its word embedding with an entity type embedding and a co-reference embedding:
x = [E_w(w); E_t(t); E_c(c)]    (1)
where E_w(·), E_t(·), and E_c(·) denote the word embedding layer, entity type embedding layer, and co-reference embedding layer, respectively; t and c are the named entity type and the entity id. (The existing doc-level RE datasets annotate which mentions belong to the same entity, so each word in the document may belong to the i-th entity or to no entity; we embed this co-reference information between entity mention (surface words) and entity (an abstract concept) into the initialized representation of a word.) Then the vectorized word representations are fed into the sentence-level encoder to obtain the sentence-level context-sensitive representation for each word:
[g^{S_i}_1, \ldots, g^{S_i}_{n_i}] = f^S_{enc}([x^{S_i}_1, \ldots, x^{S_i}_{n_i}])    (2)
where f^S_{enc} denotes the sentence-level encoder, which can be any sequential encoder. We also obtain the sentence representation s^{S_i} for sentence S_i from this encoder: for an LSTM, s^{S_i} is the hidden state of the last time step; for BERT, s^{S_i} is the output representation of the special marker [CLS].
Representing. For the i-th entity pair (e_{i,h}, e_{i,t}) which expresses intra-sentential relations, where e_{i,h} is the head entity and e_{i,t} is the tail entity, their mentions co-occur in C sentences S_{co-occur} = {S_{i_1}, S_{i_2}, ..., S_{i_C}} once or many times. In the j-th co-occurred sentence S_{i_j}, we use the entity mentions in S_{i_j} to represent the head and tail entities, and we define the context representation of this relation instance in S_{i_j} as the top K words correlated with the relations of these two mentions.
Specifically, a head entity mention ranging from the s-th to the t-th word is represented as the average of the words it contains:

e^{S_{i_j}}_{i,h} = \frac{1}{t-s+1} \sum_{k=s}^{t} g^{S_{i_j}}_k

and similarly for the tail entity mention e^{S_{i_j}}_{i,t}. Then, we concatenate the representations of the head and tail entity mentions and use the result as a query to attend over all words in S_{i_j} and compute a relatedness score for each word in S_{i_j}:
s_{i,k} = \sigma((W_{intra} \cdot [e^{S_{i_j}}_{i,h}; e^{S_{i_j}}_{i,t}])^T \cdot g^{S_{i_j}}_k)    (3)
\alpha_{i,k} = \mathrm{Softmax}(s_{i,k})    (4)
where [·; ·] is the concatenation operation, W_{intra} ∈ R^{d×2d} is a parameter matrix, and σ is an activation function (e.g., ReLU). Then, we average the representations of the top K related words to represent the context information c_i for the intra-sentential entity pair (e_{i,h}, e_{i,t}) in S_{i_j}. To keep W_{intra} trainable during gradient computation, we also add a term that is the weighted average representation of all words:
c^{S_{i_j}}_i = \beta \cdot \frac{1}{K} \sum_{k \in \mathrm{topK}(\alpha_{i,*})} g^{S_{i_j}}_k + (1-\beta) \cdot \sum_{t=1}^{n_{i_j}} \alpha_{i,t} g^{S_{i_j}}_t    (5)
where β is a hyperparameter; we use 0.9 here to force the model to focus on the top K words while still considering the subtle influence of the other words.
Next, we concatenate the three parts obtained above to form the relational representation of the intra-sentential entity pair (e_{i,h}, e_{i,t}) in S_{i_j}, and further average the representations over all co-occurred sentences S_{co-occur} to get the final relation representation r_i for the intra-sentential entity pair (e_{i,h}, e_{i,t}):
r_i = \frac{1}{C} \sum_{S_{i_j} \in S_{co\text{-}occur}} [e^{S_{i_j}}_{i,h}; e^{S_{i_j}}_{i,t}; c^{S_{i_j}}_i]    (6)
This way, we could force the intra-sentential entity pairs to focus on the semantic information from their co-occurred sentences and ignore the noise information from other sentences.
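To make Eqs. 3-6 concrete, the following is a minimal PyTorch sketch of how one intra-sentential relation representation could be computed for a single co-occurred sentence. This is our own illustration rather than the authors' released code; the function name, the single-mention-per-entity assumption, and the default K value are ours (the paper reports beta = 0.9).

```python
import torch
import torch.nn.functional as F

def intra_relation_rep(g, head_span, tail_span, W_intra, K=4, beta=0.9):
    """One sentence's term of Eq. 6 for an intra-sentential entity pair.

    g:          (n, d) contextualized word vectors of one sentence (Eq. 2)
    head_span:  (s, t) inclusive word indices of the head mention
    tail_span:  (s, t) inclusive word indices of the tail mention
    W_intra:    (d, 2d) projection used to score words against the pair
    """
    e_h = g[head_span[0]:head_span[1] + 1].mean(dim=0)   # average of head-mention words
    e_t = g[tail_span[0]:tail_span[1] + 1].mean(dim=0)   # average of tail-mention words

    query = W_intra @ torch.cat([e_h, e_t])              # W_intra . [e_h; e_t], shape (d,)
    scores = F.relu(g @ query)                           # Eq. 3, ReLU as the activation
    alpha = torch.softmax(scores, dim=0)                 # Eq. 4

    topk = torch.topk(alpha, min(K, g.size(0))).indices  # indices of the top-K related words
    c = beta * g[topk].mean(dim=0) \
        + (1 - beta) * (alpha.unsqueeze(1) * g).sum(dim=0)  # Eq. 5

    return torch.cat([e_h, e_t, c])                      # [e_h; e_t; c] for this sentence
```

Averaging the returned vectors over all co-occurred sentences of the pair then yields r_i of Eq. 6.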
Inter-sentential Relation Representation Module
Encoding. According to the nature of inter-sentential relations, we use a document-level encoder to capture the global interactions for inter-sentential relations and produce a contextualized word embedding for each word. Formally, we convert a document D containing m words [w^D_1, ..., w^D_m] into vectorized word representations. As with the embedding for intra-sentential relations, we use Equation 1 to embed each word in the document. Then the vectorized word representations are fed into the document-level encoder to obtain the document-level context-sensitive representation for each word:
[g^D_1, \ldots, g^D_m] = f^D_{enc}([x^D_1, \ldots, x^D_m])    (7)
where f^D_{enc} denotes the document-level encoder. We also obtain the document representation d^D from this encoder.
To further enhance the document-level interactions, we utilize the mention-level graph (MG) proposed by Zeng et al. (2020). This MG contains two different types of nodes: mention nodes and a document node. Each mention node denotes one particular mention of an entity, and the single document node aims to model the document information. We argue that this graph only contains nodes directly concerning prediction, i.e., the mentions of the entities and the document information; it does not contain the local context information, which is crucial for the interactions among entity mentions and the document. So we introduce a new type of node, the sentence node, and its corresponding new edges to infuse the local context information into MG.
So there are four types of edges in MG:
• Intra-Entity Edge: mentions referring to the same entity are fully connected. This models the interactions among mentions of the same entity.
• Inter-Entity Edge: mentions of different entities co-occurring in the same sentence are fully connected. This models the interactions among different entities via co-occurrences of their mentions.
• Sentence-Mention Edge: each sentence node connects with all entity mentions it contains. This models the interactions between mentions and their local context information.
• Sentence-Document Edge: all sentence nodes are connected to the document node. This models the interactions between local context information and document information, acting as a bridge between mentions and the document.
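For illustration, the typed edge lists of MG can be built directly from the mention and sentence annotations. The snippet below is a hypothetical sketch (the paper's implementation uses DGL); the node-numbering scheme and the input mention format are our assumptions.

```python
from itertools import combinations

def build_mg_edges(mentions, num_sents):
    """Build typed edges of the mention-level graph (MG).

    mentions:  list of dicts like {"node": 0, "entity": 3, "sent": 5},
               one graph node per entity mention
    num_sents: number of sentences; sentence node i gets id M + i and the
               single document node gets id M + num_sents, where M = len(mentions)
    Returns a dict mapping edge type -> list of (u, v) node pairs.
    """
    M = len(mentions)
    sent_node = lambda i: M + i
    doc_node = M + num_sents
    edges = {"intra_entity": [], "inter_entity": [],
             "sent_mention": [], "sent_document": []}

    for a, b in combinations(mentions, 2):
        if a["entity"] == b["entity"]:            # same entity -> intra-entity edge
            edges["intra_entity"].append((a["node"], b["node"]))
        elif a["sent"] == b["sent"]:              # different entities, same sentence
            edges["inter_entity"].append((a["node"], b["node"]))

    for m in mentions:                            # mention <-> its sentence node
        edges["sent_mention"].append((sent_node(m["sent"]), m["node"]))

    for i in range(num_sents):                    # sentence node <-> document node
        edges["sent_document"].append((sent_node(i), doc_node))

    return edges
```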
Next, we apply Relational Graph Convolutional Network (R-GCN, Schlichtkrull et al., 2017) on MG to aggregate the features from neighbors for each node. Given node u at the l-th layer, the graph convolutional operation can be defined as:
h^{(l+1)}_u = \mathrm{ReLU}\Big( \sum_{t \in T} \sum_{v \in N^t_u \cup \{u\}} \frac{1}{c_{u,t}} W^{(l)}_t h^{(l)}_v \Big)    (8)

where T is the set of different edge types, W^{(l)}_t ∈ R^{d×d} is a trainable parameter matrix, N^t_u denotes the set of neighbors of node u connected by the t-th type of edge, and c_{u,t} = |N^t_u| is a normalization constant. We then aggregate the outputs of all R-GCN layers to form the final representation of node u:

m_u = \mathrm{ReLU}(W_u \cdot [h^{(0)}_u; h^{(1)}_u; \ldots; h^{(N)}_u])    (9)

where W_u ∈ R^{d×Nd} is a trainable parameter matrix and h^{(0)}_u is the initial representation of node u. For a mention ranging from the s-th word to the t-th word in the document, h^{(0)}_u = \frac{1}{t-s+1} \sum_{j=s}^{t} g^D_j; for the i-th sentence node, it is initialized with s^{S_i} from the sentence-level encoder; for the document node, it is initialized with d^D from the document-level encoder.
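The typed aggregation of Eq. 8 and the layer aggregation of Eq. 9 can be sketched with dense adjacency matrices as below. This is a simplified, illustrative re-implementation under our own assumptions (the paper applies R-GCN via DGL); shapes and the self-loop handling are ours.

```python
import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    """A minimal relational GCN layer in the spirit of Eq. 8 (dense adjacency sketch)."""

    def __init__(self, dim, num_edge_types):
        super().__init__()
        self.weights = nn.ModuleList(
            [nn.Linear(dim, dim, bias=False) for _ in range(num_edge_types)])

    def forward(self, h, adjs):
        # h:    (num_nodes, dim) node states at layer l
        # adjs: list of (num_nodes, num_nodes) 0/1 adjacencies, one per edge type
        out = torch.zeros_like(h)
        for W, adj in zip(self.weights, adjs):
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1)             # c_{u,t} = |N^t_u|
            adj_self = adj + torch.eye(adj.size(0), device=adj.device)  # v in N^t_u ∪ {u}
            out = out + W(adj_self @ h) / deg                           # typed mean aggregation
        return torch.relu(out)                                          # h^{(l+1)}

def aggregate_layers(h_per_layer, W_u):
    # Eq. 9: concatenate all per-layer node states and project them
    return torch.relu(W_u(torch.cat(h_per_layer, dim=-1)))
```

Here W_u would be an nn.Linear mapping the concatenated layer outputs back to the node dimension.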
Representing. We argue that inter-sentential relations can be inferred from the following information sources: 1) the head and tail entities themselves; 2) the related sentences that signal their cross-sentence relations, namely the supporting evidence; 3) reasoning information such as logical reasoning, co-reference reasoning, world knowledge, etc. We only consider the first two sources here and leave the last to Sec. 2.2.
Different from intra-sentential relations, inter-sentential relations tend to be expressed through global interactions. So for the i-th entity pair (e_{i,h}, e_{i,t}) which expresses an inter-sentential relation, the head entity representation e_{i,h} and the tail entity representation e_{i,t} are defined as the average of their mention nodes from MG:

e_i = \frac{1}{N} \sum_{j \in M(e_i)} m_j    (10)
where M(e_i) is the mention set of e_i. We then apply an evidence selector with an attention mechanism (Bahdanau et al., 2015) to encourage the inter-sentential entity pair to selectively focus on the sentences that express their cross-sentence relations. This process can be regarded as finding supporting evidence for their relations. So the context representation c_i for the inter-sentential entity pair (e_{i,h}, e_{i,t}) is the weighted average of the sentence representations from MG:
P(S_k | e_{i,h}, e_{i,t}) = \sigma(W_k \cdot [e_{i,h}; e_{i,t}; m_{S_k}])    (11)
\alpha_{i,k} = \frac{P(S_k | e_{i,h}, e_{i,t})}{\sum_l P(S_l | e_{i,h}, e_{i,t})}    (12)
c_i = \sum_{k=1}^{l} \alpha_{i,k} \cdot m_{S_k}    (13)
where W_k ∈ R^{1×2d} is a trainable parameter matrix and σ is the sigmoid function. The final relation representation for the inter-sentential entity pair (e_{i,h}, e_{i,t}) is then:

r_i = [e_{i,h}; e_{i,t}; c_i]    (14)
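The evidence selector of Eqs. 11-13 and the final representation of Eq. 14 can be sketched as below. This is our own illustration: we size the scoring vector to match the concatenated input, and the sentence vectors are assumed to be the sentence nodes m_{S_k} from MG.

```python
import torch

def inter_relation_rep(e_h, e_t, sent_nodes, w_k):
    """Inter-sentential relation representation (sketch of Eqs. 10-14).

    e_h, e_t:   (d,) head / tail entity vectors (averages of their mention nodes, Eq. 10)
    sent_nodes: (num_sents, d) sentence-node vectors from the mention-level graph
    w_k:        (3d,) scoring vector of the evidence selector
    Returns the relation representation and the per-sentence evidence scores,
    which can be inspected as supporting evidence.
    """
    pair = torch.cat([e_h, e_t])                              # (2d,)
    inputs = torch.cat([pair.expand(sent_nodes.size(0), -1),  # [e_h; e_t; m_{S_k}] per sentence
                        sent_nodes], dim=-1)
    p = torch.sigmoid(inputs @ w_k)                           # Eq. 11
    alpha = p / p.sum()                                       # Eq. 12
    c = (alpha.unsqueeze(1) * sent_nodes).sum(dim=0)          # Eq. 13
    return torch.cat([e_h, e_t, c]), alpha                    # Eq. 14 plus evidence weights
```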
Logical Reasoning Module
In this module, we focus on modeling logical reasoning. As mentioned in Sec. 1, previous works usually use the paths between each entity pair as clues for logical reasoning, and they concatenate the path representations with the entity pair representations to predict relations. However, since not all entity pairs are connected with a path or have the correct logical reasoning paths in their graph, many cases of logical reasoning cannot be covered, so their methods are somewhat limited.
In this paper, we utilize the self-attention mechanism (Vaswani et al., 2017) to model logical reasoning. Specifically, we obtain the relational representations for all entity pairs from the sections above. For the i-th entity pair (e_h, e_t), we can assume there is a two-hop logical reasoning chain e_h → e_k → e_t in the document, where e_k can be any entity in the document other than e_h and e_t. So (e_h, e_t) can attend to the relational representations of all other entity pairs, including (e_h, e_k) and (e_k, e_t), termed R_att. Finally, the weighted sum of R_att (together with r_i itself) is treated as a new relational representation for (e_h, e_t), which considers all possible two-hop logical reasoning chains in the document.
r^{new}_i = \sum_{r_k \in R_{att} \cup \{r_i\}} \gamma_k \cdot r_k    (15)
\gamma_k = \mathrm{Softmax}((W_{att} \cdot r_i)^T \cdot r_k)    (16)
where W_{att} ∈ R^{3d×3d} is a parameter matrix. In this way, the explicit paths used in previous works are converted into individual attention weights over every entity pair in the logical reasoning chains. We argue that this form of logical reasoning is simpler and more scalable because it considers all possible logical reasoning chains without connectivity constraints from a graph structure.
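Because the attention runs over the representations of all entity pairs at once, the reasoning step of Eqs. 15-16 reduces to a single matrix product. The sketch below is our simplified, single-head illustration, not the released implementation.

```python
import torch

def logical_reasoning(R, W_att):
    """Fuse every pair representation with all others (sketch of Eqs. 15-16).

    R:     (num_pairs, 3d) relation representations of all entity pairs
    W_att: (3d, 3d) attention projection
    Every pair attends over every pair (including itself), so any two-hop
    chain e_h -> e_k -> e_t is covered without building explicit paths.
    """
    queries = R @ W_att                  # (num_pairs, 3d)
    scores = queries @ R.t()             # pairwise attention logits, Eq. 16
    gamma = torch.softmax(scores, dim=-1)
    return gamma @ R                     # Eq. 15: weighted sum of all pair representations
```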
Classification Module
We formulate the doc-level RE task as a multi-label classification task:
P(r | e_{i,h}, e_{i,t}) = \mathrm{sigmoid}(W_1 \sigma(W_2 r_i + b_1) + b_2)    (17)

where W_1, W_2, b_1, b_2 are trainable parameters and σ is an activation function (e.g., ReLU). We use binary cross entropy as the objective to train SIRE:

L_{rel} = - \sum_{D \in C} \sum_{h \neq t} \sum_{r_i \in R} \big[ I(r_i = 1) \log P(r_i | e_{i,h}, e_{i,t}) + I(r_i = 0) \log (1 - P(r_i | e_{i,h}, e_{i,t})) \big]    (18)
where C denotes the whole corpus, R denotes the relation type set, and I(·) is the indicator function.
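A minimal PyTorch version of the classifier (Eq. 17) trained with binary cross entropy (Eq. 18) could look as follows; the hidden size and module layout are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class RelationClassifier(nn.Module):
    """Multi-label relation classifier with the BCE objective (sketch of Eqs. 17-18)."""

    def __init__(self, rep_dim, num_relations, hidden=512):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(rep_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_relations))
        self.loss_fn = nn.BCEWithLogitsLoss()   # sigmoid + binary cross entropy in one op

    def forward(self, pair_reps, labels=None):
        # pair_reps: (num_pairs, rep_dim); labels: (num_pairs, num_relations) in {0, 1}
        logits = self.scorer(pair_reps)
        probs = torch.sigmoid(logits)           # P(r | e_h, e_t) per relation type
        loss = self.loss_fn(logits, labels.float()) if labels is not None else None
        return probs, loss
```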
Experiments
Dataset
We evaluate our proposed model on three document-level RE datasets: DocRED (Yao et al., 2019b), CDR (Li et al., 2016), and GDA (Wu et al., 2019).
Experimental Settings
In our SIRE implementation, we use 3 layers of GCN, use ReLU as our activation function, and set the dropout rate to 0.3 and the learning rate to 0.001. We train SIRE using AdamW (Loshchilov and Hutter, 2019) as the optimizer with weight decay 0.0001, and implement SIRE under the PyTorch (Paszke et al., 2017) and DGL (Wang et al., 2019b) frameworks. We implement two settings for SIRE. SIRE-GloVe uses GloVe (100d, Pennington et al., 2014) and BiLSTM (512d, Schuster and Paliwal, 1997) as the word embedding and encoder, respectively. SIRE-BERT uses BERT-base (Devlin et al., 2019) as the encoder on DocRED and cased BioBERT-Base v1.1 as the encoder on CDR/GDA; the learning rate for BERT parameters is set to 1e-5, while the learning rate for other parameters remains 1e-3. Detailed hyperparameter settings are in the Appendix.
Baselines and Evaluation Metrics
We use the following models as our baselines. Yao et al. (2019b) propose BiLSTM (Schuster and Paliwal, 1997) as the encoder on DocRED and use the output from the encoder to represent all entity pairs to predict relations. Wang et al. (2019a) propose BERT to replace the BiLSTM as the encoder on DocRED; they also propose BERT-Two-Step, which first predicts whether two entities have a relation and then predicts the specific target relation. Tang et al. (2020) propose the hierarchical inference networks HIN-GloVe and HIN-BERT, which make full use of multi-granularity inference information, including entity-level, sentence-level, and document-level information, to infer relations. Similar to Wang et al. (2019a), Ye et al. (2020) propose a language representation model called CorefBERT as the encoder on DocRED that can capture coreferential relations in context. Nan et al. (2020) propose LSR-GloVe and LSR-BERT, which dynamically induce a latent dependency tree structure to better model the document interactions for prediction. Wang et al. (2020) propose a global-to-local network, GLRE, which encodes the document information in terms of entity global and local representations as well as context relation representations. Zeng et al. (2020) propose the graph aggregation-and-inference networks GAIN-GloVe and GAIN-BERT, which utilize two levels of graph structures, a mention-level graph and an entity-level graph, to capture document interactions and to conduct path-based logical reasoning, respectively. Verga et al. (2018) propose a self-attention encoder, BRAN, to consider interactions across mentions and relations across sentence boundaries.

Table 1: Performance on DocRED. Models above the double line do not use a pre-trained model. LR Module is the logical reasoning module; context denotes the context representations in Eq. 6 and Eq. 14; inter4intra denotes using the inter-sentential module also for intra-sentential entity pairs.
Following previous works (Yao et al., 2019b), we use F1 and Ign F1 as the evaluation metrics for the overall performance of a model. The Ign F1 metric calculates F1 excluding the relation facts shared by the training and dev/test sets. We also use intra-F1 and inter-F1 to evaluate a model's performance on intra-sentential and inter-sentential relations on the dev set.
Results
The performances of SIRE and the baseline models on the DocRED dataset are shown in Table 1. Among the models that do not use BERT encoding, SIRE outperforms the previous state-of-the-art model by 0.88/1.38 F1/Ign F1 on the test set. Among the models that use BERT encoding, SIRE outperforms the previous state-of-the-art models by 1.18/0.81 F1/Ign F1 on the test set. The improvement on Ign F1 is larger than that on F1, which shows SIRE has a stronger generalization ability on unseen relation instances. On intra-F1 and inter-F1, we observe that SIRE is better than the previous models that indiscriminately represent intra- and inter-sentential relations in the same way. This demonstrates that representing intra- and inter-sentential relations with different methods is better than representing them in the same way. The improvement on intra-F1 is greater than the improvement on inter-F1, which shows that SIRE mainly improves the performance on intra-sentential relations. The performances of SIRE and the baseline models on the CDR/GDA datasets are shown in Table 2 and are consistent with the improvements on DocRED.
Ablation Study
To further analyze SIRE, we also conduct ablation studies to illustrate the effectiveness of the different modules in SIRE. We show the results in Table 1.
1) The importance of the logical reasoning module: When we discard the logical reasoning module, the performance of SIRE-GloVe decreases by 0.41 F1 on the DocRED test set. This shows the effectiveness of our logical reasoning module, which can better model the reasoning information in the document. Moreover, it drops significantly on inter-F1 and drops fewer points on intra-F1, which shows our logical reasoning module mainly improves the performance on inter-sentential relations that usually require reasoning skills.
2) Ablation on context representations in Eq. 6 and Eq. 14: When we remove the context representations in the intra- and inter-sentential relational representations, the performance of SIRE-GloVe on the DocRED test set drops by 1.81 F1. This shows that context information (top K words for intra, evidence sentences for inter) is important for both intra- and inter-sentential relation representation.
3) Using the inter-sentential module also for intra-sentential entity pairs: In this experiment, we do not distinguish the two types of relations, using the encoding method for inter-sentential relations to encode all entity pairs, and keep the logical reasoning module unchanged. The performance of SIRE-GloVe drops by 2.66/2.13 F1/intra-F1 on the DocRED test set. This confirms the motivation that we cannot use global information to learn the local patterns of intra-sentential relations.
Reasoning Performance
Furthermore, we evaluate the reasoning ability of our model on the development set in Table 3. We use infer-F1 as the metric, which considers only two-hop positive relation instances in the dev set; it thus naturally excludes many cases that do not belong to the two-hop logical reasoning process, strengthening the evaluation of reasoning performance. As Table 3 shows, SIRE is superior to previous models in handling the two-hop logical reasoning process. Moreover, after removing the logical reasoning module, our SIRE drops significantly on infer-F1. This shows that our logical reasoning module plays a crucial role in modeling the logical reasoning process.

Case Study

Figure 3 shows prediction cases of our SIRE. In intra-sentential relations, the top 4 words related to the relations of the three entity pairs conform with our intuition: our model correctly finds the words, via Eq. 5, that trigger the relations of these entity pairs. In inter-sentential relations, the supporting evidence that the model finds, i.e., sentences 1 and 2, indeed expresses the relation between São Paulo and South America. We also conduct logical reasoning in terms of the logical reasoning chains São Paulo → other entity → South America; our SIRE focuses on the correct logical reasoning chain São Paulo → Brazil → South America. These cases show that the predictions of SIRE are explainable.
Related Work

Document-level relation extraction. Many recent efforts (Peng et al., 2017; Gupta et al., 2019; Song et al., 2018; Jia et al., 2019; Yao et al., 2019b; Wang et al., 2019a; Tang et al., 2020; Nan et al., 2020; Dai et al., 2020) manage to tackle document-level relation extraction.
Most of them use graph-based models, such as Graph Convolutional Networks (GCNs; Kipf and Welling, 2017), which have been used in many natural language processing tasks (Marcheggiani and Titov, 2017; Yao et al., 2019a). They construct a graph structure from the input document, using words, mentions, or entities as nodes and heuristic rules and semantic dependencies as edges.
They use this graph to model document information and interactions and to predict possible relations for all entity pairs. Nan et al. (2020) propose a latent structure induction method to dynamically induce the dependency tree of the document. Zeng et al. (2020) propose a double-graph-based graph aggregation-and-inference network that constructs two graphs, a mention-level graph and an entity-level graph; they use the former to capture the document information and the interactions among entity mentions and the document, and the latter to conduct path-based logical reasoning. However, these works do not explicitly distinguish intra- and inter-sentential relation instances in the design of the model and use the same way to encode them. So the most significant difference between our model and previous models is that we treat intra-sentential and inter-sentential relations differently to conform with the relational patterns needed for their prediction.
Reasoning in relation extraction. The reasoning problem has been extensively studied in the field of question answering (Dhingra et al., 2018). However, few works manage to tackle this problem in the document-level relation extraction task. Zeng et al. (2020) are the first to propose an explicit way of relational reasoning for doc-level RE, which mainly focuses on logical reasoning: they use the paths on their entity-level graph to provide clues for logical reasoning. However, since not all entity pairs are connected with a path or have the correct logical reasoning paths in their graph, their method is somewhat limited. In this work, we design a new form of logical reasoning to cover more cases of logical reasoning.
Conclusion
Intra- and inter-sentential relations are the two types of relations in doc-level RE. In this work, we propose a novel architecture, SIRE, that represents these two types of relations in different ways. We introduce a new form of logical reasoning module that models logical reasoning as self-attention among the representations of all entity pairs. Experiments show that SIRE outperforms the previous state-of-the-art methods, and detailed analysis demonstrates that our predictions are explainable. We hope this work will have a positive effect on future research toward new encoding schemas and more generalizable, explainable models.
Figure 2: The architecture of SIRE. In the mention-level graph, the number in each circle is its sentence number; mention nodes with the same color belong to the same entity, and different types of edges are drawn in different line styles. Our model uses different methods to represent intra- and inter-sentential relations and the self-attention mechanism to model the logical reasoning process; the logical reasoning chain e_A → e_B → e_C is used for illustration.
Figure 3: Cases illustrating the reliable and explainable predictions of our SIRE. Head entities, tail entities, and sentence numbers (with the scores from the evidence selector) are colored in blue, red, and green, respectively; in intra-sentential relations, words with a pink background are the top 4 words from Equation 5. The figure shows intra-sentential relation instances, e.g., "Your Disco Needs You" is a song performed by Australian recording artist and songwriter Kylie Minogue, taken from her seventh studio album Light Years (2000) (Relation: performer); "Lark Force was an Australian Army formation established in March 1941 during World War II for service in New Britain and New Ireland." (Relation: inception); "Lake Hiawatha is one of the few lakes through which Minnehaha Creek flows, and the last one before it reaches Minnehaha Falls and then the Mississippi River." (Relation: mouth of the watercourse); and an inter-sentential relation instance about IBM Research-Brazil, São Paulo, and South America (Relation: continent), with per-sentence evidence scores and the logical reasoning attention weights.
Table 2: Performance on CDR and GDA.
Model | CDR | GDA
BRAN (Verga et al., 2018) | 62.1 | -
EoG (Christopoulou et al., 2019) | 63.6 | 81.5
LSR (Nan et al., 2020) | 64.8 | 82.2
GLRE-BioBERT | 68.5 | -
SIRE-BioBERT | 70.8 | 84.7
Table 3: Infer-F1 results on the dev set of DocRED. P: Precision, R: Recall.
For those words not belonging to any entity, we introduce None entity type and id.
If a head entity is mentioned N times in a sentence, we will get N intra-sentential relational representations for each of the other tail entities in this sentence.
Note that we remove the mention-document edges of the original MG and substitute them by introducing mention-sentence and sentence-document edges.
This can be scaled to multi-hop logical reasoning by increasing the number of self-attention layers. We only consider two-hop logical reasoning in this paper, following prior work.
Acknowledgments
The authors would like to thank the reviewers for their thoughtful and constructive comments.
A Hyperparameter settings
We use the development set to manually tune the optimal hyperparameters for SIRE, based on the Ign F1 score. Experiments are run on an NVIDIA RTX-3090 24GB GPU. Hyperparameter settings for SIRE-GloVe and SIRE-BERT on DocRED are listed in Table 4 and Table 5, respectively. The values of the hyperparameters we finally adopted are in bold. Note that we do not tune all the hyperparameters.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015.
Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou. 2019. Connecting the dots: Document-level neural relation extraction with edge-oriented graphs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4925-4936.
Damai Dai, Jing Ren, Shuang Zeng, Baobao Chang, and Zhifang Sui. 2020. Coarse-to-fine entity representations for document-level relation extraction. Computing Research Repository, arXiv:2012.02507.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171-4186.
Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2018. Neural models for reasoning over multiple mentions using coreference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 42-48.
Pankaj Gupta, Subburam Rajaram, Hinrich Schütze, and Thomas A. Runkler. 2019. Neural relation extraction within and across sentence boundaries. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, pages 6513-6520.
Robin Jia, Cliff Wong, and Hoifung Poon. 2019. Document-level n-ary relation extraction with multiscale representation learning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 3693-3704.
Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017.
Jiao Li, Yueping Sun, Robin J. Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J. Mattingly, Thomas C. Wiegers, and Zhiyong Lu. 2016. BioCreative V CDR task corpus: a resource for chemical disease relation extraction. Database, 2016.
Sennan Liu, Shuang Zeng, and Sujian Li. 2020. Evaluating text coherence at sentence and paragraph levels. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 1695-1703. European Language Resources Association.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations, ICLR 2019.
Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506-1515.
Guoshun Nan, Zhijiang Guo, Ivan Sekulic, and Wei Lu. 2020. Reasoning with latent structure refinement for document-level relation extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1546-1557.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS-W.
Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph LSTMs. Transactions of the Association for Computational Linguistics, 5:101-115.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Chris Quirk and Hoifung Poon. 2017. Distant supervision for relation extraction beyond the sentence boundary. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1171-1182.
Sunil Kumar Sahu, Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou. 2019. Inter-sentence relation extraction with document-level graph convolutional neural network. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4309-4316.
Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2017. Modeling relational data with graph convolutional networks. Computing Research Repository, arXiv:1703.06103.
M. Schuster and K. K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.
Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. N-ary relation extraction using graph-state LSTM. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2226-2235.
Hengzhu Tang, Yanan Cao, Zhenyu Zhang, Jiangxia Cao, Fang Fang, Shi Wang, and Pengfei Yin. 2020. HIN: Hierarchical inference network for document-level relation extraction. In Advances in Knowledge Discovery and Data Mining - 24th Pacific-Asia Conference, PAKDD 2020, volume 12084 of Lecture Notes in Computer Science, pages 197-209.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998-6008.
Patrick Verga, Emma Strubell, and Andrew McCallum. 2018. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 872-884.
Difeng Wang, Wei Hu, Ermei Cao, and Weijian Sun. 2020. Global-to-local neural networks for document-level relation extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3711-3721.
Hong Wang, Christfried Focke, Rob Sylvester, Nilesh Mishra, and William Wang. 2019a. Fine-tune BERT for DocRED with two-step process. Computing Research Repository, arXiv:1909.11898.
Minjie Wang, Lingfan Yu, Da Zheng, Quan Gan, Yu Gai, Zihao Ye, Mufei Li, Jinjing Zhou, Qi Huang, Chao Ma, Ziyue Huang, Qipeng Guo, Hao Zhang, Haibin Lin, Junbo Zhao, Jinyang Li, Alexander J. Smola, and Zheng Zhang. 2019b. Deep graph library: Towards efficient and scalable deep learning on graphs. ICLR Workshop on Representation Learning on Graphs and Manifolds.
Ye Wu, Ruibang Luo, Henry C. M. Leung, Hing-Fung Ting, and Tak Wah Lam. 2019. RENET: A deep learning approach for extracting gene-disease associations from literature. In Research in Computational Molecular Biology - 23rd Annual International Conference, RECOMB 2019, volume 11467, pages 272-284.
Liang Yao, Chengsheng Mao, and Yuan Luo. 2019a. Graph convolutional networks for text classification. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, pages 7370-7377.
Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019b. DocRED: A large-scale document-level relation extraction dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 764-777.
Deming Ye, Yankai Lin, Jiaju Du, Zhenghao Liu, Peng Li, Maosong Sun, and Zhiyuan Liu. 2020. Coreferential reasoning learning for language representation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7170-7186.
Shuang Zeng, Runxin Xu, Baobao Chang, and Lei Li. 2020. Double graph based reasoning for document-level relation extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1630-1640.
| [] |
[
"Unsupervised Paraphrasing with Pretrained Language Models",
"Unsupervised Paraphrasing with Pretrained Language Models"
] | [
"Tong Niu tniu@salesforce.com \nSalesforce Research\n\n",
"Semih Yavuz syavuz@salesforce.com \nSalesforce Research\n\n",
"Yingbo Zhou yingbo.zhou@salesforce.com \nSalesforce Research\n\n",
"Nitish Shirish Keskar nkeskar@salesforce.com \nSalesforce Research\n\n",
"Huan Wang huan.wang@salesforce.com \nSalesforce Research\n\n",
"Caiming Xiong cxiong@salesforce.com \nSalesforce Research\n\n"
] | [
"Salesforce Research\n",
"Salesforce Research\n",
"Salesforce Research\n",
"Salesforce Research\n",
"Salesforce Research\n",
"Salesforce Research\n"
] | [
"Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing"
] | Paraphrase generation has benefited extensively from recent progress in the designing of training objectives and model architectures. However, previous explorations have largely focused on supervised methods, which require a large amount of labeled data that is costly to collect. To address this drawback, we adopt a transfer learning approach and propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting. Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking (DB). To enforce a surface form dissimilar from the input, whenever the language model emits a token contained in the source sequence, DB prevents the model from outputting the subsequent source token for the next generation step. We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair (QQP) and the ParaNMT datasets and is robust to domain shift between the two datasets of distinct distributions. We also demonstrate that our model transfers to paraphrasing in other languages without any additional finetuning. | 10.18653/v1/2021.emnlp-main.417 | [
"https://www.aclanthology.org/2021.emnlp-main.417.pdf"
] | 237,497,412 | 2010.12885 | d83066e0a91d88890a59a6f502ccdcac15e6ff06 |
Unsupervised Paraphrasing with Pretrained Language Models
Association for Computational Linguistics. Copyright Association for Computational Linguistics. November 7-11, 2021.
Tong Niu tniu@salesforce.com
Salesforce Research
Semih Yavuz syavuz@salesforce.com
Salesforce Research
Yingbo Zhou yingbo.zhou@salesforce.com
Salesforce Research
Nitish Shirish Keskar nkeskar@salesforce.com
Salesforce Research
Huan Wang huan.wang@salesforce.com
Salesforce Research
Caiming Xiong cxiong@salesforce.com
Salesforce Research
Unsupervised Paraphrasing with Pretrained Language Models
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
The 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. November 7-11, 2021.
Paraphrase generation has benefited extensively from recent progress in the designing of training objectives and model architectures. However, previous explorations have largely focused on supervised methods, which require a large amount of labeled data that is costly to collect. To address this drawback, we adopt a transfer learning approach and propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting. Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking (DB). To enforce a surface form dissimilar from the input, whenever the language model emits a token contained in the source sequence, DB prevents the model from outputting the subsequent source token for the next generation step. We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair (QQP) and the ParaNMT datasets and is robust to domain shift between the two datasets of distinct distributions. We also demonstrate that our model transfers to paraphrasing in other languages without any additional finetuning.
Introduction
Paraphrase generation restates text input in a different surface form while preserving its semantics. It has various applications on downstream NLP tasks, including text summarization (Cao et al., 2016), semantic parsing (Berant and Liang, 2014), as well as diversifying text generation for user-facing systems such as chatbots. To evaluate model robustness, a paraphraser can be used to generate adversarial examples, which also serve as augmented data to train the target neural networks (Iyyer et al., 2018a). Besides, paraphrasing queries makes Question Answering systems more likely to match with keywords in a knowledge base (Fader et al., 2014; Yin et al., 2015).

Figure 1: Training pipeline of our paraphrasing model. We first train a task-adapted model with a denoising objective so that it is able to reconstruct input text. We then use Dynamic Blocking (DB) to generate pseudo-pairs of paraphrasing data. Finally, the generated data is used to train the self-supervised model. (The diagram shows the flow: task-adapted model → generate with Dynamic Blocking → self-supervision data → self-supervised training → self-supervised model.)

However, it is expensive to annotate paraphrases, resulting in only a few human-labeled datasets. The existing ones are either small-scale like MRPC (Dolan and Brockett, 2005), or of closed domains like QQP, which consists entirely of questions. Consequently, previous work explored automatically (hence noisily) annotated datasets such as PIT-2015 (Xu et al., 2013), the Twitter URL Paraphrase Corpus (Lan et al., 2017), ParaNMT (Wieting and Gimpel, 2018), and ParaBank (Hu et al., 2019), or re-purposed datasets including MSCOCO (Lin et al., 2014) and WikiAnswers (Fader et al., 2013). The scarcity of high-quality datasets motivates us to consider unsupervised alternatives. In this work, we explore a transfer learning approach, which leverages unsupervised large-scale pretrained models like T5 (Raffel et al., 2019) and BART (Lewis et al., 2019).
The effectiveness of BERT-score (Zhang et al., 2019) in identifying text similarity hints that pretrained language models are equipped with extensive knowledge in paraphrasing. This knowledge may be attributed to the fact that text spans sharing similar context usually stay semantically close together, word embeddings (Mikolov et al., 2013) being a classic example. In other words, the paraphrasing capability of language models stems from the strong correlation between context and semantic similarity. In this work, we use pre-trained autoregressive LMs to leverage such implicit knowledge for paraphrasing in an unsupervised setting.
Through automatic and human evaluations, we demonstrate that our approach outperforms previous models (including supervised, in-domain models and the ground-truth targets) on both the QQP and ParaNMT datasets and incurs no performance loss under domain shift (i.e., finetuned on QQP and evaluated on ParaNMT, and vice versa). For automatic evaluation, we propose a reference-independent metric named BERT-iBLEU, which is a harmonic mean of BERT-score and one minus self-BLEU. We show that this new metric correlates significantly better with human evaluation than traditional metrics. On the qualitative side, we illustrate with concrete examples that our model generates paraphrases that exhibit diverse syntactic structures. Finally, we observe that our model can generate paraphrases in other languages without any additional training. We will release all code.

Our contributions are: (1) a training pipeline that leads to a strong, unsupervised paraphrasing model; (2) a novel decoding algorithm that effectively diversifies paraphrase generation; (3) a new automatic metric that evaluates paraphrasing quality more accurately.

Model

Figure 1 shows the training pipeline of our paraphrasing model, which consists of three key components, namely task-adaptation, self-supervision, and Dynamic Blocking. Overall, we decode the task-adapted model with Dynamic Blocking to generate self-supervision data, which is in turn used to train the final model.

Task-Adaptation
Inspired by Gururangan et al. (2020), we apply task-adaptive training on the target dataset, treating its training set as a non-parallel collection of sentences. We perform task-adaptation by reconstructing the original sequence from its corrupted version with a denoising auto-encoder objective. Unlike previous work (Devlin et al., 2019; Lewis et al., 2019), we do not corrupt inputs with masks, but rather directly remove the corrupted tokens. This is to avoid the pretrain-finetune discrepancy in denoising autoencoding models (Yang et al., 2019). After the deletions, we randomly shuffle all remaining tokens to encourage the model to learn different alignments for better syntactic diversity. 3 Note that we perform both deletions and shuffling on the word level, similar to the whole-word masking introduced in later versions of BERT (Devlin et al., 2019). To demonstrate the benefit of our corruption strategy, we present ablation results in Section 4.3 where we either add masks or omit shuffling.
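As a concrete illustration, the corruption step can be sketched as follows; the exact deletion ratio and the whitespace tokenization are assumptions made for illustration, since they are not fixed in the text above.

```python
import random

def corrupt(sentence: str, delete_ratio: float = 0.2) -> str:
    """Word-level corruption for task-adaptation: delete a fraction of the
    words outright (no MASK tokens are inserted), then shuffle the rest."""
    words = sentence.split()
    num_delete = int(len(words) * delete_ratio)
    delete_idx = set(random.sample(range(len(words)), num_delete))
    remaining = [w for i, w in enumerate(words) if i not in delete_idx]
    random.shuffle(remaining)  # shuffling encourages learning new alignments
    return " ".join(remaining)

# The denoising training pair maps the corrupted sentence back to the original:
original = "I want to lose weight in a healthy way ."
training_pair = (corrupt(original), original)
```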
Dynamic Blocking
Unlike previous diversity-promoting work, which mainly focuses on the target side and encourages dissimilarity among beams (Vijayakumar et al., 2018; Kumar et al., 2019; Holtzman et al., 2020), Dynamic Blocking takes the source input into account to guide the model toward generating in a different surface form (Figure 2). As illustrated in Algorithm 1, we represent the source sequence $S$ as a list of tokens $S = (S_0, S_1, \dots, S_M)$ and similarly the generated sequence as $G = (G_0, G_1, \dots, G_N)$. Suppose that during generation, the model emits $G_j$ that is identical to some $S_i$ (it is not necessary that $i = j$). Then for the next generation step $G_{j+1}$, the algorithm forbids the model to generate $S_{i+1}$. Note that we block $S_{i+1}$ for only one step. After $G_{j+1}$ is generated, we perform a different blocking for $G_{j+2}$ iff $G_{j+1} \in S$.

Figure 2: Illustration of the Dynamic Blocking algorithm on real outputs. The algorithm first constructs a full block dictionary based on the input, which maps each token to its immediate successor to be blocked, and then samples from this dictionary to build multiple active block dictionaries, each used for generating a distinct paraphrase. When establishing an active dictionary, each entry in the full dictionary has a probability of p to be sampled. During generation, the blocking takes place whenever an item in the active dictionary is triggered.
Algorithm 1: Dynamic Blocking
Input: a source sequence $S = (S_0, S_1, \dots, S_M)$ and $G_0 = \mathrm{BOS}$ to start the decoding process
1: Initialize $j \leftarrow 0$
2: while $G_j \neq \mathrm{EOS}$ do
3:   if $G_j = S_i \in S$ for some $i$ then
4:     $P(G_{j+1} = S_{i+1} \mid S, (G_0, G_1, \dots, G_j)) \leftarrow 0$
5:   end if
6:   Generate $G_{j+1}$
7:   $j \leftarrow j + 1$
8: end while
Output: $G = (G_0, G_1, \dots, G_N)$
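A minimal greedy-decoding sketch of Algorithm 1 in Python is given below. The function `next_token_distribution` is a stand-in for the actual autoregressive model (it should return next-token probabilities given the source and the prefix generated so far); the sampling of active block dictionaries shown in Figure 2 is sketched separately after the next paragraph.

```python
def dynamic_blocking_decode(source_tokens, next_token_distribution,
                            bos="<s>", eos="</s>", max_len=64):
    """Greedy decoding with Dynamic Blocking: whenever the last emitted token
    also occurs in the source, its immediate successor(s) in the source are
    forbidden for exactly one generation step."""
    generated = [bos]
    while generated[-1] != eos and len(generated) < max_len:
        probs = dict(next_token_distribution(source_tokens, generated))
        last = generated[-1]
        if last in source_tokens:
            # successors of every occurrence of `last` in the source
            blocked = {source_tokens[i + 1]
                       for i, tok in enumerate(source_tokens[:-1]) if tok == last}
            for tok in blocked:
                probs[tok] = 0.0  # block for this single step only
        generated.append(max(probs, key=probs.get))
    return generated
```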
The motivation for blocking for only one generation step is to allow the possibility of pure syntactic variation of the original sequence, meaning that all tokens are kept but their order is permuted. For example, let us consider a decoding algorithm that completely prevents the model from generating a source token at all generation steps, a popular n-gram blocking strategy we call Static Blocking. Suppose that we intend to paraphrase "I like apples and oranges." as "I like oranges and apples.". This is a valid paraphrase, but if we completely block the word "apples" at all generation steps, it will be impossible to arrive at this paraphrase. However, with Dynamic Blocking the model will still be able to generate the word "apples" later on, even though this word has been temporarily blocked for one step after "and" is generated. As shown in Figure 2, Dynamic Blocking builds a block dictionary which maps each token in the source sequence to its immediate successor. We then sample from this dictionary with a probability p for each entry. This hyperparameter controls how different we want the paraphrase to be from the source input. In the two extreme cases: when p = 0.0, the model does not block any tokens and most likely copies the source sequence through; when p = 1.0, the model always blocks the immediate next token, leading to a drastically different surface form. In this work, we take the middle ground and set p = 0.5 so that, for each potential blocking action, roughly half of the sampled dictionaries take that path. Note that if a word is tokenized into several subwords, only the first subword is allowed to be blocked.
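The full and active block dictionaries described above can be sketched as follows; subword handling (blocking only the first subword of a word) is omitted for brevity.

```python
import random

def full_block_dictionary(source_tokens):
    """Map each source token to the set of its immediate successors."""
    block_dict = {}
    for cur, nxt in zip(source_tokens, source_tokens[1:]):
        block_dict.setdefault(cur, set()).add(nxt)
    return block_dict

def sample_active_dictionary(full_dict, p=0.5):
    """Keep each (token -> successor) entry with probability p; every sampled
    dictionary is used to generate a distinct paraphrase candidate."""
    active = {}
    for cur, successors in full_dict.items():
        kept = {s for s in successors if random.random() < p}
        if kept:
            active[cur] = kept
    return active
```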
We sample multiple block dictionaries to ensure diversity among candidates, while leveraging beam search to ensure coherence. For each sampled block dictionary, we use beam search to generate four candidates and keep the top-ranked two. It is beneficial to combine the two decoding methods because beam search helps to weed out ungrammatical or semantically invalid candidates. 4 Note that we only adopt bi-gram blocking because it subsumes all higher-order n-gram blocking. Consider, e.g., a tri-gram blocking entry ab → c in the block dictionary. If this entry is triggered, then the bi-gram blocking entry b → c will also have been triggered. Hence we found it unnecessary to include higher-order n-grams.
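Putting the pieces together, candidate generation can be sketched as below, reusing the helpers from the previous sketch. The constrained decoding routine is passed in as a callable because its implementation depends on the underlying model, and the number of sampled dictionaries is an illustrative assumption; the "four beams, keep two" setting follows the text above.

```python
def generate_candidates(source_tokens, constrained_beam_search,
                        num_dicts=5, p=0.5, num_beams=4, keep=2):
    """`constrained_beam_search(source_tokens, active_dict, num_beams)` is a
    stand-in for beam search whose next-token logits are masked according to
    the active block dictionary; it should return candidates sorted by score."""
    full_dict = full_block_dictionary(source_tokens)
    candidates = []
    for _ in range(num_dicts):
        active = sample_active_dictionary(full_dict, p=p)
        beams = constrained_beam_search(source_tokens, active, num_beams)
        candidates.extend(beams[:keep])  # keep the top-ranked two of four beams
    return candidates
```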
Self-Supervision
To help the model internalize patterns learned from task-adaptation, we pseudo-label the training set (Siddhant et al., 2020) by decoding the task-adapted model with Dynamic Blocking. Having obtained the self-supervision data, we discard the task-adapted model and start from the pretrained language model to avoid catastrophic forgetting (Chronopoulou et al., 2019; Chen et al., 2020). We also include reversed data (i.e., swapping source and target): during task-adaptation the target is always longer than the input, and including reversed data helps to offset this sequence-length bias.
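A sketch of the self-supervision data construction follows; `paraphrase` stands in for decoding the task-adapted model with Dynamic Blocking and keeping the top candidate.

```python
def build_self_supervision_pairs(sentences, paraphrase):
    """Pseudo-label non-parallel sentences and add reversed pairs to offset
    the length bias introduced during task-adaptation."""
    pairs = []
    for src in sentences:
        tgt = paraphrase(src)      # task-adapted model + Dynamic Blocking
        pairs.append((src, tgt))
        pairs.append((tgt, src))   # reversed pair
    return pairs
```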
Experimental Setup
BERT-iBLEU
To evaluate paraphrasing quality, we propose a new metric named BERT-iBLEU, which encourages semantic closeness while penalizing surface-form similarity. For semantic closeness we use the unsupervised metric BERT-score (Zhang et al., 2019), which leverages a pretrained language model to compute the cosine similarity between each token in the candidate and each token in the reference using contextual embeddings. 5 To ensure that the key information (often conveyed through relatively rare words) is retained in the paraphrase, we apply IDF re-weighting to each token. 6 To measure surface-form dissimilarity, we use one minus self-BLEU, where self-BLEU is the BLEU score between the source and the candidate. Hence BERT-iBLEU (where "i" stands for inverse) is a weighted harmonic mean of BERT-score and one minus self-BLEU.
$$\mathrm{BERT\text{-}iBLEU} = \left( \frac{\beta \cdot \mathrm{BERT\text{-}score}^{-1} + 1.0 \cdot (1 - \mathrm{self\text{-}BLEU})^{-1}}{\beta + 1.0} \right)^{-1}$$
$$\mathrm{self\text{-}BLEU} = \mathrm{BLEU}(\mathrm{source}, \mathrm{candidate})$$
As an extreme case, though copying the input through leads to a perfect BERT-score, 1 − self-BLEU = 0 and hence BERT-iBLEU = 0. This is the reason that we do not use BERT-score directly to evaluate paraphrases. β is used to control the relative importance between semantic similarity and surface-form dissimilarity. In our experiments we set β = 4.0 to scale up BERT-score so that it has a range similar to that of self-BLEU. Note that because BERT-iBLEU is reference-independent, it serves both as a metric to evaluate paraphrasing quality and as a criterion to re-rank generated candidates during task-adaptation and self-supervision.

4 For more details on Dynamic Blocking, please refer to Appendix D.

5 In early experiments we tried another unsupervised metric, Universal Sentence Encoder (Cer et al., 2018), as well as supervised metrics including RUSE (Shimanaka et al., 2018), Sentence-BERT (Reimers and Gurevych, 2019), and BLEURT (Sellam et al., 2020). We observed that BERT-score worked better at evaluating semantic similarity than these metrics.

6 We use the BookCorpus dataset (Zhu et al., 2015) to compute the IDF weights.
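The metric itself reduces to a few lines once BERT-score (with IDF re-weighting) and self-BLEU have been computed; how those two quantities are obtained (e.g., via the bert-score and sacrebleu packages) is left outside this sketch.

```python
def bert_ibleu(bert_score: float, self_bleu: float, beta: float = 4.0) -> float:
    """Weighted harmonic mean of BERT-score and (1 - self-BLEU).
    Both arguments are expected on a 0-1 scale; a candidate that copies the
    source has self_bleu = 1 and therefore a BERT-iBLEU of 0."""
    if self_bleu >= 1.0:
        return 0.0
    return (beta + 1.0) / (beta / bert_score + 1.0 / (1.0 - self_bleu))
```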
Dataset
We evaluate on the Quora Question Pairs (QQP) and ParaNMT datasets. QQP contains 140K question pairs that are marked as duplicates of each other and 640K non-parallel questions. The sizes of the dev and test sets are 3K and 20K, respectively. The ParaNMT dataset was constructed by back-translating the Czech sentences of the CzEng dataset (Bojar et al., 2016). We directly obtained the test set of SOW-REAP from the authors of Goyal and Durrett (2020). To match the size of their training set, for task-adaptation we sample 350K non-parallel sentences from ParaNMT-5M, while to generate self-supervision data we sample 350K sentences from the same corpus as inputs. We filter out any sentences that appear in SOW-REAP's test set to avoid training on test examples.
Reproduction of Previous Models
For the experiments on QQP we reproduce the supervised Transformer with the pre-trained T5-base model, which is a stronger setting than the usual one where the paraphraser is trained from scratch. We also reproduce the model from Hegde and Patil (2020), which we refer to as CorruptLM. This model is similar to our task-adaptation phase (Section 2.1), except that they corrupt the inputs by removing all stop words rather than a fixed percentage of arbitrary words. 7 Instead of GPT-2 as used in their work, we use BART, which shows stronger results on downstream tasks. The rest of the settings remain the same. 8 For the experiments on ParaNMT we use the SOW-REAP model released by Goyal and Durrett (2020). 9
Automatic Evaluation
To evaluate paraphrasing quality, we follow Li et al. (2019) and report iBLEU (Sun and Zhou, 2012), BLEU (Papineni et al., 2002), and ROUGE (Lin, 2004) on QQP, and report BLEU and ROUGE on ParaNMT. Following Goyal and Durrett (2020), for ParaNMT both BLEU and ROUGE are calculated by first selecting the candidate that achieves the best sentence-level score against the ground truth, and then computing the corpus-level score over all these selected candidates. We use py-rouge 10 to compute ROUGE and the Datasets library from HuggingFace 11 to compute BLEU. We also report BERT-iBLEU for the models we reproduced.
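A sketch of this oracle-style scoring protocol is shown below (best candidate per input by sentence-level BLEU, then corpus-level BLEU over the selections). We call sacrebleu directly here for illustration, whereas the paper's reported numbers come from the HuggingFace Datasets wrapper and py-rouge.

```python
import sacrebleu

def oracle_corpus_bleu(candidate_lists, references):
    """candidate_lists[i] holds the generated candidates for input i;
    references[i] is the corresponding ground-truth paraphrase."""
    selected = []
    for cands, ref in zip(candidate_lists, references):
        best = max(cands, key=lambda c: sacrebleu.sentence_bleu(c, [ref]).score)
        selected.append(best)
    return sacrebleu.corpus_bleu(selected, [references]).score
```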
Human Evaluation
We conduct human evaluations on MTurk. 12 For each experiment, we compare our model with the strongest models reported in both the supervised and unsupervised settings. On QQP, we compare with the supervised Transformer, the unsupervised CorruptLM, and the ground truth. On ParaNMT, we compare with SOW-REAP and the ground truth. To construct holistic human studies, we opt for both head-to-head binary comparison and Likert-scale scoring. The former provides straightforward results on which model is stronger, while the latter is used to consolidate their relative positions. We only worked with annotators who had completed more than 10K assignments, had an approval rate of > 98%, and resided in the US. We also required that the annotators be native English speakers. When comparing two model outputs based on the same input, we asked the annotators to identify which paraphrase they prefer in terms of overall quality. 13 For each experiment, we randomly sampled 200 examples from the QQP or ParaNMT test set and shuffled the order of each example to anonymize the model identities. Each assignment was scored by two annotators.

7 Because the original paper did not provide the source of the stop words, we extract the first 252 words from The Corpus of Contemporary American English (Davies, 2010) to match the number.

8 To encourage the model to output new words in the reconstructed sentence, CorruptLM starts by randomly replacing 20% of the words in the source sequence with synonyms using Syn-net (Miller, 1998) (also applied during inference).

9 https://github.com/tagoyal/sow-reap-paraphrasing/

10 https://pypi.org/project/py-rouge/

11 https://huggingface.co/metrics/sacrebleu

12 Screenshots of the interfaces used by our MTurk studies are presented in Appendix F.

Results

Human Evaluation

Tables 1 and 2 present human evaluation results for our final model compared with other baselines. On QQP our model outperforms both Transformer and CorruptLM. Recall that CorruptLM also leverages a pre-trained language model; this indicates the effectiveness of our training pipeline when holding the LM factor constant. On ParaNMT our model outperforms SOW-REAP in both head-to-head and Likert-based evaluations. Moreover, our model outperforms the ground truth on both datasets. For ParaNMT, the result indicates that our approach also outperforms a supervised round-trip translation baseline, since that is how the ParaNMT data was generated in the first place. For QQP, we note two reasons why these scores do not indicate that our model can generate paraphrases of human-level quality. First, QQP is human-labeled, not human-generated. Second, QQP annotates duplicate questions rather than paraphrases: questions that refer to the same topic but are not semantically equivalent may still be marked as duplicates. 14

We use Cohen's Kappa to evaluate the inter-annotator agreement. For head-to-head evaluations, we obtained kappa = 0.35, indicating fair agreement. Note that when calculating kappa, we leave out all cases where either of the two annotators gives a "tie", because this usually signifies that they are unsure about which paraphrase is better.
Advantage of the Proposed Metric
To facilitate a better understanding of the automatic evaluation results, we investigate how each of the automatic metrics correlates with human evaluation. Table 3 shows that BERT-iBLEU agrees significantly better with human perceptions. The reason that BLEU does not correlate well with human evaluation is that there are two conflicting objectives. The first comes from keeping the important information, such as named entities, which should be copied verbatim, while the second comes from using different wordings to express the same semantics: the better the model is at this, the lower the BLEU becomes. For a model good at both, the gain in BLEU from matching key entities and the loss from using different wordings cancel each other out, preventing BLEU from faithfully evaluating the paraphrasing quality. Consequently, BLEU is only useful for checking extreme cases: very low or very high BLEU usually signals bad paraphrases, but for middle-ground cases BLEU alone is less indicative. A similar argument holds for ROUGE. In contrast, BERT-score encourages the first objective and is not penalized by the second. However, parroting the input will still fool BERT-score alone. Hence we pair it with self-BLEU to encourage surface-form diversity.
Automatic Evaluation
On QQP, our model outperforms both the supervised Transformer and the unsupervised CorruptLM on BERT-iBLEU (Table 4). 15 Recall that both Transformer and CorruptLM leverage a strong pretrained language model, indicating that the performance gain stems mainly from our proposed pipeline rather than from the language model itself. On ParaNMT, our model outperforms the supervised SOW-REAP (Table 5). 16 As ablation studies on task-adaptation and self-supervision, we can see in Tables 4 and 5 that our model (TA+SS+DB) beats the one that is either task-adapted only (TA) or self-supervised but decoded without DB (TA+SS), showing that both self-supervision and Dynamic Blocking are crucial to paraphrasing quality. On the traditional metrics in Table 4, our models also obtain results competitive with the supervised models. However, as we move down to the last row, we see that Copy-input achieves state-of-the-art results on all metrics except BERT-iBLEU, indicating that iBLEU, BLEU, and ROUGE scores are not reliable for evaluating paraphrasing quality. 17 In contrast, our best model on BERT-iBLEU (TA+SS+DB) achieves much lower iBLEU and BLEU scores than the other models, showing the inconsistency between these traditional metrics and human evaluation. We also note one special aspect of Table 5 to make it easier to interpret. Unlike on QQP, the performance of Copy-input on ParaNMT is the lowest among all models. However, we need to take this comparison with a grain of salt because all the other results are based on 10 candidates of which only the ones with the highest sentence-level scores are retained, whereas Copy-input has only one candidate. Thus Copy-input and the other results are not directly comparable. In addition, SOW-REAP filters the dataset to only include syntactically diverse targets and then splits it into the train, dev, and test sets, which makes Copy-input less effective.

15 We tried combining the supervised Transformer with DB and obtained a BERT-iBLEU of 80.1 on QQP, indicating that DB itself is an effective diversity-promoting decoding strategy.
Robustness to Domain Shift
On the ParaNMT dataset, we notice that CorruptLM, when finetuned on non-parallel QQP, achieves much worse results than the other models (the CorruptLM (QQP) row in Table 5), indicating that it is less robust to domain shift. In contrast, our model achieves similar results compared to the in-domain one under the same setting (the TA+SS+DB (QQP) row). Conversely, we also finetune our model on non-parallel ParaNMT and evaluate on QQP (the TA+SS+DB (ParaNMT) row in Table 4). We observe that this model again achieves performance similar to that of the in-domain model. These results show that our model may be able to perform task-adaptation on an arbitrary out-of-domain corpus and still work well on the target domain.
Ablation Studies on Corruption Strategies
During task-adaptation, our corruption strategies involve both deletions and shuffling. In Table 6 we provide ablation study results where we either replace words with masks instead of deleting them or delete words without shuffling. We can see that our delete-and-shuffle strategy achieves the best BERT-iBLEU score among the three settings.
           | AddMask | NoShuffle | Delete-Shuffle
BERT-iBLEU | 80.7    | 81.7      | 83.1

Table 6: Ablation studies on different corruption strategies for task-adaptation on QQP. AddMask stands for the strategy where corrupted words are replaced with MASK tokens; NoShuffle corresponds to "no shuffling" after sentence corruption.
Analysis
Syntactic Diversity
In Table 7, we qualitatively demonstrate paraphrases generated by our model that exhibit syntactic structure variance. Unlike previous work relying on explicit syntactic scaffolding (Goyal and Durrett, 2020), our model achieves syntactic diversity "for free" from shuffling during task-adaptation. 18
Generalization to Other Languages
Dynamic Blocking on BART without Finetuning  Though we focus on T5 throughout the paper, we do note a unique ability of BART: it can directly work with Dynamic Blocking to generate paraphrases (i.e., without domain-adaptation and self-supervision), though of lower quality than the self-supervised model. We demonstrate such examples in Appendix D.
Paraphrasing in Other Languages  We observe that although BART is trained almost exclusively on English text, it is able to paraphrase in multiple other languages. We adopt the aforementioned BART setting and present an example in German (Table 13 in Appendix E). To the best of our knowledge, this is the first unsupervised model that can paraphrase in a non-English language. The reason behind this observation is twofold. First, although BART was trained on English corpora, a small portion of the content is in German due to mislabeled language identification, allowing the model to observe German data; second, previous work has shown that large-scale language models are able to perform zero-shot cross-lingual transfer on a variety of downstream classification tasks, such as Named Entity Recognition (Moon et al., 2019), Natural Language Inference, and Document Classification (Artetxe and Schwenk, 2019). Our work hence demonstrates that it is possible to perform such a transfer even for generative tasks like paraphrasing. We also hypothesize that the paraphrasing quality should improve if we apply our training pipeline to mBART or mT5 (Xue et al., 2020). We leave this as future work.
Related Work
Paraphrase generation has been a long-standing task with several applications in downstream NLP, including text summarization (Cao et al., 2016), semantic parsing (Berant and Liang, 2014), and question answering (Yu et al., 2018). Early work on paraphrase generation mostly relied on rule-based or statistical machine translation systems (McKeown, 1980; Meteer and Shaked, 1988; Bannard and Callison-Burch, 2005).
Supervised Approaches  Neural sequence-to-sequence (Seq2Seq) models have been used to address this task (Prakash et al., 2016; Li et al., 2017; See et al., 2017; Vaswani et al., 2017; Gupta et al., 2018); sometimes such models are also used to evaluate paraphrasing quality (Thompson and Post, 2020). Round-trip translation between two languages (i.e., back-translation) with strong neural machine translation (NMT) models has also become a widely used approach for paraphrase generation (Yu et al., 2018).

Transfer Learning  There have been a few works leveraging pre-trained language models for paraphrasing, either in a supervised (Witteveen and Andrews, 2019) or an unsupervised (Hegde and Patil, 2020) setting. Both works employ GPT-2 as their backbone generation model. Similarly, we opt for more recent large-scale pre-trained models like BART and T5.
Conclusion
We design an effective training pipeline that enables large-scale pre-trained models to generate high-quality paraphrases in an unsupervised setting through task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking. We demonstrate with automatic and human evaluations that our model achieves state-of-the-art results on benchmark datasets. We also show that our model generates paraphrases that exhibit syntactic diversity, and that it generalizes to other languages without any additional training. Overall our work motivates a deeper investigation into self-supervised techniques for paraphrase generation, as well as extensions such as context-aware paraphrasing, where the output is conditioned not only on the sentences to be paraphrased but also on the context around them. We leave this as future work.
References
A Automatic Metric Results
We present automatic evaluation results on the previous metrics for QQP in Table 8 and for ParaNMT in Table 9. We can see that for QQP our task-adaptation model without Dynamic Blocking during inference achieves state-of-the-art results among unsupervised approaches. Had we based our judgments on Table 8, we would have mistakenly selected this one as our final model.
B Robustness to Grammar Errors
During the task-adaptation phase, our model in most cases has a grammatically correct sentence as the target sequence. Additionally, shuffling during that phase encourages the model to attend to the context during generation. These setups make our model reasonably robust to grammar errors, so that it can paraphrase and normalize the input at the same time. Table 10 shows a case where we intentionally introduce grammar errors in subject-verb agreement, singular vs. plural, and verb inflections.
We find that our model is in most cases robust to such errors. This trait is desirable because we may face noisy inputs from users. Through early ablation studies, we observed that without shuffling during task-adaptation, the model was much less robust to grammar errors. Hence shuffling does more than just improve the BERT-iBLEU metric (Table 6).
C Failure Modes
Though they occur only occasionally, our model exhibits multiple failure patterns. Hence we perform "anti-cherry-picking" and present in Table 11 some such examples along with the failure modes we identify. We hypothesize that the Antonym mode can be partially addressed by a dictionary lookup to additionally block antonyms. Grammar errors are harder to resolve because they are usually apparent only after the whole sentence has been generated; a grammar checker applied to the candidates may improve the situation. The swapping of subject and object shows that unsupervised approaches based on pretrained language models can only carry us so far, i.e., to the syntactic level. In its current form, our approach cannot handle such semantic mistakes. For missing named entities, an NER tagger can help filter out candidates that miss important entities. We leave addressing these failure modes as future work.
D Details of Dynamic Blocking
Block surface-form variations  In our early experiments, we observed that when blocking a word (e.g. "give"), the model usually tries to generate its capitalized ("Give") or upper-case ("GIVE") version. From a human's perspective, these are usually not good paraphrases; intuitively we would prefer a different word. Similar to the whole-word masking introduced in later versions of BERT, 19 we only block the beginning of the word rather than any subword.
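A small helper capturing this rule is sketched below: when a word is blocked, its re-cased variants are blocked as well, so the model cannot sidestep the block by changing capitalization. Subword handling (blocking only the first subword) depends on the tokenizer and is omitted here.

```python
def surface_form_variants(word: str) -> set:
    """Casing variants to block together with the word itself,
    e.g. 'give' -> {'give', 'Give', 'GIVE'}."""
    return {word, word.lower(), word.capitalize(), word.upper()}
```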
Block Closed-Class Words  We also leverage linguistic knowledge to help boost the quality of the paraphrases by avoiding blocking closed-class words, or functional words. 20 The closed classes in English include pronouns, determiners, conjunctions, and prepositions, while open-class words correspond to nouns, lexical verbs, adjectives, and adverbs. There are two justifications for not blocking these words. First, because they are closed-class, there are fewer synonyms available; second, blocking such words is error-prone. For example, changing determiners (e.g. from "you" to "I") may lead to syntactic or semantic errors, while modifying conjunctions (e.g. from "and" to "or") may lead to a change in logical relationships.
Block Inflections  In Section 5.2, we mentioned that BART can directly work with Dynamic Blocking without task-adaptation or self-supervision, but this results in lower quality, especially lacking syntactic variance, because the model is not trained with the shuffling strategy during task-adaptation. In addition, we found that without finetuning, BART tries to generate inflections of a word when it is blocked. To partially remedy this drawback, we use the pattern library 21 to enumerate all inflections of a word to block (e.g. for "give" we should also block "gives", "gave", "giving" and "given") in addition to all the other blocking schemes introduced in Section 3. This is available for most languages that involve inflections.
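Assuming the pattern library's lexeme() helper (which enumerates verb inflections), the inflection-blocking rule can be sketched as follows.

```python
# Requires the `pattern` package: https://github.com/clips/pattern
from pattern.en import lexeme

def inflection_variants(word: str) -> set:
    """Inflected forms to block along with the word itself,
    e.g. 'give' -> {'give', 'gives', 'giving', 'gave', 'given'}."""
    return set(lexeme(word)) | {word}
```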
Retain Named Entities  We also explore a variation of the system where we employ a separate Named Entity Recognition model to identify the named entities in the source sequence and prevent any tokens in these entities from appearing in the full block dictionary. This change ensures that all named entities are copied verbatim.

20 https://mailman.uib.no/public/corpora/attachments/20111124/6c58cb02/attachment.txt

21 https://github.com/clips/pattern
E Paraphrasing in German
We pair BART directly with Dynamic Blocking to generate paraphrases in German. In Table 13, we can see that all candidates (left column) have different surface forms, while all translations into English (right column) 22 share similar meanings.
F MTurk Instructions
To facilitate reproducibility, we include our MTurk instructions for the head-to-head and the Likert-based human studies (Figures 3 and 4). As mentioned in Section 3.5, we only provide guidelines on which paraphrases are better in general and leave the rest to the annotators' intuition.
Input: The random selection of pages must be performed by someone other than the player.

Output (blocking inflections):
The random choice of the pages must be performed by someone else than the player.
The random selection of the pages must be performed by someone else than the user.
The random selection of the pages must be executed by someone other than the user.
The random collection of these pages must be performed by someone else than the player.
The random selection of these pages must be executed by someone other than the user.

Output (no blocking inflections):
The randomly selection of page must be perform by someone else than the players.
The random choice of page must be performed by someone else than the player.
The randomly selection of page must be perform by someone rather than the players.
The random choice of page must be performed by someone rather than the player.
The random collection of pages must be performed by someone else than the players.

Table 12: Selected example of output candidates produced by BART + Dynamic Blocking with and without blocking inflections. Grammar errors made by the latter due to wrong inflections are underlined.
Figure 3: Interface of our MTurk studies for head-to-head comparisons with other models.

Figure 4: Interface of our MTurk studies for head-to-head comparisons with other models.
Figure 2 depicts the example in which the source "<s> The chart below illustrates how world population has changed throughout history." is paraphrased as "The following chart depicts how world's population has evolved over time.": block-dictionary entries such as "the → chart" and "has → changed" cause "chart" and "changed" to be blocked during autoregressive decoding, so the model emits "following" and "evolved" instead, while candidates out of the dictionary (e.g., "did", "does") are not blocked.
Dataset | Ours v.s. | Win (%) | Tie (%) | Loss (%) | W-L (%)
QQP | Transformer | 40.75 | 28.25 | 31.00 | 12.50
QQP | CorruptLM | 46.00 | 26.25 | 27.75 | 18.00
QQP | Ground-truth | 43.00 | 16.75 | 40.25 | 2.75
ParaNMT | SOW-REAP | 40.50 | 28.50 | 31.00 | 9.50
ParaNMT | Ground-truth | 49.50 | 14.50 | 36.00 | 13.50

Table 1: Head-to-head human evaluation results. Each experiment is performed over 200 samples with 2 annotators each. "Ours" stands for the model trained with self-supervision and decoded with Dynamic Blocking. Note that both Transformer and SOW-REAP are supervised models, and we are also comparing our unsupervised model outputs with the ground-truth. "W-L" stands for the difference between Win and Loss.

Table 2: Likert-scale human evaluation results. Both averages and standard deviations are reported.
Metric | BERT-iBLEU | iBLEU | BLEU | ROUGE-1/2/L
Agree % | 68.9 | 39.4 | 45.3 | 21.8 / 5.4 / 21.4

Table 3: The percentage of times where the ranking given by each metric agrees with that given by human evaluation in the head-to-head studies. Only cases where two annotators agree are counted.
Table 4: Automatic evaluation results on QQP. TA = Task-Adaptation, SS = Self-Supervision, and DB = Dynamic Blocking. "NMT" stands for the model finetuned on non-parallel ParaNMT and evaluated cross-domain on QQP. Both our final model (TA+SS+DB) and the best result for each metric are boldfaced. Please refer to Section A in the Appendix for a comparison with 12 supervised models and 5 unsupervised models from previous work.
Model | BERT-iBLEU | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-L
Supervised
SOW-REAP | 54.2 | 30.9 | 62.3 | 40.2 | 61.7
Unsupervised
CorruptLM (QQP) | 39.7 | 7.6 | 31.9 | 11.6 | 31.6
TA | 72.0 | 20.2 | 59.0 | 32.3 | 53.8
TA+SS | 74.0 | 22.9 | 58.9 | 33.3 | 54.1
TA+SS+DB (QQP) | 76.8 | 22.0 | 60.1 | 33.8 | 54.9
TA+SS+DB | 78.0 | 22.6 | 59.8 | 33.2 | 54.5
No Model
Copy-input | 0.0 | 18.4 | 54.4 | 27.2 | 49.2

Table 5: Automatic evaluation results on ParaNMT. "QQP" stands for models finetuned on non-parallel QQP and evaluated cross-domain on ParaNMT. Note that BLEU and ROUGE scores are based on top-10 candidates where only the ones with the highest sentence-level scores are retained for the final score computation.
Input | Generated paraphrase
We got to spend the rest of the weekend at the track. yeah. | We got to stay at the track for the rest of the weekend. yeah.
Are predictions of the future based on the present too much? | Are future predictions too much based on the present?
What is the best way to reduce belly and arm fat? | What is the easiest way to reduce arm and belly fat?
You can seduce enemy soldiers, though. | You can, though, seduce enemy troops.
Well, why would your buddy be in the shower with you?! | Okay, why would you be in the shower with your friend?!

Table 7: Selected paraphrases generated by our final model that show syntactic variance to different extents. Only the top candidate is shown for each input.
Table 8: Automatic evaluation results on the QQP dataset. Models we (re)produced and SOTA results in each category are boldfaced. "Supervised (Wiki)" stands for models trained on WikiAnswers and evaluated on QQP.
Oracle Quality (10 sentences)
Model | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-L
Supervised
copy-input | 18.4 | 54.4 | 27.2 | 49.2
SCPN | 21.3 | 53.2 | 30.3 | 51.0
Transformer seq2seq | 32.8 | 63.1 | 41.4 | 63.3
+ diverse-decoding | 24.8 | 56.8 | 33.2 | 56.4
SOW-REAP (LSTM) | 27.0 | 57.9 | 34.8 | 57.5
SOW-REAP | 30.9 | 62.3 | 40.2 | 61.7
Unsupervised
CorruptLM (QQP) | 7.6 | 31.9 | 11.6 | 31.6
TA+SS+DB (QQP) | 22.0 | 60.1 | 33.8 | 54.9
TA+SS+DB | 22.6 | 59.8 | 33.2 | 54.5

Table 9: Automatic metrics results on the ParaNMT dataset. "(QQP)" stands for models finetuned on the non-parallel QQP dataset and evaluated on the ParaNMT dataset.
Input | Our approach are data-driven and can be apply across various situation.
Output | Our approach is data-driven and can be applied across various situations.
 | Our approach is data-driven and can be applied across different situations.
 | Our approach is data-driven and can be applied across diverse situations.
 | Our approaches are data-driven and can be applied across various situations.
 | Our data-driven approach can be applied across different situations.
 | Our approaches are data-driven and can be applied across different situations.
 | Our data-driven approach can be applied across diverse situations.
 | Our approaches are data-driven and can be applied across diverse situations.

Table 10: Selected example of output candidates produced by our model where we intentionally introduce grammar errors (marked with underlines). We observe that all paraphrase candidates have these errors corrected.
Failure mode | Input | Output
Antonym | How do I gain weight in a healthy way? | How do I lose weight in healthy ways?
Repeated words | What is the funniest movie to watch? | What is the most funniest film to see?
Grammar errors | Do spirits or ghosts exist? | Do ghost or spirit exist?
Subject ↔ object | How will you know you love someone? | How will you tell if someone loves you?
Missing named entity | A look of dismay came into luzhin's face. | A look of disappointment came into the face.

Table 11: Typical examples where our model fails to generate correct paraphrases. Words related to each failure mode are underlined.
German | Translation from German
Input: Warum finden keine Brandschutzbelehrungen statt? | Why are there no fire instructions?
Candidates: Warum lieen keine Geschutzbelehrungen statt? | Why were there no protection instructions?
Warum finden keine Geschutzbelehrungen statt? | Why are there no protection instructions?
Warum lieen keine Brandschutzbelehrungen statt? | Why weren't there any fire safety instructions?
Warum finden keine Geschutzbelehrungen statt? | Why are there no protection instructions?
Warum finden wir keine Brandschutzbelehrungen statt? | Why are we not giving fire safety instructions?

Table 13: Paraphrasing German input by directly applying Dynamic Blocking to BART. Translations on the right are given by the Google Translator, except that the first one is the ground-truth translation. Note that the candidates are ranked by multi-lingual BERT rather than RoBERTa-base, which is only used to rank English outputs.
1 https://www.kaggle.com/c/quora-question-pairs

3 For example, consider an input sentence "I want to lose weight in a healthy way." where we sample the words "to" and "way" to delete and shuffle the rest. This may give us "weight in want a lose I healthy ." as the corrupted sentence.

13 We intentionally did not ask them to separately evaluate semantic similarity and surface-form diversity because the latter is easy to check with self-BLEU.

14 For instance, the question pair "I'm 27, is it too late for me to go to medical school?" and "How old is too old to start medical school?" has a positive label even though the two questions do not share the same meaning.

16 Please refer to Appendix A for results of our model compared with all previous ones on the traditional metrics.

17 Mao and Lee (2019) also observe that parroting often achieves competitive results.

18 We show in Appendix B that shuffling also makes the model robust to grammar errors, enabling it to paraphrase and perform text normalization at the same time.

19 https://github.com/google-research/bert

22 By Google Translator: https://translate.google.com/
Annual Meeting of the Association for Computational Linguistics, pages 2827-2835, Online. Association for Computational Linguistics.

AB Siddique, Samet Oymak, and Vagelis Hristidis. 2020. Unsupervised paraphrasing via deep reinforcement learning. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1800-1809.

Hong Sun and Ming Zhou. 2012. Joint learning of a dual SMT system for paraphrase generation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 38-42, Jeju Island, Korea. Association for Computational Linguistics.

Brian Thompson and Matt Post. 2020. Automatic machine translation evaluation in many languages via zero-shot paraphrasing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 90-121, Online. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.

Ashwin Vijayakumar, Michael Cogswell, Ramprasaath Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2018. Diverse beam search for improved description of complex scenes. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.

John Wieting and Kevin Gimpel. 2018. ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451-462, Melbourne, Australia. Association for Computational Linguistics.

Sam Witteveen and Martin Andrews. 2019. Paraphrasing with large language models. arXiv preprint arXiv:1911.09661.

Wei Xu, Alan Ritter, and Ralph Grishman. 2013. Gathering and generating paraphrases from twitter with application to normalization. In Proceedings of the Sixth Workshop on Building and Using Comparable Corpora, pages 121-128.

Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, pages 5753-5763.

Pengcheng Yin, Nan Duan, Ben Kao, Junwei Bao, and Ming Zhou. 2015. Answering questions with complex semantic constraints on open knowledge bases. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 1301-1310.

Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018. QANet: Combining local convolution with global self-attention for reading comprehension. In 6th International Conference on Learning Representations (ICLR).

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.

Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In The IEEE International Conference on Computer Vision (ICCV).
| [
"https://github.com/tagoyal/",
"https://github.com/clips/pattern",
"https://github.com/google-research/"
] |
[
"Communication of Social Agents and the Digital City - A Semiotic Perspective",
"Communication of Social Agents and the Digital City - A Semiotic Perspective"
] | [
"Victor V Kryssanov kryssanov@mm.media.kyoto-u.ac.jp \nJapan Science and Technology Corporation\nJapan\n",
"Masayuki Okabe okabe@mm.media.kyoto-u.ac.jp \nJapan Science and Technology Corporation\nJapan\n",
"Koh Kakusho kakusho@mm.media.kyoto-u.ac.jp \nCenter for Information and Multimedia Studies\nKyoto University\n606-8501KyotoJapan\n",
"Michihiko Minoh minoh@mm.media.kyoto-u.ac.jp \nCenter for Information and Multimedia Studies\nKyoto University\n606-8501KyotoJapan\n"
] | [
"Japan Science and Technology Corporation\nJapan",
"Japan Science and Technology Corporation\nJapan",
"Center for Information and Multimedia Studies\nKyoto University\n606-8501KyotoJapan",
"Center for Information and Multimedia Studies\nKyoto University\n606-8501KyotoJapan"
] | [] | This paper investigates the concept of digital city. First, a functional analysis of a digital city is made in the light of the modern study of urbanism; similarities between the virtual and urban constructions are pointed out. Next, a semiotic perspective on the subject matter is elaborated, and a terminological basis is introduced to treat a digital city as a self-organizing meaning-producing system intended to support social or spatial navigation. An explicit definition of a digital city is formulated. Finally, the proposed approach is discussed, conclusions are given, and future work is outlined. | 10.1007/3-540-45636-8_5 | [
"https://arxiv.org/pdf/cs/0605121v1.pdf"
] | 13,502,805 | cs/0605121 | 09fbdd6eaba8e503eabe94e5a3944b1d3b58062c |
Communication of Social Agents and the Digital City - A Semiotic Perspective
Victor V Kryssanov kryssanov@mm.media.kyoto-u.ac.jp
Japan Science and Technology Corporation
Japan
Masayuki Okabe okabe@mm.media.kyoto-u.ac.jp
Japan Science and Technology Corporation
Japan
Koh Kakusho kakusho@mm.media.kyoto-u.ac.jp
Center for Information and Multimedia Studies
Kyoto University
606-8501KyotoJapan
Michihiko Minoh minoh@mm.media.kyoto-u.ac.jp
Center for Information and Multimedia Studies
Kyoto University
606-8501KyotoJapan
Communication of Social Agents and the Digital City - A Semiotic Perspective
This paper investigates the concept of digital city. First, a functional analysis of a digital city is made in the light of the modern study of urbanism; similarities between the virtual and urban constructions are pointed out. Next, a semiotic perspective on the subject matter is elaborated, and a terminological basis is introduced to treat a digital city as a self-organizing meaning-producing system intended to support social or spatial navigation. An explicit definition of a digital city is formulated. Finally, the proposed approach is discussed, conclusions are given, and future work is outlined.
Introduction
A digital city may be very generally defined as a collection of digital products and information resources made of a large distributed database of heterogeneous documents of various digital genres -(hyper)texts, photographs, maps, animated images, and the like -deployed to provide services aimed at facilitating social and/or spatial navigation in a virtual (e.g. "information" or "communication") or physical (e.g. geographical) space. Paramount to any digital city mission is the ability to deliver information of interest in a timely manner to its users. To do this, digital cities exploit a computer network and a client-server protocol, allowing the user to browse across digital documents through appropriately ordered hyperlinks and retrieve information as needed. An effective digital city supports access to all its repositories of relevant knowledge and data in both the raw and quality-filtered forms, and there can be several search engines installed to carry out the retrieval process. Naturally, networking and information retrieval are considered key issues in the development of digital cities.
As part of the information delivery, a digital city seeks to enable uncomplicated and correct interpretation of the results of a user's query. Examples of this include but are not limited to: providing the user with the related context as an aid for understanding the results (or even the query itself, in the case of exploratory search), illustrating the results with a suitable metaphor or analogy, and utilizing feedback from the user (or some data about the user) to adjust the strategy for retrieving or displaying the information to make it more accessible and meaningful to the user (the so-called "adaptive navigation support"). Another important issue that is thus immediately falling under the purview of digital city developers is human-computer interface.
Reflecting the present understanding of the concept of digital city, which is far from unified and is subject to discussion (and even confusion), the literature abounds in technical descriptions of implemented and projected digital cities [18] 1 . The authors typically approach the development task with a narrow focus, defining a digital city through its functions or even contents with vague terms such as "useful information," "communication," "social agent," "community network," and the like, and with ad hoc "common sense" design decisions, which may have unpredictable (especially on a long-term scale) consequences, and which are often of arbitrary relevance to the users' needs. Evidence of the latter can be found already in the very attempt to characterize digital products intended for different purposes (e.g. social vs. geographical navigation) and different types of users by utilizing the loose metaphor of the city without clarifying which, if any, aspects or features of the city's original concept - the material grounding, functionality, dynamics, structure, or other - are to be adopted. Besides, it is admitted that the interdisciplinary theoretical study of the digital city remains in its infancy and is currently of little help to practical developers.
The presented work aims to establish a basis for scientific investigation of digital cities and explore fundamental properties of a digital city as an information structure. In this paper, a definition of a digital city as an organization of interacting social agents is introduced, based on a semiotic interpretation of a system-theoretic model of communication. The definition is to explicate the concept of digital city and to elaborate perspectives on the research and development for the future.
The rest of the paper is in four parts. The next section analyzes the concept of digital city. Section 3 develops a semiotic view on the subject matter. This is followed by a section that strives to define a digital city in a manner sufficiently precise for both the academic and practical needs. The final section then discusses the study theoretical findings, formulates some conclusions, and gives information on forthcoming research.
Concept of Digital City
Metaphor of City
It is evident that "digital city" is a metaphor. Metaphors (from the Greek metaphora, "transfer") serve to create new meanings by transferring the semantics of one concept into the semantics of another concept. Metaphors are habitually used to interpret an unknown "world" (perception, experience, etc.) - the target - in terms of a familiar world - the source. Metaphorical explanation often helps us understand highly abstract and complex phenomena by relating them to phenomena we know well (or, at least, better). In so doing, a metaphor preserves (part of) the structure of the original concept, but substitutes its functional contents, anticipating the corresponding change in its properties and meaning.
A metaphor can be expressed with words, gestures, in a graphical manner, or through behavior - essentially, metaphors are (combinations of) signs. To successfully apply a metaphor, one should understand not only the systemic organization of the source, but also the rôle of the source's larger context, which has to be realized and presented in the context intended for the target. (Proper) metaphors are not merely somehow convenient selections of signs. Rather, they are selections of consistent logical systems or theories, with which one can generate new meaningful signs from signs that already exist [37]. It seems rational to assume that the logic of a digital city could be rooted in the logic of real cities, if this metaphor is to be properly used. Another (perhaps stronger) assumption can be made that digital cities as "virtual structures" have much of the properties inherent in social communities.
Juval Portugali, in his survey of the study of urbanism, described a present-day city as a conglomerate of people together with their artifacts -buildings, roads, communications, etc. -that is "actually not a city but a text written by millions of unknown writers, unaware that they are writers, read by millions of readers, each reading his or her own personal and subjective story in this ever-changing chaotic text, thus changing and recreating and further complicating it" [30,29]. A city is complex. It consists of numerous components, which interact, and which are created by (and from) other components, thereby continuously re-producing the fabric of the city. A city is self-organizing. It, like all self-organizing systems, exchanges the resourcesmatter, energy, and people -and information with its environment and is, in this sense, open. At the same time, a city is closed (to a degree) in the sense that its structure is determined internally (again, to a degree), and the environment does not control how the city organizes itself. In complex system theory, complete closure means that every component of a system is produced solely by other components of the same system without influence from the outside -a requirement hardly reachable in social or even biological organizations. However, once a city starts to distinguish itself from the environment by creating a boundary, it can achieve a sufficient degree of organizational closure to be seen (though, controversially [25]) as an autopoietic system that is a form of self-organization [24].
Due to its composite makeup and the complexity of internal interactions, a city is generally indescribable in terms of cause and effect or in terms of probabilities [7]. This makes it extremely difficult to study the city basic properties. Recently, however, some conceptual and mathematical approaches (such as dissipative structures, synergetics, and cellular automata), borrowed from the natural sciences, have successfully been applied to the study of cities with the focus on not to control or predict behaviors of city components, but to deliberately participate in and sensefully "shape" the city development by acting at the global, social and organizational level [30].
Turning now to the case of digital cities, one can quickly prove the openness and the complexity of the virtual organizations. Intuitively, digital cities should be open, since they constantly exchange information with their environments, and they are indeed complex, each comprising a number of dynamic information resources. Not so straightforward is the question of organizational closure: it is unclear what a digital city produces and how it reproduces. A still more difficult question is, how can a digital city separate itself from the environment and what is the boundary? By answering these, we would clarify whether and to what extent the logic of cities and social systems in general is applicable to an information web-structure called "digital city."
Navigation with Digital City
Even at the level of parts, self-organization does not really mean freedom but a controlled collective behavior towards achieving a common (for the entire system) goal in an environment [16]. There is a controlling mechanism "hidden" and distributed over all the system parts that determines a strategy for the system, which is usually implemented as more or less inclusive constraints imposed on each part's behavior. While the principal goal of any ("living") self-organizing system must imply its longterm survival in a variable environment, i.e. the maintenance of the system invariant structure -its identity, the tactical (or transient) goals usually determine the system parts' behavior in every local situation and at every particular time. To "survive" for a digital city would mean to uphold the stability of its structure with the designated functionality supporting (social, geographical, spatial, etc.) navigation despite environmental disturbances.
Perhaps most generally, navigation in an environment 2 was defined in [34] as a four-stage iterative process that includes: 1) perception of the environment, 2) reconciliation of the perception and cognition (i.e. understanding), 3) deciding whether the current goal has been reached (i.e. decision-making), and 4) choosing and performing the next action (i.e. adjustment of the behavior). Among these stages, the last two have a noticeably subjective character and are solely on the navigator's side, whereas the other two depend on "objectively" available -sensed -information about the environment. It is perception that first represents "raw" sensory data and provides for further interpretation by putting the resultant representations into a context of the scene perceived (e.g. by simply combining the representations together). When information obtained through the senses is not enough for establishing or reestablishing meanings of the environment necessary for successful decision-making, the navigator may ask a guide for help -someone who could presumably know more about the environment. A digital city may be thought of as such a guide: in the navigation process, it works to enhance and complement the navigator's sensing capabilities. In other words, a digital city is to "produce" information about the navigation space.
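As a purely illustrative aid (not part of the cited framework), the four-stage loop can be sketched in Python as follows; the callables perceive, understand, goal_reached, act and inform are hypothetical placeholders standing for the navigator's and the guide's capabilities.

```python
# Illustrative sketch of the four-stage navigation loop; all method names
# (perceive, understand, goal_reached, act, inform) are hypothetical
# placeholders, not an API from the cited work.
def navigate(environment, navigator, guide=None, max_steps=100):
    for _ in range(max_steps):
        percept = navigator.perceive(environment)        # 1) perception of the environment
        situation = navigator.understand(percept)        # 2) reconcile perception and cognition
        if situation is None and guide is not None:
            # A digital city acting as a guide complements the navigator's senses
            # when sensed information alone does not yield a usable meaning.
            situation = navigator.understand(guide.inform(percept))
        if navigator.goal_reached(situation):            # 3) decision-making
            return situation
        navigator.act(situation, environment)            # 4) adjustment of the behavior
    return None
```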
Perceptual Control Theory [31] proposes an explanation of the control mechanism for complex self-organizing systems. The theory tells us that a perceiving system normally seeks to bring the perceived situation to its goal or preferred state by utilizing (negative) feedback from the environment: if the situation deviates from the goal, the system acts and adapts, possibly changing its own state and the state of the environment, and the new situation is again sensed and estimated in respect to the goal. The loop repeats and keeps the system in a stable goal-directed state, environmental perturbations and compensating actions notwithstanding. Although a digital city can, in principle, sense its environment directly (e.g. through cameras and transducers, as in the "Helsinki Arena 2000" project [22]), there is no other way for it to determine the context and, hence, semantics necessary for making the sensed information meaningful for the navigation process, but (ultimately) by drawing on expertise of its users and utilizing feedback from them. The users together with their knowledge can and in fact should be considered as indispensable and constitutive parts of the digital city.
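The feedback loop described above can be illustrated with a minimal control-loop sketch; the gain, tolerance and step limit below are arbitrary illustrative choices, and sense and act are placeholder callables rather than constructs taken from Perceptual Control Theory itself.

```python
# Minimal sketch of a perceptual control loop: the system acts so as to bring
# its perception of a variable toward a reference (goal) value via negative feedback.
def control_loop(sense, act, reference, gain=0.1, steps=200, tolerance=1e-3):
    for _ in range(steps):
        perception = sense()               # perceive the current situation
        error = reference - perception     # deviation from the preferred state
        if abs(error) < tolerance:
            break                          # (approximately) at the goal state
        act(gain * error)                  # compensating action reduces the error
```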
Each user's knowledge is supposed to be a subjective reconstruction of the locally and selectively perceived environment (for justification, see [20,38]). No user possesses perfect knowledge, but being connected by means of the digital city, the users can gain access to "collective knowledge" -once sensed or created information about the environment that, owing to the spatio-temporal dynamics uniquely allocating each user (and notwithstanding the natural cognitive limitations), is far more complete and encompassing than the knowledge of a solitary user.
Perception is, obviously, effective only when it provides the navigation process with comprehensible and meaningful -useful -information. This requirement defines the strategy for a digital city. To ascertain the usefulness of a particular perception, the digital city puts it into the context of a situation associated with a user's query and then attempts to evaluate the user's reaction and/or feedback. There can be different and even conflicting interpretations of the same situation made by different users that would, in the long run, destroy the digital city by denouncing its very rationale. In order for a digital city to "survive," its functionality is kept up by enabling context-sensitive (i.e. dependent on the user's prior experience and personal understanding of the situation) interpretation of its contents. The latter sets conditions for the tactics. Thus, the global organizational stability of a digital city (that actually determines its functional stability that is supporting navigation) is naturally maintained at the expense of the stability of its parts (i.e. at the expense of the uniformity of the representation and understanding of the environment -see, for instance, the adaptive interface concept for the Kyoto Digital City described in [17]), just as it happens in physical cities [30].
It is now understood that a digital city can become self-organizing, if it separates itself from the environment by developing an eventually autonomous structure allowing for generating meanings -forming a "sense" -of the environment for the needs of navigation. In addition to plain separations in matter (there is no such thing as information, but a digital city is a representation of things; on the other hand, "the environment contains no information; the environment is as it is" [10]) and time (the time-scales of a digital city and its environment usually differ), a self-organizing digital city should develop a meaning boundary: it should maintain and reproduce its own functionally invariant meaning-making structure not just by storing some observations, but by recursively producing pertinent observations using other observations, while acting independently (perhaps, to a degree) of environmental disturbances.
Approach
Semiotics
An issue of great significance (by its consequences) that is sometimes overlooked by theorists and, fairly often, by practical developers dealing with human-computer interfaces is the fact that information transmitted by means of computers tends to lose its meaning. Digital signals, such as arrays of bits forming raster graphics, do not bear semantics and have to be interpreted subjectively. There is no context-independent or "absolute" meaning, but the meaning of a signal emerges through the process of interaction between a local perception of the signal and a global (in some way) vision of the corresponding situation [1,8].
Contrary to the objectivism dominating AI research, human navigation in an environment builds on information conceived (not just perceived!) by the navigator [34]. For instance, observing a map is useless for the purpose of navigation unless the map can be related to the navigator's current location and goal, which, as a rule, requires additional "information processing," such as (re)interpretation of sensed information about the surroundings and the map itself. People, through their activities and practice, subjectively and locally but always internally create meanings of the environment. These meanings are then "externalized" to be disseminated and proliferated, while their validity (in respect to the environment) is continuously and again subjectively examined in an attempt to identify currently effectual and supportive meanings. Semiotics studies these essentially meaning-making processes, construing elements of the environment as signs that need to be interpreted to obtain meanings for their contextual use.
In Peirce's formulation [28], semiotics studies the process of interaction of three subjects: the sign itself -the representamen or signifier, the object -that which is signified by the sign; and the interpretant -the meaning that follows semantically from the process of interpretation of the sign. It is postulated that no sign is directly connected to an object: signs have meanings only when they are embodied into a system of interpretance that is just a (larger) system of signs -a sign system, which constrains and relates its constituents, thereby creating a context. A representamen is necessarily a sign of an object for a certain sign system but not for any sign system: depending on the context, the same sign may have different meanings while signifying different objects, or different signs may have the same meaning while signifying the same object, and so on. Designated semiosis processes determine the meaning(s) of signs in all the specific situations.
The science of semiotics has a long history of development and application, presently offering a set of generic concepts and procedures to a variety of disciplines, such as art theory, film theory, linguistics, theoretical biology, complex system theory, anthropology, and philosophy of mind, among others (see [6] for a gentle introduction into the study). Often thought of as "the mathematics of humanities" [2], semiotics has developed analytical apparatus for qualitative characterization of various representation and re-representation processes involving signs. This has later been applied on a more formal basis in natural sciences, putting forward a common language for treating information-processing aspects of inter-disciplinary problems [9]. In computer science, semiotics has traditionally been focused on analyzing the reciprocal influence of the computation and interpretation processes and classifying representations by type of relation to their objects [2,11]. From a semiotic point of view, many (if not all) information processes in a digital city -from "purely" technical, such as data storage, to experiential and cognitive, such as understanding of data -are semiosis processes [36]. Semiotics appears particularly apt for explicating the structure of the mechanism "producing" meanings out of perceptions.
Structure for Producing Meaning
Semiotics teaches us that people perceive an environment through signs, which may be interpreted and which serve to mediate meanings of the environment. Although human perception is relatively uniform and consistent, the meaning assigned to a single sign can vary significantly, resting on the subjective dynamics of perception and cognition, as well as on a larger context (e.g. orientational, functional, or operational) of the situation encountered. A semiosis process is the process of determining the meaning of some distinctions in an environment that entails representation and re-representation of these distinctions over several levels of interpretation, each of which is governed by and adopts certain norms -developmental rules and relational constraints for the signs. The norms reflect different aspects of human behavior that can be classified into five major groups [35]: perceptual -to respond to peculiarities of sensing; cognitive -to deal with cultural knowledge and beliefs; evaluative -to express personal preferences, values, and goals; behavioral -to delineate behavioral patterns; and denotative -to specify the choice of signs for (further) signifying.
From a system-theoretic viewpoint, the complexity and richness of many natural organizational processes, such as adaptation and self-organization, derives from the ability to arrange smaller units into larger ones, which are in turn arranged into larger ones, which are arranged into still larger ones, and so forth [33]. Semiosis is a natural organizational process [21]: it organizes signs in a partial hierarchy by ordering them so that representamina of objects (that can be other signs) of level N-1 for processes and structures of level N+1 are placed on level N. The lowest-level signs, e.g. (manifestations of) physical objects, behavioral dispositions, emotions, and the like, are perceived or realized through their distinctions and get a representation at an "intermediary" level of norms, reflecting interpretive laws of a higher, experiential and environmentally (physiologically, socially, technically, economically, etc.) induced level, which accommodates interpretants and gives meanings to the representamina. This simple three-level structure corresponds to and is set up by a single semiosis process, whereas various semiosis processes defined on the same realm will create a complex partially ordered structure, where one sign gets multiple meanings, depending on both the signified contents of the lower levels and the contextual constraints from the interpretive levels (see Fig. 1).
Navigation with a digital city activates a number of semiosis processes (e.g. by different users) and results in the creation of a multi-level sign system with a potentially infinite hierarchy of interpretive levels, where signs on level N are dynamically composed of signs on level N-1 so that only those of all the possible combinations of the lower-level signs persist which are allowed by boundary conditions effective at level N+1. Signs on level N-1 serve as constitutive units for level N signs, which are constitutive for level N+1 signs, which can be constitutive for yet-higher-level signs; besides, signs on higher levels are constraining for signs on their adjacent lower levels. The levels have different dynamics, such that the probability of changing the relationships among signs within a level decreases for higher levels [21].
Fig. 1. A simplistic example of the composite partial hierarchy for producing meaning: in View I, entities "b, a, d" and "d, e" are represented at level N as connected for level N+1, and a new (in respect to N) meaning may emerge (e.g. be inferred) for N+1: "b and e are connected." There are some other systems of interpretance in the structure, among them one that recognizes, for instance, pairs of connected entities -as in View III. By referring to (i.e. communicating with) III, View I may apparently learn (e.g. syntactically -see [38]) to also recognize pairs of entities. It may then reconsider "bade" as a pair "bad;de" (that will become of interest to III). Thus, the meanings in the system may change as I and III communicate.
A user of a digital city typically deals with a fragment of the global, i.e. loosely shared through the environment (that may be seen as the lowest semiotic level) by all the users, system of signs (Fig. 1). The fragment is, however, distinctively ordered in an interpretive hierarchy peculiar to the user's experience and the norms he or she adopts. Hierarchies created by different users may be different in terms of the order as well as the coverage, and they may run on different time-scales (see [20] for general argumentation). Having been combined into one structure (e.g. by means of a community network [12]), the fragments may form a global but partial and often implicit "hierarchy." This global hierarchy constitutes the functionally invariant structure of a self-organizing digital city. It allows for producing "meanings" for the system's internal (adaptation) needs out of (represented) perceptions based on experience (received through, for instance, feedback -see Section 2.2) currently prevailing in the society of digital city users. The hierarchy has an essentially ordering dynamics, i.e. one affecting the interpretive levels rather than the signs within a level [27].
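As an informal illustration of such a partial sign hierarchy (our own sketch, not a construct from the cited works), the composition and constraint relations between adjacent levels can be represented with a simple data structure:

```python
# Illustrative data structure for the partial sign hierarchy of Fig. 1:
# a sign at level N is composed of signs at level N-1 and is constrained by
# interpretants at level N+1. Class and field names are hypothetical, chosen
# only to mirror the terminology of the text.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Sign:
    name: str
    level: int
    components: List["Sign"] = field(default_factory=list)    # constitutive signs from level N-1
    interpretants: List["Sign"] = field(default_factory=list)  # constraining signs from level N+1

def compose(name, parts):
    """Represent lower-level signs as a new sign one level up (a semiosis step)."""
    parent = Sign(name, parts[0].level + 1, components=list(parts))
    for p in parts:
        p.interpretants.append(parent)
    return parent

# Example loosely following Fig. 1: entities at level 0 composed into level-1 signs,
# which in turn support a level-2 meaning.
b, a, d, e = (Sign(x, level=0) for x in "bade")
bad = compose("bad", [b, a, d])
de = compose("de", [d, e])
connected = compose("b and e are connected", [bad, de])
```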
Unlike the case of individual navigation, where perceived and conceived signs need not be articulated explicitly, the development and operation of a digital city neatly builds on communicative use of a multi-level sign system representing the environment and the digital city itself. This sign system can be externalized -derived from the digital city structure -as a language defined in a very general way and not confined to handling verbal constructions. The digital city "describes" (and interacts with) its environment with this language, which has a syntax reflecting the organization of the environment, semantics defining meanings of the environment, and pragmatics characterizing the effect of the language use. (See, for instance, [12] presenting the Campiello System that works to construct and utilize such a language.)
Self-Organizing Digital City
Communication
Users of a digital city, although they act individually, are not isolated from the surroundings: their behavior is determined not only by their purposes, but considerably by the material processes taking place in the environment, the actions of other individuals (both users and non-users), the existing time- and functionality-constraints, the actions of groups of other individuals, the current state of the digital city, and the like. The users interact with the digital city (yet being parts of it), their environments, and simply with each other. The users are, nonetheless, autonomous, in the sense that they possess a representation of the environment adequate to sustain their purposeful behavior for some time, as autonomous is the digital city, which is recursively (through its users) closed with respect to meaning. In this situation, the operation of a digital city heavily depends on the social context of the semiosis associated with the navigation process -it depends on how the users and the digital city receive and interpret information in the course of navigation, i.e. how they communicate.
From a behavioristic point of view, an individual engaged in navigation develops an internal representation using those distinctions of the environment, which turn up solutions to the problem that are successful behaviors [5]. Signs of such a representation arrive as "tools for indication purposes" [32]. When met in an environment, these signs (i.e. the distinctions they stand for) serve to orient the navigator, whatever their "actual" meanings or rôles could be in the environment. The navigator is not really interested in "getting to the truth," but in knowing what happens or what the possible consequences -expectations -are when a sign is encountered. In this aspect, signs are signifiers that emerge from successful interaction between an individual and an environment, serving as orientational "pointers" not just to an object standing in a referential relation with the sign, but to the outcomes desired for (or, at least, anticipated by) the user. Signs can be considered "anticipations of successful interactions of referral" [32], emphasizing their origin and predictable influence on behavior.
One can show that the behavioristic view of the foundational process of forming sign meanings is just a specialization of the classical view that defines information as "a difference that makes a difference" to the interpreter [4]. The specialized view, however, makes it difficult to explain communication as mere exchange of signs. Indeed, in the case of navigation with a digital city, not objective reality but subjective experience is the grounding basis for signs (also see discussion in [19]). The navigator cannot frequently succeed with developing an interpretation of a sign received through communication by simply referring the sign to the observed part of the environment -the navigator's personal experience has first to be "synchronized" (up to a point) with the experience underlying the creation of the sign. The latter appears impossible or inefficient (e.g. because of time-limitations) in most cases of the use of a digital city.
A solution to the above problem comes with an advanced explanation of communication that includes aspects of information (sign) exchange as well as behavioral coordination between autopoietic systems. An autopoietic system is a dynamic system maintaining its organization on account of its own operation: each state of such a system depends on its current structure and a previous state only [24]. The structure of an autopoietic system determines the system possible (i.e. self non-destructive) behaviors that are triggered by its interactions with the environment. If the system changes its state, enforcing changes of the structure without breaking autopoiesis, the system is structurally coupled with the environment. If the environment is structurally dynamical, then both the system and the environment may mutually trigger their structural changes, sustaining the system's self-adaptation. When there are more than one autopoietic system in the environment, the adaptation processes of some of the present systems may become coupled, acting recursively through their own states. All the possible changes of states of such systems, which do not destroy their coupling, create a consensual domain for the systems. Behaviors in a consensual domain are mutually oriented. Communication, in this view, is the (observed) behavioral coordination resulting from the interactions that occur in a consensual domain (see [8] for details).
(Human) users of a digital city are (higher order) autopoietic systems [24]. Besides, the environment of a digital city is supposed to be structurally dynamical or even self-organizing (to a degree), as in the case of social systems [14,23,38]. Therefore, a digital city should be autopoietic (at least, to a degree), i.e. a system internally producing meanings for its own (adaptation) needs, in order to endure communication.
Definition of Digital City
Based on the system-theoretic and semiotic analysis made in the previous sections, we can now define a digital city as follows:
A digital city is an autopoietic organization of social agents communicating by way of computers, such that every social agent is a realization of a semiosis process engendered by navigation taking place in a common (for all agents) environment.
It is important to notice that the above definition builds on the understanding of communication as the (observed) coordination activity in a consensual domain, and it does not "humanize" the social agency: equally, people and computer (and any other) systems can be social agents as long as they interact and produce meanings in the navigation process; besides, the term "navigation" is understood in the broad sense, following [34].
It can be seen that the proposed definition is general enough to encompass all the realizations of digital cities reported in the literature, which have a common identity, constituting a distinct class of digital products. On the other hand, it is sufficiently precise in giving not only the functional (i.e. what goes on) but operational (i.e. how it goes on) characterization of a digital city. By the definition, an agent does not have to physically be embedded into the navigation space but does have to be engaged into the navigation process. The latter allows us to clearly distinguish a digital city among other web-based digital products, yet leaving plenty of freedom for dynamically including and excluding resources and agents appearing in it. One should not, nevertheless, be confused by the process-orientation of the definition: not every navigation (e.g. on the World Wide Web -see [14]) is the source of the emergence of a (self-organizing) digital city, but only that, which is supported by (and supports) the functionally invariant structure for navigation in the specified space.
Discussion and Conclusions
Within descriptions of digital cities, there is often little attention to the precise definition of basic concepts, with which a digital city is characterized. This imprecision results in weakly motivated developments, which easily lose their identities when compared with other digital products, such as map repositories or Web-portals (consider, for instance, the Turin and AOL digital cities discussed in [18]). Moreover, although it is assumed by default that a digital city is deployed for a group of users rather than for a single user, most of the reported projects habitually focus on and address specific aspects related to the personal adaptation (e.g. of the interface), while the issue of the appropriateness of a digital city to a particular society remains opaque. Even less is known about possible mutual influences of a digital city and the society, and about the life cycle of a digital city. All this could be a serious reason to question the very expediency of digital cities.
In this paper, an attempt was made to find a theoretical basis for the development of digital cities. Starting from an assertion that "digital city" is a metaphor called to denote a complex digital product with properties structurally similar to the ones of physical cities, the concept of digital city was gradually refined throughout the study, as we analyzed it first -functionally, then -semiotically, and finally -from a system theoretic perspective. The definitive function of a digital city is the (information) support of the process of navigation in an environment. Navigation utilizes meanings of the environment resulting from perception and interpretation. Interpretation is intrinsic to semiosis. Different meanings are developed by semiosis processes, which create and order signs of the environment into partial hierarchies. Semiotic sign-hierarchies emerged during navigation can internally generate new semiosis processes and, therefore, new meanings owing to communication. If this generation is maintained regardless of environmental variations, the organization of semiosis processes becomes autopoietic, and it constitutes a digital city.
The authors are quite aware of the difficulties and controversy attributed to any attempt to define a "not obviously living" system as autopoietic. By arguing that a digital city should be autopoietic, we follow the German sociologist and philosopher Niklas Luhmann [23], who was first to explicate the autopoiesis of social systems. The concept of digital city is, in our opinion, to organically expand the communication-driven autopoiesis of social systems to the new "digital" dimension. It should be stressed that neither semiotic meaning-making nor self-organization alone is an entirely new and unexplored issue in the fields of Human-Computer Interaction and Computer Supported Cooperative Work (see, for instance, [1] and, especially, [26]). Somewhat different to the previous works, we see the advantage of the application of semiotics and complex system theory not only in their suitability for theoretical exploration of digital products, but in their appropriateness for a rigorous computational treatment and technological validation of the theoretical findings, as it became apparent with the recent advent of algebraic semiotics and category theory [11], as well as evolutionary computation [8].
The view of digital city developed through the study is not only fully compatible with the contemporary vision of urban communities as self-organizing systems (a city as "a text written by millions of unknown writers…" [29,30]), but it specializes and details the mechanism of self-organization by advocating that digital cities are autopoietic. By extending the recently popular idea that not just biological, but also psychic and social systems can be autopoietic [23,24,14], autopoiesis can be considered as a general form of system development that draws on self-referential closure [9,15]. It was argued in the paper that in the case of digital cities, the concept of life, which is exploited in biology and (in a sense) sociology and urbanism, is to be replaced with the concept of semiosis as a kind of autopoietic organization. Along with systems theory that is used in the study of urbanism [30], semiotics forms the basis for investigation of (self-organizing) digital cities.
Semiosis of a digital city arranges a structure required for the reproduction of meanings by the digital city for its own "internal" needs. The meaning-(re)producing autopoiesis gives more room for a system to "survive" by letting it be autopoietic to a degree: a digital city can be less (and trivially) autopoietic if it mainly produces meanings out of perceptions, and it can be more autopoietic if it produces meanings out of meanings. For the former, consciousness of the users is the source of meaning reproduction that is typically fairly dependent on the environment, i.e. on what is not the digital city. For the latter, meanings are reproduced in the course of communication that powers the autopoiesis without paying much attention to the environment. "Meanings out of perceptions" assumes a hetero-referential closure of the digital city: the system produces meanings for other (possibly "external") meanings (e.g. it learns to send an image for clarifying a text). "Meanings out of meanings" implies a self-referential closure: the digital city produces meanings for communication (e.g. it learns to send an image for adjusting its own interface). Resembling social systems [23], self-reference for a digital city is the ability to distinguish between hetero-reference and self-reference.
The proposed definition of digital city departs from the criticized studies (also see [38] for a more general critique), which tend to focus on the micro-scale phenomena concerning the interaction of a user with a (part of a) digital city but ignore (or artificially "fix") the global social dynamics of the digital city. All the users, through their social agents that are realizations of semiosis processes, are constitutive parts of the digital city, and its dynamics is determined by behaviors of the users. It is obvious that the concept of digital city becomes incongruous if the users are not included into the definition in the case of hetero-referential closure: the system would then be anything -from a database to a game -depending on the purpose of the user, i.e. on what is the motivation or "driving force" causing the system of interpretance for the semiosis processes representing the user in the interaction with the digital city. Besides, the concept is just absurd if devoid of the users in the case of self-referential closure: how to call a digital product, which acts for its own purposes, leaving an external user unaware of them, and which is generally unpredictable in its behavior? Contrasting these two extreme points, the global and inclusive treatment advocated in this paper is not only comprehensive, but it allows for applying the rich apparatus of social studies to the study of digital cities to examine the macro-phenomena. At the same time, the proposed approach well recognizes the micro-scale dynamics: any user, whether an individual or a group, can uniquely be defined through characteristic semiosis processes. This gives us a happy opportunity to apply various theories of human-computer interaction as well as semiotics to the research and development. Combining the micro- and macro-level visions, the meaning-making self-organization implies the emergence of some ontology (following the terminology coined by the knowledge-sharing research community [13]) of the navigation space, which is understood as the functionally invariant structure of a digital city. This ontology, however, has a structural dynamics and changes throughout the life cycle of a digital city. New technological perspectives might be discovered when examining this evolution (e.g. how networking and information retrieval mechanisms should react when the digital city undergoes a change from hetero- to self-referential closure), allowing for "deliberately participating" in the digital city development (compare with the study of urbanism, [30]) that would lead us to the creation of a "participative virtual city," e.g. as discussed in [3].
The presented work offers one new contribution: the clarification of the concept of digital city. This contribution is based upon the extensive analysis supported by the literature. Another contribution of the paper would thus be providing the reader with an introduction (though, by no means complete) to the semiotic and system-theoretic aspects of the study of social systems.
We do not expect our approach to be perfect. The presented study seeks to explain a particular view of digital cities that some may find inappropriate. We believe, however, that this is better than discussing the subject in such an elusive way that no one can tell if it is inappropriate. This work is also intended to stimulate critical discussions of the concept of digital city.
Building on the conceptual and terminological basis developed through the study, our future research plans include: 1) elaboration of a semiotic theory of communication for a digital city, 2) its verification by both analysis of practical examples and computational experiments, 3) exploration of possible implications of the study of digital cities for the study of urbanism.
Throughout the paper, we will use this publication as a representative collection of studies of digital cities.
It should be noted that for the digital city, the environment as surroundings may or may not coincide with the environment as navigation space. We will not, however, distinguish these two environments for the purpose of this study: the latter is often part of the former, and in both cases, the environment is "that, which is not the digital city."
Acknowledgement

The presented work is part of the Universal Design of Digital City project in the Core Research for Evolutional Science and Technology (CREST) programme funded by the Japan Science and Technology Corporation (JST).
References

1. Andersen, P.B.: A theory of computer semiotics. Semiotic approaches to construction and assessment of computer systems. Cambridge University Press, Cambridge (1990)
2. Andersen, P.B.: What semiotics can and cannot do for HCI. Paper presented at the CHI'2000 Workshop on Semiotic Approaches to User Interface Design. To appear in Knowledge Based Systems (2000)
3. Aurigi, A.: Digital City or Urban Simulator? In: Ishida, T. and Isbister, K. (eds): Digital Cities: Experiences, Technologies and Future Perspectives. Lecture Notes in Computer Science, Vol. 1765. Springer-Verlag (2000) 33-44
4. Bateson, G.: Steps to an Ecology of Mind. Ballantine Books, New York (1972)
5. Bickhard, M.H.: The emergence of representation in autonomous agents. In: Prem, E. (ed.): Epistemological Issues of Embodied AI. Cybernetics and Systems. 28(6) (1997)
6. Chandler, D.: Semiotics for Beginners. WWW document. Posted at URL http://www.aber.ac.uk/media/Documents/S4B/ (1994)
7. Dendrinos, D.S. and Sonis, M.: Chaos and Socio-Spatial Dynamics. Springer-Verlag, New York (1990)
8. Di Paolo, E.A.: An investigation into the evolution of communication. Adaptive Behavior. 6(2) (1998) 285-324
9. Emmeche, C.: Closure, Function, Emergence, Semiosis and Life: The Same Idea? Reflections on the Concrete and the Abstract in Theoretical Biology. In: Chandler, J.L.R., van de Vijver, G. (eds): Closure: Emergent Organizations and Their Dynamics. Annals of the New York Academy of Sciences, Vol. 901. The New York Academy of Sciences, New York (2000) 187-197
10. von Foerster, H.: Observing Systems. Intersystems Publications, Seaside CA (1981)
11. Goguen, J.A.: An introduction to algebraic semiotics, with applications to user interface design. In: Nehaniv, C. (ed.): Computation for Metaphor, Analogy and Agents. Lecture Notes in Artificial Intelligence, Vol. 1562. Springer-Verlag (1999) 242-291
12. Grasso, A., Snowdon, D. and Koch, M.: Extending the Services and the Accessibility of Community Networks. In: Ishida, T. and Isbister, K. (eds): Digital Cities: Experiences, Technologies and Future Perspectives. Lecture Notes in Computer Science, Vol. 1765. Springer-Verlag (2000) 401-415
13. Guarino, N.: Formal Ontology, Conceptual Analysis and Knowledge Representation. Int. J. Human and Computer Studies. 43(5/6) (1995) 625-640
14. Heylighen, F.: The Global Superorganism: an evolutionary-cybernetic model of the emerging network society. Posted at URL http://pespmc1.vub.ac.be/Papers/Superorganism.pdf (2000)
15. Heylighen, F.: The Growth of Structural and Functional Complexity during Evolution. In: Heylighen, F., Bollen, J., Riegler, A. (eds): The Evolution of Complexity. Kluwer Academic Publishers, Dordrecht (1999) 17-44
16. Holland, J.H.: Hidden Order: How adaptation builds complexity. Addison-Wesley (1996)
17. Isbister, K.: A Warm Cyber-Welcome: Using an Agent-Led Group Tour to Introduce Visitors to Kyoto. In: Ishida, T. and Isbister, K. (eds): Digital Cities: Experiences, Technologies and Future Perspectives. Lecture Notes in Computer Science, Vol. 1765. Springer-Verlag (2000) 391-400
18. Ishida, T. and Isbister, K. (eds): Digital Cities: Experiences, Technologies and Future Perspectives. Lecture Notes in Computer Science, Vol. 1765. Springer-Verlag (2000)
19. Latour, B.: On interobjectivity. Mind, Culture, and Activity. 3(4) (1996) 228-245
20. Law, J.: Organizing Modernities. Blackwell, Cambridge (1993)
21. Lemke, J.L.: Opening Up Closure: Semiotics Across Scales. In: Chandler, J.L.R., van de Vijver, G. (eds): Closure: Emergent Organizations and Their Dynamics. Annals of the New York Academy of Sciences, Vol. 901. The New York Academy of Sciences, New York (2000) 100-111
22. Linturi, R., Koivunen, M.-R. and Sulkanen, J.: Helsinki Arena 2000 -Augmenting a Real City to a Virtual One. In: Ishida, T. and Isbister, K. (eds): Digital Cities: Experiences, Technologies and Future Perspectives. Lecture Notes in Computer Science, Vol. 1765. Springer-Verlag (2000) 83-96
23. Luhmann, N.: Social systems. Stanford University Press, Palo Alto (1995)
24. Maturana, H. and Varela, F.J.: Autopoiesis and Cognition: The Realization of the Living. D. Reidel Publishing Company, Dordrecht (1980)
25. Mingers, J.: Self-Producing Systems: Implications and Applications of Autopoiesis. Plenum Publishing, New York (1994)
26. Nadin, M.: Anticipation -A Spooky Computation. Lecture given at the 3rd Int. Conference on Computing Anticipatory Systems, Liege, Belgium. Posted at URL http://www.code.uni-wuppertal.de/de/computational_design/who/nadin/lectures/anticipa.html (1999)
27. Pattee, H.H.: The Physical Basis and Origin of Hierarchical Control. In: Pattee, H.H. (ed.): Hierarchy Theory. George Braziller, New York (1973) 71-108
28. Peirce, C.S.: The essential Peirce: Selected philosophical writings, Vol. 2. Indiana University Press, Bloomington (1998)
29. Portugali, J.: Notions concerning world urbanization. In: Amos, F.J.C., Bourne, L.S., Portugali, J. (eds): Contemporary perspectives on urbanization. Progress in Planning. 46(3) (1996) 145-162
30. Portugali, J.: Self-Organizing Cities. Futures. 29(4-5) (1997) 353-380
31. Powers, W.T.: Behavior: the Control of Perception. Aldine, Chicago (1973)
32. Prem, E.: Semiosis in embodied autonomous systems. In: Proceedings of the IEEE International Symposium on Intelligent Control. IEEE, Piscataway NJ (1998) 724-729
33. Salthe, S.: Development and Evolution. MIT Press, Cambridge (1993)
34. Spence, R.: A framework for navigation. Int. J. Human-Computer Studies. 51(5) (1999) 919-945
35. Stamper, R. and Liu, K.: Organizational Dynamics, Social Norms and Information Systems. In: Proceedings of the Twenty Seventh Hawaii Int. Conf. on System Sciences (1994) IV645-654
36. Stamper, R.: Signs, Information, Norms and Systems. In: Holmqvist, B., Andersen, P.B., Klein, H., Posner, R. (eds): Signs of Work. De Gruyter, Berlin (1996) 349-399
37. Turner, M. and Fauconnier, G.: Conceptual Integration and Formal Expression. Metaphor and Symbolic Activity. 10(3) (1995) 183-204
38. Viskovatoff, A.: Foundations of Niklas Luhmann's Theory of Social Systems. Philosophy of the Social Sciences. 29(4) (1999) 481-516
| [] |
[
"The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation",
"The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation"
] | [
"Mia Xu Chen miachen@google.com ",
"Orhan Firat orhanf@google.com ",
"Ankur Bapna ankurbpn@google.com ",
"Melvin Johnson ",
"Wolfgang Macherey ",
"George Foster ",
"Llion Jones ",
"Niki Parmar ",
"Mike Schuster ",
"Zhifeng Chen ",
"Google Ai ",
"Yonghui Wu yonghui@google.com ",
"Macduff Hughes "
] | [] | [] | The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets. | 10.18653/v1/p18-1008 | [
"https://arxiv.org/pdf/1804.09849v2.pdf"
] | 13,747,425 | 1804.09849 | dc9a0f66aed1d4235c6e53df184bc374fb875e7b |
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
Mia Xu Chen miachen@google.com
Orhan Firat orhanf@google.com
Ankur Bapna ankurbpn@google.com
Melvin Johnson
Wolfgang Macherey
George Foster
Llion Jones
Niki Parmar
Mike Schuster
Zhifeng Chen
Google Ai
Yonghui Wu yonghui@google.com
Macduff Hughes
The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation
The past year has witnessed rapid advances in sequence-to-sequence (seq2seq) modeling for Machine Translation (MT). The classic RNN-based approaches to MT were first out-performed by the convolutional seq2seq model, which was then outperformed by the more recent Transformer model. Each of these new approaches consists of a fundamental architecture accompanied by a set of modeling and training techniques that are in principle applicable to other seq2seq architectures. In this paper, we tease apart the new architectures and their accompanying techniques in two ways. First, we identify several key modeling and training techniques, and apply them to the RNN architecture, yielding a new RNMT+ model that outperforms all of the three fundamental architectures on the benchmark WMT'14 English→French and English→German tasks. Second, we analyze the properties of each fundamental seq2seq architecture and devise new hybrid architectures intended to combine their strengths. Our hybrid models obtain further improvements, outperforming the RNMT+ model on both benchmark datasets.
Introduction
In recent years, the emergence of seq2seq models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014) has revolutionized the field of MT by replacing traditional phrase-based approaches with neural machine translation (NMT) systems based on the encoder-decoder paradigm. In the first architectures that surpassed the quality of phrase-based MT, both the encoder and decoder were implemented as Recurrent Neural Networks (RNNs), interacting via a soft-attention mechanism (Bahdanau et al., 2015). The RNN-based NMT approach, or RNMT, was quickly established as the de-facto standard for NMT, and gained rapid adoption into large-scale systems in industry, e.g. Baidu (Zhou et al., 2016), Google (Wu et al., 2016), and Systran (Crego et al., 2016).
* Equal contribution.
Following RNMT, convolutional neural network based approaches (LeCun and Bengio, 1998) to NMT have recently drawn research attention due to their ability to fully parallelize training to take advantage of modern fast computing devices, such as GPUs and Tensor Processing Units (TPUs) (Jouppi et al., 2017). Well known examples are ByteNet (Kalchbrenner et al., 2016) and ConvS2S (Gehring et al., 2017). The ConvS2S model was shown to outperform the original RNMT architecture in terms of quality, while also providing greater training speed.
Most recently, the Transformer model (Vaswani et al., 2017), which is based solely on a self-attention mechanism (Parikh et al., 2016) and feed-forward connections, has further advanced the field of NMT, both in terms of translation quality and speed of convergence.
In many instances, new architectures are accompanied by a novel set of techniques for performing training and inference that have been carefully optimized to work in concert. This 'bag of tricks' can be crucial to the performance of a proposed architecture, yet it is typically under-documented and left for the enterprising researcher to discover in publicly released code (if any) or through anecdotal evidence. This is not simply a problem for reproducibility; it obscures the central scientific question of how much of the observed gains come from the new architecture and how much can be attributed to the associated training and inference techniques. In some cases, these new techniques may be broadly applicable to other architectures and thus constitute a major, though implicit, contribution of an architecture paper. Clearly, they need to be considered in order to ensure a fair comparison across different model architectures.
In this paper, we therefore take a step back and look at which techniques and methods contribute significantly to the success of recent architectures, namely ConvS2S and Transformer, and explore applying these methods to other architectures, including RNMT models. In doing so, we come up with an enhanced version of RNMT, referred to as RNMT+, that significantly outperforms all individual architectures in our setup. We further introduce new architectures built with different components borrowed from RNMT+, ConvS2S and Transformer. In order to ensure a fair setting for comparison, all architectures were implemented in the same framework, use the same pre-processed data and apply no further post-processing as this may confound bare model performance.
Our contributions are three-fold:
1. In ablation studies, we quantify the effect of several modeling improvements (including multi-head attention and layer normalization) as well as optimization techniques (such as synchronous replica training and label smoothing), which are used in recent architectures. We demonstrate that these techniques are applicable across different model architectures.

2. Applying these techniques to the RNN architecture, we propose the new RNMT+ model, which outperforms all three fundamental architectures on the benchmark WMT'14 En→Fr and En→De tasks.

3. We analyze the properties of each fundamental architecture and devise new hybrid architectures intended to combine their strengths, obtaining further quality improvements.

We quickly note two prior works that provided empirical solutions to the difficulty of training NMT architectures (specifically RNMT). In (Britz et al., 2017) the authors systematically explore which elements of NMT architectures have a significant impact on translation quality. In (Denkowski and Neubig, 2017) the authors recommend three specific techniques for strengthening NMT systems and empirically demonstrated how incorporating those techniques improves the reliability of the experimental results.
Background
In this section, we briefly discuss the commmonly used NMT architectures.
RNN-based NMT Models -RNMT
RNMT models are composed of an encoder RNN and a decoder RNN, coupled with an attention network. The encoder summarizes the input sequence into a set of vectors while the decoder conditions on the encoded input sequence through an attention mechanism, and generates the output sequence one token at a time.
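For illustration only, one decoding step of such an attention-equipped RNN decoder can be sketched as follows; the NumPy formulation, the parameter names, and the placeholder rnn_cell are our own simplifications and do not correspond to the actual GNMT/RNMT+ implementation.

```python
# Illustrative NumPy sketch of one decoder step with additive attention over
# encoder outputs. Shapes, parameter names and the placeholder rnn_cell are
# ours; this is not the GNMT/RNMT+ code.
import numpy as np

def additive_attention(decoder_state, encoder_outputs, W_q, W_k, v):
    # encoder_outputs: [src_len, d], decoder_state: [d]
    scores = np.tanh(encoder_outputs @ W_k + decoder_state @ W_q) @ v   # [src_len]
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                                            # softmax over source positions
    return weights @ encoder_outputs                                    # context vector: [d]

def decode_step(decoder_state, prev_embedding, encoder_outputs, rnn_cell, params):
    context = additive_attention(decoder_state, encoder_outputs,
                                 params["W_q"], params["W_k"], params["v"])
    # The attention context is combined with the previous target embedding and
    # fed to the decoder RNN cell (rnn_cell is a placeholder callable).
    new_state = rnn_cell(np.concatenate([prev_embedding, context]), decoder_state)
    return new_state, context
```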
The most successful RNMT models consist of stacked RNN encoders with one or more bidirectional RNNs, and stacked decoders with unidirectional RNNs. Both encoder and decoder RNNs consist of either LSTM (Hochreiter and Schmidhuber, 1997) or GRU units (Cho et al., 2014), and make extensive use of residual (He et al., 2015) or highway (Srivastava et al., 2015) connections.
In Google-NMT (GNMT) (Wu et al., 2016), the best performing RNMT model on the datasets we consider, the encoder network consists of one bi-directional LSTM layer, followed by 7 uni-directional LSTM layers. The decoder is equipped with a single attention network and 8 uni-directional LSTM layers. Both the encoder and the decoder use residual skip connections between consecutive layers.
In this paper, we adopt GNMT as the starting point for our proposed RNMT+ architecture, following the public NMT codebase 1 .
Convolutional NMT Models -ConvS2S
In the most successful convolutional sequence-to-sequence model (Gehring et al., 2017), both the encoder and decoder are constructed by stacking multiple convolutional layers, where each layer contains 1-dimensional convolutions followed by gated linear units (GLU) (Dauphin et al., 2016). Each decoder layer computes a separate dot-product attention by using the current decoder layer output and the final encoder layer outputs. Positional embeddings are used to provide explicit positional information to the model. Following the practice in (Gehring et al., 2017), we scale the gradients of the encoder layers to stabilize training. We also use residual connections across each convolutional layer and apply weight normalization (Salimans and Kingma, 2016) to speed up convergence. We follow the public ConvS2S codebase 2 in our experiments.
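As a small illustrative sketch (ours, not taken from the ConvS2S codebase), a gated linear unit splits the convolution output into a linear part and a gate:

```python
# Sketch of a gated linear unit (GLU): the convolution produces 2*d channels,
# half of which gate the other half. NumPy is used purely for illustration.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def glu(conv_output):
    # conv_output: [time, 2 * d] -> [time, d]
    a, b = np.split(conv_output, 2, axis=-1)
    return a * sigmoid(b)
```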
Conditional Transformation-based NMT Models -Transformer
The Transformer model (Vaswani et al., 2017) is motivated by two major design choices that aim to address deficiencies in the former two model families:
(1) Unlike RNMT, but similar to the ConvS2S, the Transformer model avoids any sequential dependencies in both the encoder and decoder networks to maximally parallelize training.
(2) To address the limited context problem (limited receptive field) present in ConvS2S, the Transformer model makes pervasive use of self-attention networks (Parikh et al., 2016) so that each position in the current layer has access to information from all other positions in the previous layer. The Transformer model still follows the encoder-decoder paradigm. Encoder transformer layers are built with two sub-modules: (1) a self-attention network and (2) a feed-forward network. Decoder transformer layers have an additional cross-attention layer sandwiched between the self-attention and feed-forward layers to attend to the encoder outputs.
There are two details which we found very important to the model's performance: (1) Each sub-layer in the transformer (i.e. self-attention, cross-attention, and the feed-forward sub-layer) follows a strict computation sequence: normalize → transform → dropout → residual-add.
(2) In addition to per-layer normalization, the final encoder output is again normalized to prevent a blow up after consecutive residual additions.
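A minimal sketch of this sub-layer computation order is given below; the simple layer_norm and dropout implementations are illustrative stand-ins, not the Tensor2Tensor code.

```python
# Sketch of the per-sub-layer computation order described above:
# normalize -> transform -> dropout -> residual-add.
import numpy as np

def layer_norm(x, eps=1e-6):
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def sublayer(x, transform, dropout_rate=0.1, train=True):
    y = transform(layer_norm(x))                      # normalize -> transform
    if train:
        mask = (np.random.rand(*y.shape) >= dropout_rate)
        y = y * mask / (1.0 - dropout_rate)           # dropout (inverted scaling)
    return x + y                                      # residual-add

# Per detail (2) above, the final encoder output would be layer-normalized once
# more after the last such sub-layer.
```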
In this paper, we follow the latest version of the Transformer model in the public Tensor2Tensor 3 codebase.
2 https://github.com/facebookresearch/fairseq-py
A Theory-Based Characterization of NMT Architectures
From a theoretical point of view, RNNs belong to the most expressive members of the neural network family (Siegelmann and Sontag, 1995) 4 . Possessing an infinite Markovian structure (and thus infinite receptive fields) equips them to model sequential data (Elman, 1990), especially natural language (Grefenstette et al., 2015) effectively. In practice, RNNs are notoriously hard to train (Bengio et al., 1994), confirming the well known dilemma of trainability versus expressivity. Convolutional layers are adept at capturing local context and local correlations by design. A fixed and narrow receptive field for each convolutional layer limits their capacity when the architecture is shallow. In practice, this weakness is mitigated by stacking more convolutional layers (e.g. 15 layers as in the ConvS2S model), which makes the model harder to train and demands meticulous initialization schemes and carefully designed regularization techniques.
The transformer network is capable of approximating arbitrary squashing functions (Hornik et al., 1989), and can be considered a strong feature extractor with extended receptive fields capable of linking salient features from the entire sequence. On the other hand, lacking a memory component (as present in the RNN models) prevents the network from modeling a state space, reducing its theoretical strength as a sequence model, thus it requires additional positional information (e.g. sinusoidal positional encodings).
The above theoretical characterizations will drive our explorations in the following sections.
Experiment Setup
We train our models on the standard WMT'14 En→Fr and En→De datasets that comprise 36.3M and 4.5M sentence pairs, respectively. Each sentence was encoded into a sequence of sub-word units obtained by first tokenizing the sentence with the Moses tokenizer, then splitting tokens into subword units (also known as "wordpieces") using the approach described in (Schuster and Nakajima, 2012).

Figure 1: On the left side, the encoder network has 6 bidirectional LSTM layers. At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated. On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention. The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.
We use a shared vocabulary of 32K sub-word units for each source-target language pair. No further manual or rule-based post processing of the output was performed beyond combining the subword units to generate the targets. We report all our results on newstest 2014, which serves as the test set. A combination of newstest 2012 and newstest 2013 is used for validation.
To evaluate the models, we compute the BLEU metric on tokenized, true-case output. 5 For each training run, we evaluate the model every 30 minutes on the dev set. Once the model converges, we determine the best window based on the average dev-set BLEU score over 21 consecutive evaluations. We report the mean test score and standard deviation over the selected window. This allows us to compare model architectures based on their mean performance after convergence rather than individual checkpoint evaluations, as the latter can be quite noisy for some models.
To enable a fair comparison of architectures, we use the same pre-processing and evaluation methodology for all our experiments. We refrain from using checkpoint averaging (exponential moving averages of parameters) (Junczys-Dowmunt et al., 2016) or checkpoint ensembles (Jean et al., 2015;Chen et al., 2017) to focus on evaluating the performance of individual models.
RNMT+
Model Architecture of RNMT+
The newly proposed RNMT+ model architecture is shown in Figure 1. Here we highlight the key architectural choices that are different between the RNMT+ model and the GNMT model. There are 6 bidirectional LSTM layers in the encoder instead of 1 bidirectional LSTM layer followed by 7 unidirectional layers as in GNMT. For each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated before being fed into the next layer. The decoder network consists of 8 unidirectional LSTM layers similar to the GNMT model. Residual connections are added to the third layer and above for both the encoder and decoder. Inspired by the Transformer model, per-gate layer normalization (Ba et al., 2016) is applied within each LSTM cell. Our empirical results show that layer normalization greatly stabilizes training. No non-linearity is applied to the LSTM output. A projection layer is added to the encoder final output. 6 Multi-head additive attention is used instead of the single-head attention in the GNMT model. Similar to GNMT, we use the bottom decoder layer and the final encoder layer output after projection for obtaining the recurrent attention context. In addition to feeding the attention context to all decoder LSTM layers, we also feed it to the softmax. This is important for both the quality of the models with multi-head attention and the stability of the training process.
Since the encoder network in RNMT+ consists solely of bi-directional LSTM layers, model parallelism is not used during training. We compensate for the resulting longer per-step time with increased data parallelism (more model replicas), so that the overall time to reach convergence of the RNMT+ model is still comparable to that of GNMT.
We apply the following regularization techniques during training.
• Dropout: We apply dropout to both embedding layers and each LSTM layer output before it is added to the next layer's input. Attention dropout is also applied.
• Label Smoothing: We use uniform label smoothing with an uncertainty=0.1 (Szegedy et al., 2015); see the sketch after this list for a concrete formulation. Label smoothing was shown to have a positive impact on both Transformer and RNMT+ models, especially in the case of RNMT+ with multi-head attention. Similar to the observations in (Chorowski and Jaitly, 2016), we found it beneficial to use a larger beam size (e.g. 16, 20, etc.) during decoding when models are trained with label smoothing.
• Weight Decay: For the WMT'14 En→De task, we apply L2 regularization to the weights with λ = 10 −5 . Weight decay is only applied to the En→De task as the corpus is smaller and thus more regularization is required.
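For illustration, a minimal sketch of uniform label smoothing with uncertainty 0.1 is given below; this is one common formulation of the smoothed cross-entropy, written by us, and not the exact training code:

```python
import torch
import torch.nn.functional as F

def label_smoothed_nll(logits, targets, uncertainty=0.1):
    """Cross-entropy with uniform label smoothing.
    logits: (batch, vocab), targets: (batch,) of gold token ids."""
    vocab = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    # Smoothed target distribution: (1 - u) on the gold token and u spread
    # uniformly over the remaining vocabulary entries.
    smooth = torch.full_like(log_probs, uncertainty / (vocab - 1))
    smooth.scatter_(1, targets.unsqueeze(1), 1.0 - uncertainty)
    return -(smooth * log_probs).sum(dim=-1).mean()
```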
We use the Adam optimizer (Kingma and Ba, 2014) with β1 = 0.9, β2 = 0.999, ε = 10^-6 and vary the learning rate according to this schedule:
lr = 10^-4 · min( 1 + t·(n−1)/(n·p),  n,  n·(2n)^((s−n·t)/(e−s)) )    (1)
Here, t is the current step, n is the number of concurrent model replicas used in training, p is the number of warmup steps, s is the start step of the exponential decay, and e is the end step of the decay. Specifically, we first increase the learning rate linearly during the number of warmup steps, keep it a constant until the decay start step s, then exponentially decay until the decay end step e, and keep it at 5·10^-5 after the decay ends. This learning rate schedule is motivated by a similar schedule that was successfully applied in training the Resnet-50 model with a very large batch size (Goyal et al., 2017).
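A direct transcription of Eq. (1) into Python is shown below as a sketch; the variable names follow the equation, and the constant floor of 5·10^-5 after the decay ends is applied explicitly:

```python
def rnmt_plus_lr(t, n, p, s, e):
    """Learning rate schedule of Eq. (1).
    t: current step, n: number of concurrent model replicas,
    p: warmup steps, s: decay start step, e: decay end step."""
    warmup = 1.0 + t * (n - 1) / (n * p)              # linear warmup term
    decay = n * (2.0 * n) ** ((s - n * t) / (e - s))  # exponential decay term
    lr = 1e-4 * min(warmup, n, decay)
    return max(lr, 5e-5)  # keep the rate at 5e-5 once the decay has ended
```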
In contrast to the asynchronous training used for GNMT (Dean et al., 2012), we train RNMT+ models with synchronous training (Chen et al., 2016). Our empirical results suggest that when hyper-parameters are tuned properly, synchronous training often leads to improved convergence speed and superior model quality.
To further stabilize training, we also use adaptive gradient clipping. We discard a training step completely if an anomaly in the gradient norm value is detected, which is usually an indication of an imminent gradient explosion. More specifically, we keep track of a moving average and a moving standard deviation of the log of the gradient norm values, and we abort a step if the norm of the gradient exceeds four standard deviations of the moving average.
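The anomaly-detection rule can be sketched as follows. This is our own illustration: the moving-average decay factor and the choice to update the statistics only on healthy steps are assumptions, since they are not specified above.

```python
import math

class GradNormAnomalyDetector:
    """Flags training steps whose gradient norm is anomalously large,
    based on a moving average / std of the log gradient norm."""
    def __init__(self, decay=0.99, num_std=4.0):
        self.decay, self.num_std = decay, num_std
        self.mean, self.var = None, 0.0   # running stats of log grad-norm

    def should_skip(self, grad_norm):
        log_norm = math.log(grad_norm + 1e-12)
        if self.mean is None:             # first observation
            self.mean = log_norm
            return False
        std = math.sqrt(self.var)
        if std > 0 and log_norm > self.mean + self.num_std * std:
            return True                   # likely imminent gradient explosion
        # Update the moving statistics on healthy steps only.
        diff = log_norm - self.mean
        self.mean += (1.0 - self.decay) * diff
        self.var = self.decay * (self.var + (1.0 - self.decay) * diff * diff)
        return False
```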
Model Analysis and Comparison
In this section, we compare the results of RNMT+ with ConvS2S and Transformer.
All models were trained with synchronous training. RNMT+ and ConvS2S were trained with 32 NVIDIA P100 GPUs while the Transformer Base and Big models were trained using 16 GPUs.
For RNMT+, we use sentence-level cross-entropy loss. Each training batch contained 4096 sentence pairs (4096 source sequences and 4096 target sequences). For ConvS2S and Transformer models, we use token-level cross-entropy loss. Each training batch contained 65536 source tokens and 65536 target tokens. For the GNMT baselines on both tasks, we cite the largest BLEU score reported in (Wu et al., 2016) without reinforcement learning.
Table 1 shows our results on the WMT'14 En→Fr task. Both the Transformer Big model and RNMT+ outperform GNMT and ConvS2S by about 2 BLEU points. RNMT+ is slightly better than the Transformer Big model in terms of its mean BLEU score. RNMT+ also yields a much lower standard deviation, and hence we observed much less fluctuation in the training curve. It takes approximately 3 days for the Transformer Base model to converge, while both RNMT+ and the Transformer Big model require about 5 days to converge. Although the batching schemes are quite different between the Transformer Big and the RNMT+ model, they have processed about the same amount of training samples upon convergence.
Table 2 shows our results on the WMT'14 En→De task. The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model improves by over 3 BLEU points. RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49. In this case, RNMT+ converged slightly faster than the Transformer Big model and maintained much more stable performance after convergence with a very small standard deviation, which is similar to what we observed on the En→Fr task.
Table 3 summarizes training performance and model statistics. The Transformer Base model is the fastest model in terms of training speed. RNMT+ is slower to train than the Transformer Big model.
Ablation Experiments
In this section, we evaluate the importance of four main techniques for both the RNMT+ and the Transformer Big models. We believe that these techniques are universally applicable across different model architectures, and should always be employed by NMT practitioners for best performance. We take our best RNMT+ and Transformer Big models and remove each one of these techniques independently. By doing this we hope to learn two things about each technique: (1) How much does it affect the model performance? (2) How useful is it for stable training of other techniques and hence the final model?
From Table 4 we draw the following conclusions about the four techniques:
• Multi-head Attention: Multi-head attention contributes significantly to the quality of both models, resulting in an average increase of 0.6 BLEU for RNMT+ and 0.9 BLEU for Transformer Big models.
• Layer Normalization: Layer normalization is most critical to stabilize the training process of either model, especially when multi-head attention is used. Removing layer normalization results in unstable training runs for both models.
Since by design, we remove one technique at a time in our ablation experiments, we were unable to quantify how much layer normalization helped in either case. To be able to successfully train a model without layer normalization, we would have to adjust other parts of the model and retune its hyper-parameters.
Hybrid NMT Models
In this section, we explore hybrid architectures that shed some light on the salient behavior of each model family. These hybrid models outperform the individual architectures on both benchmark datasets and provide a better understanding of the capabilities and limitations of each model family.
Assessing Individual Encoders and Decoders
In an encoder-decoder architecture, a natural assumption is that the role of an encoder is to build feature representations that can best encode the meaning of the source sequence, while a decoder should be able to process and interpret the representations from the encoder and, at the same time, track the current target history. Decoding is inherently auto-regressive, and keeping track of the state information should therefore be intuitively beneficial for conditional generation.
We set out to study which family of encoders is more suitable to extract rich representations from a given input sequence, and which family of decoders can make the best of such rich representations. We start by combining the encoder and decoder from different model families. Since it takes a significant amount of time for a ConvS2S model to converge, and because the final translation quality was not on par with the other models, we focus on two types of hybrids only: Transformer encoder with RNMT+ decoder and RNMT+ encoder with Transformer decoder.
Table 5: Results for encoder-decoder hybrids.
From Table 5, it is clear that the Transformer encoder is better at encoding or feature extraction than the RNMT+ encoder, whereas RNMT+ is better at decoding or conditional language modeling, confirming our intuition that a stateful decoder is beneficial for conditional language generation.
Assessing Encoder Combinations
Next, we explore how the features extracted by an encoder can be further enhanced by incorporating additional information. Specifically, we investigate the combination of transformer layers with RNMT+ layers in the same encoder block to build even richer feature representations. We exclusively use RNMT+ decoders in the following architectures since stateful decoders show better performance according to Table 5.
We study two mixing schemes in the encoder (see Fig. 2):
(1) Cascaded Encoder: The cascaded encoder aims at combining the representational power of RNNs and self-attention. The idea is to enrich a set of stateful representations by cascading a feature extractor with a focus on vertical mapping, similar to (Pascanu et al., 2013;Devlin, 2017). Our best performing cascaded encoder involves fine tuning transformer layers stacked on top of a pre-trained frozen RNMT+ encoder. Using a pre-trained encoder avoids optimization difficulties while significantly enhancing encoder capacity. As shown in Table 6, the cascaded encoder improves over the Transformer encoder by more than 0.5 BLEU points on the WMT'14 En→Fr task. This suggests that the Transformer encoder is able to extract richer representations if the input is augmented with sequential context.
(2) Multi-Column Encoder: As illustrated in Fig. 2b, a multi-column encoder merges the outputs of several independent encoders into a single combined representation. Unlike a cascaded encoder, the multi-column encoder enables us to investigate whether an RNMT+ decoder can distinguish information received from two different channels and benefit from its combination. A crucial operation in a multi-column encoder is therefore how different sources of information are merged into a unified representation. Our best multi-column encoder performs a simple concatenation of individual column outputs.
The model details and hyperparameters of the above two encoders are described in Appendix A.5 and A.6. As shown in Table 6, the multi-column encoder followed by an RNMT+ decoder achieves better results than the Transformer and the RNMT model on both WMT'14 benchmark tasks.
Conclusion
In this work we explored the efficacy of several architectural and training techniques proposed in recent studies on seq2seq models for NMT. We demonstrated that many of these techniques are broadly applicable to multiple model architectures. Applying these new techniques to RNMT models yields RNMT+, an enhanced RNMT model that significantly outperforms the three fundamental architectures on WMT'14 En→Fr and En→De tasks. We further presented several hybrid models developed by combining encoders and decoders from the Transformer and RNMT+ models, and empirically demonstrated the superiority of the Transformer encoder and the RNMT+ decoder in comparison with their counterparts. We then enhanced the encoder architecture by horizontally and vertically mixing components borrowed from these architectures, leading to hybrid architectures that obtain further improvements over RNMT+. We hope that our work will motivate NMT researchers to further investigate generally applicable training and optimization techniques, and that our exploration of hybrid architectures will open paths for new architecture search efforts for NMT.
Our focus on a standard single-language-pair translation task leaves important open questions to be answered: How do our new architectures compare in multilingual settings, i.e., modeling an interlingua? Which architecture is more efficient and powerful in processing finer grained inputs and outputs, e.g., characters or bytes? How transferable are the representations learned by the different architectures to other tasks? And what are the characteristic errors that each architecture makes, e.g., linguistic plausibility?
A Supplemental Material
A.1 ConvS2S
For the WMT'14 En→De task, both the encoder and decoder have 15 layers, with 512 hidden units in the first ten layers, 768 units in the subsequent three layers and 2048 units in the final two layers. The first 13 layers use kernel width 3 and the final two layers use kernel width 1. For the WMT'14 En→Fr task, both the encoder and decoder have 14 layers, with 512 hidden units in the first five layers, 768 units in the subsequent four layers, 1024 units in the next three layers, 2048 units and 4096 units in the final two layers. The first 12 layers use kernel width 3 and the final two layers use kernel width 1. We train the ConvS2S models with synchronous training using 32 GPUs.
A.2 Transformer
Both the encoder and the decoder have 6 Transformer layers. Transformer base model has model dimension 512, hidden dimension 2048 and 8 attention heads. The Transformer Big model uses model dimension 1024, hidden dimension 8192 and 16 attention heads. We group the dropout in Transformer models into four types: input dropout -dropout applied to the sum of token embeddings and position encodings, residual dropout -dropout applied to the output of each sublayer before added to the sublayer input, relu dropout -dropout applied to the inner layer output after ReLU activation in each feed-forward sub-layer, attention dropout -dropout applied to attention weight in each attention sub-layer. All Transformer models use the following learning rate schedule:
lr = (r_0 / √d_model) · min( (t+1)/(p·√p),  1/√(t+1) )    (2)
where t is the current step, p is the number of warmup steps, d_model is the model dimension, and r_0 is a constant to adjust the magnitude of the learning rate. On WMT'14 En→De, the Transformer Base model employs all four types of dropout with dropout probs = 0.1. We use r_0 = 2.0 and p = 8000 in the learning rate schedule. For the Transformer Big model, only residual dropout and input dropout are applied, both with dropout probs = 0.3. r_0 = 3.0 and p = 40000 are used in the learning rate schedule.
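Eq. (2) translates directly into the following sketch (variable names are ours and the snippet is illustrative only):

```python
import math

def transformer_lr(t, p, d_model, r0):
    """Learning rate of Eq. (2): linear warmup for p steps, then
    inverse-square-root decay, scaled by r0 / sqrt(d_model)."""
    return (r0 / math.sqrt(d_model)) * min((t + 1) / (p * math.sqrt(p)),
                                           1.0 / math.sqrt(t + 1))
```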
On WMT'14 En→Fr, the Base model applies only residual dropout and input dropout, each with dropout probs = 0.1. The learning rate schedule uses r_0 = 1.0 and p = 4000. For the Big model, we apply all four types of dropout, each with dropout probs = 0.1. The learning rate schedule uses r_0 = 3.0 and p = 40000. We train both the Transformer Base model and Big model with synchronous training using 16 GPUs.
A.3 RNMT+
RNMT+ has 1024 LSTM nodes in all encoder and decoder layers. The input embedding dimension is 1024 as well. The encoder final projection layer projects the last bidirectional layer output from dimension 2048 to 1024. We use 4 attention heads in the multi-head additive attention. Label smoothing is applied with an uncertainty = 0.1. Figure 3 illustrates our learning rate schedule defined in Eq. 1.
On WMT'14 En→De, we use p = 500, s = 600000, e = 1200000 for the learning rate schedule and apply all dropout types with dropout probs = 0.3. We apply L2 regularization to the weights with λ = 10 −5 . On WMT'14 En→Fr, we use p = 500, s = 1200000, e = 3600000, dropout probs = 0.2. No weight decay is applied. RNMT+ models are trained with synchronous training using 32 GPUs.
A.4 Encoder-Decoder Hybrids
For both encoder-decoder hybrids, i.e., Transformer Big encoder with RNMT+ decoder and RNMT+ encoder with Transformer Big decoder, we use the exactly same model hyperparameters as in the Transformer Big and RNMT+ models described in above sections.
We use the Transformer learning rate schedule (Eq. 2) for both hybrids. For the WMT'14 En→Fr task, we use r_0 = 4.0 and p = 50000 for the hybrid with Transformer encoder and RNMT+ decoder, and use r_0 = 3.0 and p = 40000 for the hybrid with RNMT+ encoder and Transformer decoder. Both hybrid models are trained with synchronous training using 32 GPUs.
A.5 Cascaded Encoder Hybrid
In this hybrid we stack a transformer encoder on top of the RNMT+ encoder. In our experiments we used a pre-trained RNMT+ encoder, including the projection layer, exactly as described in section 4. The outputs of the RNMT+ encoder are layer normalized and fed into a transformer encoder. This structure is illustrated in Figure 2a. The transformer encoder is identical to the one described in subsection 2.3 except for the different number of layers. Our best setup uses 4 Transformer layers stacked on top of a pre-trained RNMT+ encoder with 6 layers. To speed up convergence, we froze gradient updates in the pre-trained RNMT+ encoder. This enables us to increase the encoder capacity significantly, while avoiding optimization issues encountered in non-frozen variants of the hybrid. As an additional benefit, this enables us to train the model on P100s without the need for model parallelism.
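The overall layout can be sketched as follows. This is an illustrative PyTorch snippet assuming a generic pre-trained RNMT+ encoder module; the Transformer layers are represented here with PyTorch's generic TransformerEncoder as a stand-in, and the hyperparameter values are placeholders.

```python
import torch.nn as nn

class CascadedEncoder(nn.Module):
    """Fine-tunes a small Transformer stack on top of a frozen,
    pre-trained RNMT+ encoder whose outputs are layer normalized."""
    def __init__(self, pretrained_rnmt_encoder, d_model=1024, num_layers=4):
        super().__init__()
        self.rnmt_encoder = pretrained_rnmt_encoder
        for param in self.rnmt_encoder.parameters():
            param.requires_grad = False          # freeze gradient updates
        self.norm = nn.LayerNorm(d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=16,
                                           dim_feedforward=8192,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, src_tokens):
        states = self.rnmt_encoder(src_tokens)   # (batch, time, d_model)
        # No sinusoidal positional embeddings: the RNN already encodes order.
        return self.transformer(self.norm(states))
```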
Note that this specific layout allows us to drop hand-crafted sinusoidal positional embeddings (since position information is already captured by the underlying RNNs).
We use the Transformer learning rate schedule (Eq. 2) for this hybrid with r 0 = 2.0 and p = 16000 and train with synchronous training using 32 GPUs. We apply the same dropouts used for the transformer model to the transformer layers in the encoder, and apply L2 weight decay with λ = 10 −5 to the decoder layers.
A.6 Multi-Column Encoder Hybrid
We use a simple concatenation as the merger operator without fine-tuning any other model hyperparameters. After concatenation, the combined representation is projected down to the decoder dimension with a layer-normalized affine transformation. Although in this paper we only use two columns, there is no practical restriction on the total number of columns that this hybrid can combine. By combining multiple encoder representations, the network may capture different factors of variations in the input sequence.
Similar to the Cascaded-RNMT+ hybrid, we use pre-trained encoders that are borrowed from an RNMT+ model (we used a pretrained RNMT+ encoder as the first column) and an Encoder-Decoder hybrid model with Transformer encoder and RNMT+ decoder (we used the pretrained Transformer encoder). The multi-column encoder with RNMT+ decoder is trained using 16 GPUs in a synchronous training setup. We stick to the simple concatenation operation as the merger operator; after concatenation, the combined representation is projected down to the decoder dimension with a simple layer-normalized affine transformation. One additional note: we observed that, for the sake of stability and trainability, each column output should first be mapped to a space where the representation ranges are compatible. For example, the RNMT+ encoder output has no limitation on its range, but the Transformer encoder output range is constrained by the final layer normalization applied to the entire Transformer encoder body. Therefore, we also applied layer normalization to the RNMT+ encoder outputs to match the ranges of the individual encoders, as sketched below.
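A minimal sketch of the merger operator (our own module and variable names; a simplification that normalizes every column) is given below:

```python
import torch
import torch.nn as nn

class MultiColumnMerger(nn.Module):
    """Concatenates the outputs of several encoder columns and projects the
    result down to the decoder dimension with a layer-normalized affine map."""
    def __init__(self, column_dims, decoder_dim):
        super().__init__()
        # Normalize each column so that their output ranges are compatible.
        self.column_norms = nn.ModuleList([nn.LayerNorm(d) for d in column_dims])
        self.proj = nn.Linear(sum(column_dims), decoder_dim)
        self.out_norm = nn.LayerNorm(decoder_dim)

    def forward(self, column_outputs):   # list of (batch, time, dim_i) tensors
        normed = [norm(x) for norm, x in zip(self.column_norms, column_outputs)]
        return self.out_norm(self.proj(torch.cat(normed, dim=-1)))
```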
On WMT'14 En→De, we use p = 50, s = 300000, e = 900000 for the learning rate schedule and apply all dropout types with dropout probs = 0.3. We apply L2 regularization to the weights with λ = 10^-5. On WMT'14 En→Fr, we use the Transformer learning rate schedule (Eq. 2) with r_0 = 1.0 and p = 10000. No weight decay or dropout is applied.
Figure 1: Model architecture of RNMT+. On the left side, the encoder network has 6 bidirectional LSTM layers. At the end of each bidirectional layer, the outputs of the forward layer and the backward layer are concatenated. On the right side, the decoder network has 8 unidirectional LSTM layers, with the first layer used for obtaining the attention context vector through multi-head additive attention. The attention context vector is then fed directly into the rest of the decoder layers as well as the softmax layer.
Figure 2: Vertical and horizontal mixing of Transformer and RNMT+ components in an encoder.
Figure 3: RNMT+ learning-rate schedule.
Model       | Test BLEU     | Epochs | Training Time
GNMT        | 38.95         | -      | -
ConvS2S 7   | 39.49 ± 0.11  | 62.2   | 438h
Trans. Base | 39.43 ± 0.17  | 20.7   | 90h
Trans. Big 8| 40.73 ± 0.19  | 8.3    | 120h
RNMT+       | 41.00 ± 0.05  | 8.5    | 120h

Table 1: Results on WMT'14 En→Fr. The numbers before and after '±' are the mean and standard deviation of test BLEU score over an evaluation window.

7 Since the ConvS2S model convergence is very slow we did not explore further tuning on En→Fr, and validated our implementation on En→De.
8 The BLEU scores for the Transformer model are slightly lower than those reported in (Vaswani et al., 2017) due to four differences: 1) We report the mean test BLEU score using the strategy described in section 3. 2) We did not perform checkpoint averaging since it would be inconsistent with our evaluation for other models. 3) We avoided any manual post-processing, like unicode normalization using Moses replace-unicode-punctuation.perl or output tokenization using Moses tokenizer.perl, to rule out its effect on the evaluation. We observed a significant BLEU increase (about 0.6) on applying these post-processing techniques. 4) In (Vaswani et al., 2017), reported BLEU scores are calculated using mteval-v13a.pl from Moses, which re-tokenizes its input.
Table 3: Performance comparison. Examples/s are normalized by the number of GPUs used in the training job. FLOPs are computed assuming that source and target sequence length are both 50.
Table 6: Results for hybrids with cascaded encoder and multi-column encoder.
https://github.com/tensorflow/nmt
2 https://github.com/facebookresearch/fairseq-py
3 https://github.com/tensorflow/tensor2tensor
4 Assuming that data complexity is satisfied.
5 This procedure is used in the literature to which we compare (Gehring et al., 2017; Wu et al., 2016).
6 Additional projection aims to reduce the dimensionality of the encoder output representations to match the decoder stack dimension.
Acknowledgments
We would like to thank the entire Google Brain Team and Google Translate Team for their foundational contributions to this project. We would also like to thank Noam Shazeer, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, and the entire Tensor2Tensor development team for their useful inputs and discussions.
References
Lei Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. CoRR abs/1607.06450. http://arxiv.org/abs/1607.06450.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. http://arxiv.org/abs/1409.0473.
Y. Bengio, P. Simard, and P. Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. Trans. Neur. Netw. 5(2):157-166. https://doi.org/10.1109/72.279181.
Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc Le. 2017. Massive exploration of neural machine translation architectures. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copenhagen, Denmark, pages 1442-1451. https://www.aclweb.org/anthology/D17-1151.
Hugh Chen, Scott Lundberg, and Su-In Lee. 2017. Checkpoint ensembles: Ensemble methods from a single training process. CoRR abs/1710.03282. http://arxiv.org/abs/1710.03282.
Jianmin Chen, Rajat Monga, Samy Bengio, and Rafal Józefowicz. 2016. Revisiting distributed synchronous SGD. CoRR abs/1604.00981.
Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing. http://arxiv.org/abs/1406.1078.
Jan Chorowski and Navdeep Jaitly. 2016. Towards better decoding and language model integration in sequence to sequence models. CoRR abs/1612.02695. http://arxiv.org/abs/1612.02695.
Josep Maria Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Byeongil Ko, Catherine Kobus, Jean Lorieux, Leidiana Martins, Dang-Chuan Nguyen, Alexandra Priori, Thomas Riccardi, Natalia Segal, Christophe Servan, Cyril Tiquet, Bo Wang, Jin Yang, Dakun Zhang, Jing Zhou, and Peter Zoldan. 2016. Systran's pure neural machine translation systems. CoRR abs/1610.05540. http://arxiv.org/abs/1610.05540.
Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2016. Language modeling with gated convolutional networks. CoRR abs/1612.08083. http://arxiv.org/abs/1612.08083.
Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, MarcAurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. 2012. Large scale distributed deep networks. In NIPS.
Michael Denkowski and Graham Neubig. 2017. Stronger baselines for trustable results in neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation. Association for Computational Linguistics, Vancouver, pages 18-27. http://www.aclweb.org/anthology/W17-
Jacob Devlin. 2017. Sharp models on dull hardware: Fast and accurate neural machine translation decoding on the cpu. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 2820-2825. http://aclweb.org/anthology/D17-1300.
Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science 14(2):179-211. https://doi.org/10.1016/0364-0213(90)90002-E.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. CoRR abs/1705.03122. http://arxiv.org/abs/1705.03122.
Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. 2017. Accurate, large minibatch SGD: training imagenet in 1 hour. CoRR abs/1706.02677. http://arxiv.org/abs/1706.02677.
Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. 2015. Learning to transduce with unbounded memory. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2. MIT Press, Cambridge, MA, USA, NIPS'15, pages 1828-1836.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. CoRR abs/1512.03385.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735-1780.
Kurt Hornik, Maxwell Stinchcombe, and Halbert White. 1989. Multilayer feedforward networks are universal approximators. Neural Networks 2(5):359-366. https://doi.org/10.1016/0893-6080(89)90020-8.
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing.
Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, et al. 2017. In-datacenter performance analysis of a tensor processing unit. CoRR abs/1704.04760. http://arxiv.org/abs/1704.04760.
Marcin Junczys-Dowmunt, Tomasz Dwojak, and Rico Sennrich. 2016. The AMU-UEDIN submission to the WMT16 news translation task: Attention-based NMT models as feature functions in phrase-based SMT. arXiv preprint arXiv:1605.04809.
Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Conference on Empirical Methods in Natural Language Processing.
Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aäron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. Neural machine translation in linear time. CoRR abs/1610.10099.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR abs/1412.6980. http://arxiv.org/abs/1412.6980.
Yann LeCun and Yoshua Bengio. 1998. Convolutional networks for images, speech, and time series. In The Handbook of Brain Theory and Neural Networks. MIT Press, Cambridge, MA, USA, pages 255-258.
Ankur P. Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In EMNLP.
Razvan Pascanu, Çaglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. 2013. How to construct deep recurrent neural networks. CoRR abs/1312.6026. http://arxiv.org/abs/1312.6026.
Tim Salimans and Diederik P. Kingma. 2016. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. CoRR abs/1602.07868. http://arxiv.org/abs/1602.07868.
Mike Schuster and Kaisuke Nakajima. 2012. Japanese and Korean voice search. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing.
H. T. Siegelmann and E. D. Sontag. 1995. On the computational power of neural nets. Journal of Computer and System Sciences 50(1):132-150. https://doi.org/10.1006/jcss.1995.1013.
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Highway networks. CoRR abs/1505.00387. http://arxiv.org/abs/1505.00387.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2015. Rethinking the inception architecture for computer vision. CoRR abs/1512.00567.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR abs/1706.03762.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR abs/1609.08144. http://arxiv.org/abs/1609.08144.
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep recurrent models with fast-forward connections for neural machine translation. CoRR abs/1606.04199. http://arxiv.org/abs/1606.04199.
| [
"https://github.com/facebookresearch/fairseq-py",
"https://github.com/tensorflow/nmt",
"https://github.com/tensorflow/tensor2tensor"
] |
[
"Mix-and-Match: Scalable Dialog Response Retrieval using Gaussian Mixture Embeddings",
"Mix-and-Match: Scalable Dialog Response Retrieval using Gaussian Mixture Embeddings"
] | [
"Gaurav Pandey gpandey1@in.ibm.com ",
"Sachindra Joshi ",
"\nIBM Research\nIndia Danish Contractor\n",
"\nIBM Research\nIndia\n",
"\nIBM Research\nIndia\n"
] | [
"IBM Research\nIndia Danish Contractor",
"IBM Research\nIndia",
"IBM Research\nIndia"
] | [] | Embedding-based approaches for dialog response retrieval embed the context-response pairs as points in the embedding space. These approaches are scalable, but fail to account for the complex, manto-many relationships that exist between context-response pairs. On the other end of the spectrum, there are approaches that feed the context-response pairs jointly through multiple layers of neural networks. These approaches can model the complex relationships between context-response pairs, but fail to scale when the set of responses is moderately large (>100). In this paper, we combine the best of both worlds by proposing a scalable model that can learn complex relationships between context-response pairs. Specifically, the model maps the contexts as well as responses to probability distributions over the embedding space. We train the models by optimizing the Kullback-Leibler divergence between the distributions induced by context-response pairs in the training data. We show that the resultant model achieves better performance as compared to other embedding-based approaches on publicly available conversation data. | 10.48550/arxiv.2204.02710 | [
"https://arxiv.org/pdf/2204.02710v1.pdf"
] | 247,996,476 | 2204.02710 | 8708ffb9e983fad5455761db107104224cedb0af |
Mix-and-Match: Scalable Dialog Response Retrieval using Gaussian Mixture Embeddings
Gaurav Pandey gpandey1@in.ibm.com
Sachindra Joshi
IBM Research
India Danish Contractor
IBM Research
India
IBM Research
India
Mix-and-Match: Scalable Dialog Response Retrieval using Gaussian Mixture Embeddings
ACM Reference Format: Gaurav Pandey, Danish Contractor, and Sachindra Joshi. 2022. Mix-and-Match: Scalable Dialog Response Retrieval using Gaussian Mixture Embeddings. In Proceedings of ACM Conference (Conference'17). ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
Keywords: conversation modelling, dialog modelling, response retrieval
Embedding-based approaches for dialog response retrieval embed the context-response pairs as points in the embedding space. These approaches are scalable, but fail to account for the complex, many-to-many relationships that exist between context-response pairs. On the other end of the spectrum, there are approaches that feed the context-response pairs jointly through multiple layers of neural networks. These approaches can model the complex relationships between context-response pairs, but fail to scale when the set of responses is moderately large (>100). In this paper, we combine the best of both worlds by proposing a scalable model that can learn complex relationships between context-response pairs. Specifically, the model maps the contexts as well as responses to probability distributions over the embedding space. We train the models by optimizing the Kullback-Leibler divergence between the distributions induced by context-response pairs in the training data. We show that the resultant model achieves better performance as compared to other embedding-based approaches on publicly available conversation data.
INTRODUCTION
Since the advent of deep learning, several neural network-based approaches have been proposed for predicting responses given a dialog context (the set of utterances so far). These models can broadly be classified into generative and retrieval-based. Generative response predictors feed the dialog context to an encoder (flat [43,47,53] or hierarchical [40]) and the resultant embeddings are fed to a decoder to generate the response token-by-token. When these models were deployed on real-world conversations, it was found that the generated responses were often uninformative and lacked diversity [23]. To incorporate diversity among the responses, variants of this standard architecture that use latent variables have also been explored [6,14,33,41].
Figure 1: An example of a context with multiple valid responses. Note that each response contains different information and hence must have embeddings that are far away from each other. However, embedding-based approaches for retrieval attempt to bring all such responses close to the context and hence, close to each other.
In contrast to generative models, retrieval-based response predictors [4,17,48,51] retrieve the response from a predefined set of responses given the dialog context. Such methods find application in a variety of real-world dialog modeling and collaborative human-agent tasks. For instance, dialog modeling frameworks typically utilize the notion of "intents" and "dialog flows" which aim to model the "goal" of a user utterance [1]. To make the task of building and identifying such intents easier, some tools mine conversation logs to identify responses that are often associated with dialog contexts (intents) [12] and then surface these responses for review by humans. These reviewed responses are then modeled into the dialog flow for different intents. Another instance of human-agent collaboration powered by system-returned responses is in 'Agent Assist' environments where a system makes recommendations to a customer-support or contact-center agent in real-time [13]. Responses from retrieval-based systems score higher on fluency and informativeness and have also been used to power real-world chat bots [45].
The success of a good response retrieval system lies in learning a good similarity function between the context and the response. In addition, it also needs to be scalable so that it can retrieve responses from a universe of tens of thousands of responses efficiently. These two requirements present a trade-off between the richness of scoring and scalability, as discussed below.
Trade-off between Scoring and Scalability: Typically, in neural dialog retrieval models, the contexts and the responses in the conversation logs are embedded as points in the embedding space [25]. Approaches such as contrastive learning [5] are then used to ensure that the context is closer to the ground-truth response than the other responses. Figure 1 shows a dialog context followed by multiple responses. Despite the apparent diversity among responses, all the responses are valid for the dialog context. A typical embedding-based approach for retrieval would bring the embedding of the dialog context close to the embeddings of all the valid responses [19,27,50,52]. However, this has the undesirable effect of making the valid, but diverse, responses gravitate towards each other in the embedding space. Similarly, a generic response is a valid response for several dialog contexts. Again, an embedding-based approach would bring all the context embeddings close to the embedding of the generic response, even though the contexts are unrelated to each other.
Thus, typical embedding-based approaches for retrieval fail to capture the complex, many-to-many relationships that exist in conversations. More complex matching networks such as Sequential Matching Networks [48] and BERT [8] based cross-encoders jointly feed the context-response pairs through multiple layers of neural networks for generating the similarity score. While these approaches have proven to be effective for response retrieval, they are very expensive in terms of inference time. Specifically, if M is the total number of dialog contexts and N is the total number of responses available for retrieval during inference, these methods have a time complexity of O(M·N). Hence, they can't be used in a real-world setting for retrieving from thousands of responses.
Contributions: An effective response retrieval system must be able to capture the complex relationship that exists between a context and a response, while simultaneously being fast enough to be deployed in the real world. In this paper we present a scalable and efficient dialog-retrieval system that maps the contexts as well as the responses to probability distributions over the embedding space (instead of points in the embedding space). To capture the complex many-to-many relationships between the context and response, we use multimodal distributions such as Gaussian mixtures to model each context and response. The resultant model is referred to as 'Mix-and-Match'. Intuitively, if a response is a valid response for a given dialog context, we want the corresponding probability distribution to be "close" to the context distribution. We formalize this notion of closeness among distributions by using Kullback-Leibler (KL) divergence. Specifically, we minimize the Kullback-Leibler divergence between the context distribution and the distribution of the ground-truth response while maximizing the divergence from the distributions of other negatively-sampled responses. We derive approximate but closed-form expressions for the KL divergence when the underlying distributions are Gaussian mixtures. This approximation significantly alleviates the computation cost of KL-divergence, thereby making it suitable for use in real-world settings. In addition, we state how our model reduces to some existing multi-embedding representations [21] under certain assumptions about the nature of Gaussian mixtures. We demonstrate our work on two publicly available dialog datasets - Ubuntu Dialog Corpus (v2) [25] and the Twitter Customer Support dataset 1 - as well as on an internal real-world technical support dataset. Using automated as well as human studies, we demonstrate that Mix-and-Match outperforms recent embedding-based retrieval methods.
RELATED WORK
Our work is broadly related to two current areas of research - Response Retrieval (Section 2.1) and Probabilistic Embeddings (Section 2.2).
Response Retrieval Systems
Retrieval-based systems for dialog models have been applied in a variety of settings. Existing work has studied the problem of grounding responses in external knowledge such as documents [22,29,35], structured knowledge [30,37], with varying degrees of knowledge-level supervision [36,37]. In such cases, a knowledge instance is first fetched and then a response is generated. In contrast to knowledge-grounded responses, in response retrieval settings, the dialog context is used to directly fetch responses from a universe of responses without relying on external knowledge. Depending on how the context and responses are encoded for retrieval, approaches can be classified into methods that use: (i) Independent encodings, (ii) Joint encodings.
Independent Encodings: One of the earliest methods used for dialog retrieval uses TF-IDF [39] scores to represent context and responses. A common architecture employed by neural methods for dialog retrieval is a dual encoder. Here, the context and responses are encoded using a shared architecture but in different parameter spaces. Early versions of such methods employed LSTMs [25] but more recently, pre-trained models have been used [19,24,26,38]. Models such as DPR [19], S-BERT [38], MEBERT [28] encode contexts and responses using dual encoders based on the BERT [11] pre-trained model, and learn a scoring function using negative samples. Work that focuses on improving re-ranking by selecting better negative samples has also been done [24,50]. Models such as Poly-Encoder [16], MEBERT [28], ColBERT [21] use multiple representations for dialog contexts instead of using a single representation. While Poly-Encoder generates multiple encodings for the dialog context using a special attention layer, MEBERT [28] directly creates multiple embedding representations using specialized layers. However, instead of using multiple encoders, ColBERT uses a BERT based dual-encoder architecture to encode contexts and responses, 2 but it does not use a single embedding for scoring. Instead, it uses a scoring function that directly operates on the contextual token representations of the dialog context and responses. In particular, it uses the maximum similarity between any contextual representation pair to compute the overall similarity. The advantage of these approaches is that they make the similarity function more expressive while retaining the scalability offered by traditional dual-encoder architectures.
Joint Encoding: In contrast to methods that independently encode context and response pairs, methods such as Sequential Matching Networks [49] and cross-encoders using BERT [8,31] jointly encode context and dialog responses. However, such models are slow during inference because all candidate responses need to be jointly encoded with the dialog context for scoring at runtime. This is in contrast to dual-encoder architectures where response embeddings can be computed offline and cached for efficient retrieval. Models such as ConvRT [46] and TwinBERT [26] use distillation from a cross-encoder model to help train a better dual-encoder model.
Probabilistic Embeddings
Probabilistic embeddings have been applied in tasks for building better word representations [3,34], entity comparison [10], facial recognition [7], pose estimation [44], generating multimodal embeddings [2,9], etc. The motivation in some of these tasks is similar to ours - for instance, Qian et al. [34] use Gaussian embeddings to represent words to better capture meaning and ambiguity. However, to the best of our knowledge, the problem of applying probabilistic embeddings in dialog modeling tasks hasn't been explored. In this work, we represent dialog contexts (and responses) as mixtures of Gaussians and present approximate closed-form expressions for efficiently computing KL-divergence based distance measures, thereby making them suitable for use in real-world settings.
MIX-AND-MATCH
To capture the complex many-to-many relationships between the dialog context and a response, we use Gaussian mixtures to model each context and response. We consider a dialog to be a sequence of utterances (u_1, . . . , u_n). At any time-step t, the set of utterances prior to that time-step is referred to as the context. The utterance that immediately follows the context 3 is referred to as the response. Instead of modeling the context and response as point embeddings, we use probability distributions induced by the context and the response on the embedding space, denoted as P_c(z) and P_r(z) respectively, where z is any point in the embedding space R^d.
3 We use the words context and dialog context interchangeably throughout the paper.
Overview
An overview of the model is shown in Figure 2. The context and response are first encoded using a pre-trained BERT model. The model consists of a Gaussian Mixture Parameter Generator, G(x, K), which takes as input an encoded text sequence x along with the number of Gaussian mixture components K, and then returns the mean μ_k and variance σ_k^2 of every Gaussian mixture component k ∈ {1, . . . , K} as its output. The encoded representations of the context and response from BERT are used to generate Gaussian mixture distributions over the embedding space R^d using the parameter generator G. We then compute the KL divergence between the context and response distributions and use contrastive loss to bring the context closer to the ground-truth response as compared to other, negatively-sampled responses.
Text Encoder
The text encoder maps the raw text to a contextualized embedding. Given a text sequence, we split it into tokens using the BERT tokenizer [20]. The BERT encoder [20] takes the tokens as input and outputs the contextualized embedding of each token at the output. These embeddings are denoted as (e_1, . . . , e_T), where T is the number of tokens in the text sequence.
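As a concrete illustration, such contextualized token embeddings can be obtained with the Hugging Face transformers library as sketched below; the specific checkpoint and function names are our own choices for the example, not necessarily those used in this work:

```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

def encode_text(text):
    """Returns one contextualized embedding per token (shape: T x 768)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    return bert(**inputs).last_hidden_state.squeeze(0)
```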
Parameter Generation of Gaussian Mixtures
We use the parameter generator G with the inputs x and K to generate the parameters μ_k(x), σ_k^2(x) for each component of the mixture k ∈ {1, . . . , K}. For simplicity, we assume a restricted form of Gaussian mixture that assigns equal probability to each Gaussian component. Further, we also assume that the Gaussian components are axis-aligned, that is, their covariance matrix is diagonal. Specifically, the probability distribution over the embedding space R^d induced by the input text embeddings is as follows:
P_x(z) = (1/K) Σ_{k=1}^{K} N(z; μ_k(x), σ_k^2(x))    (1)
Given an input sequence of text with token embedding representations $e_1, \ldots, e_T$, we initialize $K$ trainable embeddings $v_1, \ldots, v_K$ with the same dimensions as the $e_i$. These trainable embeddings are used to attend over $e_1, \ldots, e_T$ to obtain attended token representations $a_1, \ldots, a_K$. That is, $a_k = \sum_{i=1}^{T} \alpha_{ki}\, e_i$, where the $\alpha_{ki}$ are normalized attention weights defined as follows:

$$\alpha_{ki} = \frac{\exp(v_k^{\top} e_i)}{\sum_{j=1}^{T} \exp(v_k^{\top} e_j)}, \qquad 1 \le k \le K \qquad (2)$$

Finally, the attended token embeddings are passed through two linear maps in parallel to generate the mean and log-variance of each Gaussian component in the mixture. That is, $\mu_k = W_1(a_k)$ and $\log(\sigma_k^2) = W_2(a_k)$, where $W_1$ and $W_2$ are linear maps.
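A minimal PyTorch sketch of the parameter generator described above is given below; the module and variable names (GMMParameterGenerator, num_components) are our own, and the attention is the simple dot-product form of equation (2).

```python
import torch
import torch.nn as nn

class GMMParameterGenerator(nn.Module):
    """Maps token embeddings (T x H) to K Gaussian means and log-variances."""

    def __init__(self, hidden_size: int, embed_dim: int, num_components: int):
        super().__init__()
        # K trainable query embeddings v_1, ..., v_K.
        self.queries = nn.Parameter(torch.randn(num_components, hidden_size))
        self.mean_head = nn.Linear(hidden_size, embed_dim)    # W_1
        self.logvar_head = nn.Linear(hidden_size, embed_dim)  # W_2

    def forward(self, token_embeddings: torch.Tensor):
        # Attention weights alpha_{ki} over the T tokens, one row per query.
        scores = self.queries @ token_embeddings.t()          # (K, T)
        alpha = torch.softmax(scores, dim=-1)
        attended = alpha @ token_embeddings                    # (K, H)
        mean = self.mean_head(attended)                        # (K, D)
        logvar = self.logvar_head(attended)                    # (K, D)
        return mean, logvar
```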
Context and Response Encodings
Given the dialog context $c$ and response $r$, we generate the Gaussian mixture representations $p_c(z)$ (for the context) and $q_r(z)$ (for the response) using $G$, with $m$ and $n$ components respectively. The Gaussian components of the mixtures are denoted as $p(z; k)$ (for the context) and $q(z; \ell)$ (for the response) and are given by

$$p(z; k) = \mathcal{N}\big(z;\, \mu_k(c),\, \sigma_k^2(c)\big) \qquad (3)$$
$$q(z; \ell) = \mathcal{N}\big(z;\, \mu_\ell(r),\, \sigma_\ell^2(r)\big) \qquad (4)$$

where $\mu_k(c)$ and $\sigma_k^2(c)$ are the mean and variance of the $k$-th Gaussian component for the context, and $\mu_\ell(r)$ and $\sigma_\ell^2(r)$ are the mean and variance of the $\ell$-th Gaussian component of the response. The parameters of the text encoders (BERT and the $G$ module) for context and response are not shared.
Scoring Function
We want the context distribution to be 'close' to the distribution of the ground-truth response while simultaneously being far from the distributions induced by other responses. We use the KL divergence to quantify this degree of closeness. The KL divergence between the distributions $q_r$ and $p_c$ over the embedding space $\mathbb{R}^D$ is given by

$$\mathrm{KL}(q_r \,\|\, p_c) = \int_{z \in \mathbb{R}^D} q_r(z) \log \frac{q_r(z)}{p_c(z)}\, \mathrm{d}z \qquad (5)$$
This integral has a closed-form expression if both $q_r$ and $p_c$ are Gaussian. However, for Gaussian mixtures, this integral needs to be approximated. We derive the following approximation to the KL divergence between two GMMs.

Theorem 3.1. Let $p_c$ and $q_r$ be two Gaussian mixture distributions with $m$ and $n$ Gaussian components as defined in (3) and (4) respectively. The KL divergence between the two GMMs can be approximated by the following quantity:

$$\mathrm{KL}(q_r \,\|\, p_c) \;\approx\; \frac{1}{n} \sum_{\ell=1}^{n} \min_{k \in \{1, \ldots, m\}} \mathrm{KL}\big(q(\cdot; \ell) \,\|\, p(\cdot; k)\big) \;+\; \log(m/n) \qquad (6)$$

where $p(\cdot; k)$ and $q(\cdot; \ell)$ are the $k$-th and $\ell$-th Gaussian components of the context and response distributions as defined in (3) and (4).
A detailed derivation of the above approximation is provided in the Appendix. Note that the theorem above holds even when the individual components of the mixture are not Gaussian.
Intuitively, the approximation for KL divergence works as follows. For each Gaussian component in the response distribution, we find the closest Gaussian component in the context distribution. We compute the KL divergence between these neighboring components and average it over all the Gaussian components in the response.
When the components are Gaussian, the KL divergence between the components can be tractably computed using the following equation:
$$\mathrm{KL}\big(q(\cdot; \ell) \,\|\, p(\cdot; k)\big) = \frac{1}{2} \sum_{d=1}^{D} \left[ \log \frac{\sigma_{k,d}^2(c)}{\sigma_{\ell,d}^2(r)} + \frac{\sigma_{\ell,d}^2(r) + \big(\mu_{\ell,d}(r) - \mu_{k,d}(c)\big)^2}{\sigma_{k,d}^2(c)} - 1 \right] \qquad (7)$$

where $D$ is the dimension of the embedding space and the subscript $d$ indexes its dimensions. Using equations (6) and (7), we get a closed-form approximation to the Kullback-Leibler divergence between the context and response GMMs.
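Since the approximation in equations (6) and (7) only needs the component means and variances, it can be written as a few dense tensor operations. The sketch below is our own rendering of that computation (the function names are ours); it assumes equal component weights and diagonal covariances, as in the model.

```python
import torch

def diag_gauss_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Pairwise KL(q_l || p_k) between diagonal Gaussians, as in equation (7).

    mu_q, logvar_q: (n, D) response components; mu_p, logvar_p: (m, D) context components.
    Returns an (n, m) matrix of KL values.
    """
    var_q = logvar_q.exp().unsqueeze(1)               # (n, 1, D)
    var_p = logvar_p.exp().unsqueeze(0)               # (1, m, D)
    diff2 = (mu_q.unsqueeze(1) - mu_p.unsqueeze(0)) ** 2
    kl = 0.5 * (logvar_p.unsqueeze(0) - logvar_q.unsqueeze(1)
                + (var_q + diff2) / var_p - 1.0)
    return kl.sum(dim=-1)                             # (n, m)

def approx_gmm_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Approximate KL(response GMM || context GMM) as in equation (6)."""
    n, m = mu_q.shape[0], mu_p.shape[0]
    pairwise = diag_gauss_kl(mu_q, logvar_q, mu_p, logvar_p)
    # For each response component take the closest context component, then average.
    return pairwise.min(dim=1).values.mean() + torch.log(torch.tensor(m / n))
```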
Loss Function
We use the $N$-pair contrastive loss [42] for training the distributions induced by the context and response. Intuitively, given a batch $\mathcal{B}$ of context-response pairs, we minimize the KL divergence between the context and the true response while simultaneously maximizing the KL divergence with respect to other randomly selected responses. The loss for a given context-response pair $(c, r)$ can be written as

$$\mathrm{loss}_{(c,r)} = -\log \frac{\exp\big(-\mathrm{KL}(q_r \,\|\, p_c)\big)}{\sum_{\bar{r} \in \mathcal{B}} \exp\big(-\mathrm{KL}(q_{\bar{r}} \,\|\, p_c)\big)} \qquad (8)$$
We average this loss across all the context-response pairs in the batch and minimize it during training. The BERT encoders, the randomly initialized embeddings as well as the linear layers for computing the means and variances, are trained in an end-to-end manner.
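Given a batch of context and response GMM parameters, the in-batch contrastive objective of equation (8) can be sketched as below. This is our own code: it reuses the approx_gmm_kl helper from the previous sketch and treats all other responses in the batch as negatives.

```python
import torch

def batch_contrastive_loss(context_params, response_params):
    """context_params / response_params: lists of (mu, logvar) tuples, one per example.

    Builds a B x B matrix of -KL(q_{r_j} || p_{c_i}) scores and applies a
    cross-entropy loss whose target is the matching response (the diagonal).
    """
    rows = []
    for mu_c, logvar_c in context_params:
        row = [-approx_gmm_kl(mu_r, logvar_r, mu_c, logvar_c)
               for mu_r, logvar_r in response_params]
        rows.append(torch.stack(row))
    scores = torch.stack(rows)            # (B, B)
    targets = torch.arange(scores.shape[0])
    return torch.nn.functional.cross_entropy(scores, targets)
```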
Relationship with ColBERT and SBERT
The approximation for KL divergence that we derived in equation (6) shares a subtle relationship with the expression for similarity used in ColBERT [21] and SBERT [38]. Let $\{c_1, \ldots, c_T\}$ and $\{r_1, \ldots, r_S\}$ be the contextualized token embeddings at the last layer of BERT for the context and response respectively. The ColBERT similarity between the context and response is given by

$$s(c, r) = \sum_{i=1}^{S} \max_{1 \le j \le T} \langle r_i, c_j \rangle \qquad (9)$$

where $\langle \cdot, \cdot \rangle$ is the inner product between contextualized token embeddings. Thus, for each response token, ColBERT finds the most similar context token (in embedding space) and computes the inner product between the two. These similarities are then aggregated over all the tokens in the response.
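For comparison, this MaxSim-style late interaction of equation (9) can be sketched in a few lines (our own code):

```python
import torch

def colbert_similarity(context_tokens: torch.Tensor, response_tokens: torch.Tensor) -> torch.Tensor:
    """Late interaction: for every response token take the best-matching context
    token and sum the matches (context_tokens: T x H, response_tokens: S x H)."""
    sims = response_tokens @ context_tokens.t()   # (S, T) inner products
    return sims.max(dim=1).values.sum()
```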
Instead, the KL divergence approximation derived in equation (6) finds the closest Gaussian component of the context GMM for each Gaussian component in the response GMM. Next, the KL divergence is computed between these neighboring components and averaged over all the Gaussian components in the response GMM.
The KL divergence approximation derived in equation (6) reduces to the negative of ColBERT similarity (up to a scalar coefficient) when the following restrictions are imposed on the context and response distributions:
• The Gaussian components in the context and response GMMs have identity covariance. This, however, makes the model less expressive; instead, our model uses a trainable diagonal covariance matrix.
• The number of Gaussian components in the context and response equals the number of tokens in the context and response respectively.
• The means of the Gaussian components have unit norm.
Further, if we use single Gaussian mixture components, under similar assumptions as above, the model reduces to SBERT.
Inference
During inference, we are provided a context and a collection of responses to select from. We map the context as well as each response in the list to its corresponding probability distribution over the embedding space. Next, we compute the KL divergence between the distribution induced by the context and that of every response in the list. Using the expression derived in (6), this can be computed efficiently and involves only standard matrix operations. We select the top-$k$ responses that have the least KL divergence, where $k$ is specified during evaluation.
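Concretely, ranking a pool of cached response GMMs against a new context only involves evaluating the closed-form approximation and sorting. A sketch of that step (our own helper, reusing approx_gmm_kl from the earlier sketch):

```python
import torch

def rank_responses(context_param, cached_response_params, top_k=5):
    """Return indices of the top_k cached responses with the lowest approximate KL."""
    mu_c, logvar_c = context_param
    kls = torch.stack([
        approx_gmm_kl(mu_r, logvar_r, mu_c, logvar_c)
        for mu_r, logvar_r in cached_response_params
    ])
    return torch.topk(-kls, k=top_k).indices.tolist()
```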
EXPERIMENTS
We answer the following questions through our experiments: (1) How does our model compare with recent dual-encoder based retrieval systems for the task of response retrieval? (2) Are the responses retrieved by our model more relevant and diverse? (3) Do human users of our system notice a difference in the quality of responses as compared to the recent ColBERT system?
Datasets
We conduct our experiments on two publicly available datasets -- the Ubuntu Dialogue Corpus [25] (v2.0) 5 and the Twitter Customer Support Dataset 6 -- and one internal dataset. The Ubuntu Dialogue Corpus v2.0 contains ∼500K context-response pairs in the training set and ∼20K context-response pairs in each of the validation and test sets. The conversations deal with technical support for issues faced by Ubuntu users. The Twitter Customer Support Dataset contains ∼1 million context-response pairs in the training data and ∼120K context-response pairs in the validation and test sets.

5 https://github.com/rkadlec/ubuntu-ranking-dataset-creator
6 https://www.kaggle.com/thoughtvector/customer-support-on-twitter
The conversations deal with customer support provided by several companies on Twitter.
We also conduct our experiments on an internal real-world technical support dataset with ∼127K conversations. We will refer to this dataset as the 'Tech Support dataset' in the rest of the paper. The Tech Support dataset contains conversations pertaining to an employee seeking assistance from an agent (technical support) to resolve problems such as password reset, software installation/licensing, and wireless access. In contrast to the Ubuntu dataset, which was constructed from user forums, this dataset has two clearly distinct users -- employee and agent. In all our experiments we model the agent response turns only.
For each conversation in the Tech Support dataset, we sample context-response pairs. Note that multiple context-response pairs can be generated from a single conversation. For each conversation, we sample 25% of the possible context-response pairs. We create validation pairs by selecting 5000 conversations randomly and sampling their context-response pairs. Similarly, we create test pairs from a different subset of 5000 conversations. The remaining conversations are used to create training context-response pairs.
Baselines
We compare our proposed model against two scalable baselines -- SBERT [38] and ColBERT [21], a recent state-of-the-art retrieval model. Similar to Mix-and-Match, both baselines use independent encoders (dual encoders) to encode the contexts and responses. Hence, these baselines can be used for large-scale retrieval at an acceptable cost.

SBERT. SBERT [38] uses two BERT encoders for embedding the inputs (context and response). The contextualized embeddings at the last layer are pooled to generate fixed-size embeddings for the context and response. Since the context and response come from two different domains, we force the two BERT encoders to not share parameters. We use the inner product between the context and response embeddings as the similarity measure and train the two encoders via a contrastive loss.

ColBERT. Just like SBERT, ColBERT [21] uses two BERT encoders to encode the inputs. However, instead of pooling the contextualized embeddings at the last layer, a late interaction is computed between all the contextualized token embeddings of the context and response. Unlike the original implementation of ColBERT, we do not enforce the context and response encoders to share parameters; this is essential for achieving reasonable performance on dialogs. The model is trained via a contrastive loss.
Model and training details
We use the pretrained 'bert-base' model provided by Hugging Face 7 . The dimension of the embedding space is fixed to 128 for all the models. The number of Gaussian components in the context and response distributions is selected by cross-validation from the set {1, 2, 4, 8, 16, 32, all}. Here, the 'all' setting refers to the case where the context/response distribution has as many Gaussian components as the number of tokens in the context/response. We use the 'AdamW' optimizer provided by Hugging Face (Adam with a fixed weight decay) with a learning rate of 1.5e-5 for all our experiments. A fixed batch size of 16 context-response pairs is used. To prevent overfitting, we use early stopping, with the loss function defined in Section 3.6 on the validation set as the stopping criterion.
Response Retrieval
In this setting, each context is paired with 5000 randomly selected responses along with the ground-truth response for the given context. The list of 5000 responses is randomly selected from the test data for each instance; hence, the response universe associated with each dialog context may be different. The task then is to retrieve the ground-truth response given the context. For efficient computation, the full universe of responses is encoded once and stored. Note that this is only possible for dual-encoder architectures (such as Mix-and-Match, SBERT and ColBERT); the major performance bottleneck in cross-encoder approaches arises from this step, where the response encodings depend on the context and hence need to be recomputed for every new dialog context.
For Mix-and-Match, the response encoder outputs the means and variances of the GMM induced by the response in the embedding space. We use a batch-size of 50 to encode the responses and cache the generated parameters (mean and variance) of the response-GMMs.
Similarly, the context is encoded by the context encoder to output the means and variances of the components of the context GMM. We compute the KL divergence between the context distribution and the distribution of each response in the associated list of 5000 responses using the expressions derived in (6) and (7). The values are sorted in ascending order and the top-$k$ responses are selected for evaluation.
A similar setting is used for SBERT and ColBERT with the exception that the embeddings are stored instead of means and variances. Moreover, we sort the responses based on SBERT and ColBERT similarity in descending order.
Results.
We use MRR and Recall@k for evaluating the various models. For evaluating MRR, we sort the set of 5000 responses associated with each context based on KL divergence in ascending order. Next, we compute the rank of the ground-truth response in the sorted list. The MRR is then obtained as the mean of the reciprocal rank over all contexts. For Recall@k, we pick the top-$k$ responses with the least KL divergence. The percentage of contexts for which the ground-truth response is present in the top-$k$ responses is referred to as Recall@k. The results are shown in Table 1.
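Both metrics can be computed directly from the rank of the ground-truth response in each sorted candidate list. A small sketch of that computation (our own code, assuming ranks holds the 1-based rank of the gold response for every test context):

```python
def mrr(ranks):
    """Mean reciprocal rank from 1-based ranks of the gold responses."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def recall_at_k(ranks, k):
    """Fraction of contexts whose gold response appears in the top-k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 3, 12, 2, 120]          # illustrative ranks only
print(mrr(ranks), recall_at_k(ranks, 10))
```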
As can be observed, SBERT, which uses a single embedding to represent the entire context as well as the response, achieves the lowest recall. By using all the token embeddings to represent the context and response, ColBERT achieves better performance than SBERT. Finally, by using Gaussian mixture probability distributions to represent the context and response, Mix-and-Match achieves a substantial improvement in Recall@k and MRR on all the datasets as compared to SBERT and ColBERT. Thus, the richer the representation of the context and response, the better the recall. Note that the relative improvement is smaller on Tech Support, as there is less diversity among the responses in its training data: the agents are trained to handle calls in a specific way, which reduces the diversity.
Response Recommendation
The response retrieval setting described in the previous section is unrealistic since it assumes that the ground truth response is also present in a set of 5000 responses. In reality, when a response retrieval model such as [13] is deployed for response recommendation, it must retrieve from a large set of all the responses present in the training data (often running into hundreds of thousands of responses).
To deal with the large set of responses present in the training data, we encode them offline using the response encoder of Mix-and-Match. As in the previous section, we use a batch size of 50 for encoding the responses. After the means and variances of all the Gaussian components of the response GMMs have been generated, we save them to a file along with the corresponding responses. To ensure faster retrieval, we use Faiss [18] for indexing the means of the Gaussian components of the response GMMs. Faiss is a library for fast vector-similarity computation and has been used for vector-based search over huge sets. We use the IVFPQ index of Faiss (Inverted File with Product Quantization), which discretizes the embedding space into a finite number of cells. This allows for faster search computations.
We flatten the tensor of means of Gaussian components of all response GMMs to a matrix of mean vectors. The matrix of mean vectors is added to the IVFPQ index. A pointer is maintained from the mean of each Gaussian component to the corresponding response as well as the means and variances of its Gaussian components.
When a new context arrives, we compute the means and variances of its Gaussian components. For each Gaussian component, we retrieve the top-10 responses by using the mean of the Gaussian component as the search query. After retrieving the top-10 responses for each Gaussian component, we load the corresponding means and variances. Finally, we compute the KL divergence between the context GMM and the GMMs of all the retrieved responses. The values are sorted in ascending order and the top-$k$ responses are selected for evaluation.
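A sketch of this indexing and retrieval flow with Faiss is shown below; the index parameters (number of cells, PQ sub-quantizers), the random stand-in data, and the assumption of 8 components per response are illustrative choices of ours, not values from the paper.

```python
import faiss
import numpy as np

d = 128                                                           # embedding dimension
component_means = np.random.rand(100000, d).astype("float32")     # stand-in for cached response component means
owner = np.arange(100000) // 8                                    # maps each component mean back to a response id (assumed 8 per response)

quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFPQ(quantizer, d, 1024, 16, 8)               # 1024 cells, 16 sub-quantizers, 8 bits each
index.train(component_means)
index.add(component_means)

def candidate_responses(context_means: np.ndarray, per_component: int = 10):
    """Retrieve candidate response ids for every Gaussian component of the context."""
    _, neighbor_ids = index.search(context_means.astype("float32"), per_component)
    return set(owner[i] for i in neighbor_ids.ravel() if i != -1)

# The candidates returned here are then re-ranked with the full KL approximation.
```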
BLEU.
Since the ground-truth response may not be present verbatim in the set, metrics such as recall and MRR cannot be computed in this setting. We therefore use the BLEU metric [32] for evaluating the quality of the responses. The BLEU metric measures the count of n-grams that are common between the ground-truth and predicted responses. The results are shown in Table 2. As can be observed from the table, the BLEU scores are quite low for the Ubuntu dataset, suggesting that most retrieved responses have very little overlap with the ground-truth response. As in the previous section, SBERT is outperformed by ColBERT in terms of BLEU-2 and BLEU-4. Finally, Mix-and-Match outperforms both models on all three datasets. This suggests that the responses retrieved by Mix-and-Match are relevant to the dialog context.
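For reference, corpus-level BLEU-2/BLEU-4 of the kind reported here can be computed with NLTK; a small sketch with toy strings of our own:

```python
from nltk.translate.bleu_score import corpus_bleu

references = [[["please", "share", "your", "order", "number"]]]   # one list of references per example
hypotheses = [["please", "dm", "us", "your", "order", "number"]]

bleu2 = corpus_bleu(references, hypotheses, weights=(0.5, 0.5))
bleu4 = corpus_bleu(references, hypotheses, weights=(0.25, 0.25, 0.25, 0.25))
print(round(100 * bleu2, 2), round(100 * bleu4, 2))
```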
Diversity.
Table 1: Comparison of Mix-and-Match against baselines on retrieval tasks. Given a context, the task involves retrieving from a set of 5000 responses that also contains the ground-truth response.

The primary strength of the Mix-and-Match system is its capability to associate multiple diverse responses with the same context. To capture the diversity among the top-$k$ responses retrieved for a given context, we measure the distance between every pair of responses and average it across all pairs. Thus, if $\mathcal{R}$ is the set of retrieved responses for a given context, the BERT distance among the responses in $\mathcal{R}$ is given by

$$\mathrm{BERTDistance}(\mathcal{R}) = \frac{1}{|\mathcal{R}|^2} \sum_{r \in \mathcal{R}} \sum_{r' \in \mathcal{R}} \big\| b(r) - b(r') \big\|_2 \qquad (10)$$

where $b(r)$ is the pooled BERT embedding of $r$.
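Equation (10) is straightforward to compute from pooled embeddings of the retrieved set; a sketch (our own code, where pooled is a k x H matrix of pooled BERT embeddings for the k retrieved responses):

```python
import torch

def bert_distance(pooled: torch.Tensor) -> float:
    """Average pairwise L2 distance among pooled embeddings of retrieved responses."""
    pairwise = torch.cdist(pooled, pooled, p=2)     # (k, k) distance matrix
    return (pairwise.sum() / (pooled.shape[0] ** 2)).item()
```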
The results are shown in Table 2. As can be observed from the table, SBERT has the least diversity among the retrieved responses. This is expected since all the retrieved responses must be close to the context embedding and hence, close to each other. ColBERT fares better in terms of diversity since it uses multiple embeddings to represent contexts and responses. Finally, Mix-and-Match that uses GMMs to represent contexts and responses achieves the best diversity. This suggests that having multiple or probabilistic embeddings helps in improving the diversity among the retrieved responses.
4.5.3 Scalability. Next, we evaluate the time taken by the Mix-and-Match model to retrieve from the FAISS index as compared to the baselines. The similarity/KL-divergence computations as well as the vector-similarity searches over the FAISS index are performed on a single A100 GPU. Unsurprisingly, SBERT achieves the lowest latency of 8.9 ms for retrieval per dialog context. ColBERT achieves a latency of 89.7 ms. The latency of Mix-and-Match ranges from 36.7 ms to 477.8 ms depending upon the number of Gaussian components in the mixture. Note that, even in the worst case, the latency is less than 0.5 s, making the model suitable for practical use in the real world. SBERT, ColBERT and Mix-and-Match use independent encoders to encode the responses; hence, response encoding can be done offline. During inference, the context is encoded once and its similarity/KL divergence with the pre-encoded responses is computed. In contrast, for models that use joint encoding [11,49], the context must be jointly encoded with every response during inference. Thus, the time taken by joint-encoding approaches is proportional to the number of responses in the retrieval set, making these approaches unsuitable for practical real-world deployment.
Human evaluation
We also conducted a human study comparing the output responses of ColBERT and Mix-and-Match. We used samples from the Twitter dataset for this study as it does not require domain expertise to assess the relevance of responses. Three users were asked to review 30 Twitter dialog contexts along with the top-4 responses returned by each system 8 in a response recommendation setting. Users were presented the outputs from each system in random order and they were blind to which system returned the responses. We asked our users the following:

(1) Given the dialog context and the response sets from two different systems, label each response with a "yes" or "no" depending on whether the response is a relevant response recommendation for the dialog context. Thus, each response returned by both systems was individually labeled by three human users.
(2) Given the dialog context and the response sets from two different systems, which of the response sets is more diverse? Thus, each context-recommendation set was assessed by three human users.
We count the number of votes received by the top-ranked response for each system and report percentage wins for each system. In addition, we also report a head-to-head comparison in which the two models were assessed for diversity (no ties). Finally, to assess whether diversity is accompanied by relevance in the response set, we define a metric called Diversified-Relevance (DR), which weighs the diversity wins by the number of relevant responses returned by each system. Specifically, $DR_s$, the Diversified-Relevance for a system $s \in \{\text{ColBERT}, \text{Mix-and-Match}\}$, is given by:

$$DR_s = \frac{1}{4N} \sum_{i=1}^{N} \sum_{j=1}^{4} \mathbb{1}\{s \text{ is more diverse on dialog } i\} \cdot \mathbb{1}\{\text{recommendation } j \text{ of } s \text{ for dialog } i \text{ is relevant}\} \qquad (11)$$

where $N$ is the number of dialogs used in the human study, 4 is the number of response recommendations per dialog, the first indicator function takes the value 1 if $s$ was voted as being more diverse in its responses to the $i$-th dialog context, and the second indicator function takes the value 1 if the $j$-th response recommendation by $s$ was voted as being relevant. (As can be seen, $DR_s$ returns a score between 0 and 1.)
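A small sketch of the Diversified-Relevance computation in equation (11) is given below (our own code; diversity_votes[i] is True when the system won the diversity vote for dialog i, and relevance_votes[i][j] is True when its j-th recommendation was judged relevant):

```python
def diversified_relevance(diversity_votes, relevance_votes):
    """Diversified-Relevance (DR) over N dialogs with 4 recommendations each."""
    n_dialogs = len(diversity_votes)
    total = sum(
        int(diversity_votes[i]) * int(relevance_votes[i][j])
        for i in range(n_dialogs)
        for j in range(4)
    )
    return total / (4 * n_dialogs)
```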
4.6.1 Results. As can be seen in Table 3, the top-ranked response returned by Mix-and-Match received a significantly higher number of votes in favour (40%) as compared to ColBERT. In 43% of the cases there was no clear winner. Finally, in 58% of the dialogs, Mix-and-Match was found to present a more diverse set of response recommendations.
In order to assess whether the diversity is accompanied by relevance, we also report the DR scores in Table 4. As can be seen, the DR score for Mix-and-Match is significantly higher than that of ColBERT (0.35 vs 0.25). Overall, the results from our human study indicate that Mix-and-Match returns more diverse and relevant responses.
Qualitative Study
We present two sample outputs in Table 5 and Table 6. Table 5 shows a sample with a single-turn dialog context where the user is complaining about flight boarding positions. The responses retrieved by both ColBERT and Mix-and-Match are presented. As can be seen, Mix-and-Match returns a relevant response at the top-ranked position and another related response at the second position. In contrast, ColBERT retrieved generic or unrelated responses. Table 6 shows a sample with a multi-turn dialog context where the user is complaining about bad cellphone coverage. As before, the responses retrieved by both ColBERT and Mix-and-Match are presented. As can be seen, Mix-and-Match returns a relevant response at the top-ranked position and related responses at the other positions, whereas ColBERT again retrieved generic or unrelated responses.
CONCLUSION
In this paper we presented a dialog response retrieval method called Mix-and-Match, which is designed to accommodate the many-to-many relationships that exist between a dialog context and its responses. We modeled the dialog context and response using mixtures of Gaussians instead of point embeddings. This allows the network to be more expressive, and it does not force the representations of unrelated responses to move closer together, as would have been the case with traditional dual-encoder learning objectives. We derived and presented closed-form expressions for efficiently computing the KL-divergence based distance measures and showed their suitability for real-world settings. We also related our model to existing retrieval methods, SBERT and ColBERT, under specific assumptions about the nature of the GMMs. We demonstrated the effectiveness of our retrieval system on three different datasets -- Ubuntu, Twitter and an internal, real-world Tech Support dataset. Additional experiments for response relevance, including a human study, were performed on the publicly available datasets. We found that not only is our model able to retrieve more relevant responses as compared to recent retrieval systems, it also presents more diverse results. This is especially important for response recommendation systems [13] where human agents may choose from a set of recommendations.
PROOF OF THEOREM 3.1

Proof. The proof follows a similar line of reasoning as the proof provided in [15]. The KL divergence between $q_r$ and $p_c$ can be written as

$$\mathrm{KL}(q_r \,\|\, p_c) = -\mathcal{H}(q_r) + \mathcal{H}(q_r, p_c) \qquad (12)$$

The first term is the negative of the entropy while the second term is the cross-entropy. We approximate the cross-entropy by expanding the GMM in terms of its Gaussian components and applying Jensen's inequality:

$$\begin{aligned}
\mathcal{H}(q_r, p_c) &= -\frac{1}{n} \sum_{\ell=1}^{n} \int q(z; \ell) \log \sum_{k=1}^{m} \phi_\ell(k)\, \frac{p(z; k)}{m\, \phi_\ell(k)}\, \mathrm{d}z \\
&\le -\frac{1}{n} \sum_{\ell=1}^{n} \sum_{k=1}^{m} \phi_\ell(k) \int q(z; \ell) \log p(z; k)\, \mathrm{d}z + \frac{1}{n} \sum_{\ell=1}^{n} \sum_{k=1}^{m} \phi_\ell(k) \log \phi_\ell(k) + \log m \\
&= \frac{1}{n} \sum_{\ell=1}^{n} \Big[ \sum_{k=1}^{m} \phi_\ell(k)\, \mathcal{H}\big(q(\cdot; \ell), p(\cdot; k)\big) - \mathcal{H}(\phi_\ell) \Big] + \log m
\end{aligned}$$

Here, the first equality follows by multiplying and dividing the terms within the log by the variational distribution $\phi_\ell(k)$. The inequality follows by applying Jensen's inequality. The above upper bound holds for every choice of $\phi_\ell$. The bound can be tightened by minimizing it with respect to $\phi_\ell(k)$. We assume $\phi_\ell$ to be a one-hot vector which can only be non-zero for one context component $k$. Every one-hot $\phi_\ell$ has an entropy of 0 and hence the second term in the equation is always 0. For a one-hot $\phi_\ell$, the above expression is minimized when $\phi_\ell$ assigns all its weight to the component of the context GMM with the lowest cross-entropy. Using the optimal one-hot $\phi_\ell$, the bound can be written as

$$\mathcal{H}(q_r, p_c) \le \frac{1}{n} \sum_{\ell=1}^{n} \min_{k \in \{1, \ldots, m\}} \mathcal{H}\big(q(\cdot; \ell), p(\cdot; k)\big) + \log m \qquad (14)$$

The entropy of $q_r$ can be derived as a special case of the above equation by replacing $p_c$ with $q_r$. Thus, the entropy of a GMM can be upper-bounded by

$$\mathcal{H}(q_r) \le \frac{1}{n} \sum_{\ell=1}^{n} \mathcal{H}\big(q(\cdot; \ell)\big) + \log n \qquad (15)$$

Finally, the KL divergence can be approximated by replacing (14) and (15) in (12), which gives

$$\mathrm{KL}(q_r \,\|\, p_c) \approx \frac{1}{n} \sum_{\ell=1}^{n} \min_{k \in \{1, \ldots, m\}} \mathrm{KL}\big(q(\cdot; \ell) \,\|\, p(\cdot; k)\big) + \log(m/n)$$

Note that the resultant quantity is neither an upper nor a lower bound, but it is still a useful approximation. □

Figure 2: An overview of our model -- Mix-and-Match.

Table 5: Sample of a single-turn dialog context -- Mix-and-Match returns a relevant response at the top-ranked position and another related response at the second position. In contrast, ColBERT retrieved generic or unrelated responses.

Dialog Context
User: @southwestair i'm bummed i paid 15 for early bird and still only got a b boarding position don t think i ll do that anymore notworthit

Responses Retrieved
ColBERT:
(i) if you check your itinerary email, it'll tell you what type of fare you purchased. wanna get away fares are nonrefundable, but anytime and business select fares can be refunded. thanks for reaching out
(ii) sorry if we've missed a chance to be of better service to you. since we're separate entities, we're unable to alter or change the incorrect alamo car reservation
(iii) oh no! so sorry to hear that. please speak with our agents in the airport about reaccommodations

Mix-and-Match:
(i) please know that earlybird offers the convenience of automatic check-in, but does not guarantee a specific boarding position. still, we apologize if you were disappointed in the position you received & hope to make it up to you while onboard
(ii) by law we're required to let folks with specific boarding needs or disabilities on the aircraft first, though we apologize for your frustration this morning
(iii) sorry for any confusion, our agents know the proper procedures and questions to ask to determine the best boarding option

Ground Truth Response: apologies for any frustration, as the # of earlybird customers vary on each flt, you're guaranteed automatic check in, not a specific boarding position.

Table 6: Sample of a multi-turn dialog context -- Mix-and-Match returns a relevant response at the top-ranked position and related responses at other positions. In contrast, ColBERT retrieved generic or unrelated responses.

Dialog Context
User: the worst mobile serive in 2015 2017 cellphone badservice miami florida
Agent: hey send us a dm and we'll ensure a great experience channeyt
User: tmobilehelp poor service low signal slow service it s miami

Responses Retrieved
ColBERT:
(i) Our apologies, we are currently experiencing a system challenge which we are working to resolve. kindly bear with us.
(ii) our sincere apologies for any inconveniences caused, we are having a technical issue, resolution is underway
(iii) it is not our intention to make you upset. please feel free to reach out to us if you have already called back and still need further assistance.

Mix-and-Match:
(i) how long has this been happening? what type of phone do you have? please send us a dm so we can fix it. thank you
(ii) that's not good at all! please dm us with your zip code and nearest streets intersection to check the coverage
(iii) does this happen in specific locations? when did you begin to experience these issues with your connection? are you having issues making calls and sending text as well?

Ground Truth Response: let's flip thing around! meet in the dms https://t.co/sbivwmm6x2
Table 2: Comparison of Mix-and-Match against baselines for the response recommendation task. Given a context, the task involves retrieving from the set of all responses in the training data. To handle the large set of responses, we use a FAISS [18] index for pre-retrieval. The computation of diversity is discussed in detail in Section 4.5.

Dataset        Model           BLEU-2   BLEU-4   Diversity (BERTDist.)
Ubuntu (v2)    SBERT            5.86     0.49     2.33
Ubuntu (v2)    ColBERT          6.66     0.58     3.19
Ubuntu (v2)    Mix-and-Match    7.16     0.64     3.60
Twitter        SBERT           19.84    10.3      1.76
Twitter        ColBERT         20.67    11.09     2.17
Twitter        Mix-and-Match   22.83    12.62     2.60
Tech Support   SBERT           12.09     5.82     1.49
Tech Support   ColBERT         16.57     8.58     2.55
Tech Support   Mix-and-Match   18.82    10.57     3.02
Table 3: The top response returned by the Mix-and-Match model is found to be relevant more often (40% vs 17%) than ColBERT's. In addition, the set of responses returned by Mix-and-Match is also more diverse (58% vs 42% for ColBERT).

                         ColBERT Win   Mix-and-Match Win   Tie
Response Relevance @1    17%           40%                 43%
Diversity                42%           58%                 NA

Table 4: The Diversified-Relevance scores for ColBERT and Mix-and-Match in our human study.

                             ColBERT   Mix-and-Match
Diversified-Relevance (DR)   0.25      0.35
Notes
7 https://huggingface.co/bert-base-uncased
8 A total of 360 independent context-response assessments.
A maturity assessment framework for conversational AI development platforms. Johan Aronsson, Philip Lu, Daniel Strüber, Thorsten Berger, Proceedings of the 36th Annual ACM Symposium on Applied Computing. the 36th Annual ACM Symposium on Applied ComputingJohan Aronsson, Philip Lu, Daniel Strüber, and Thorsten Berger. 2021. A maturity assessment framework for conversational AI development platforms. In Proceed- ings of the 36th Annual ACM Symposium on Applied Computing. 1736-1745.
Multimodal Word Distributions. Ben Athiwaratkun, Andrew Wilson, 10.18653/v1/P17-1151Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsVancouver, CanadaAssociation for Computational Linguistics1Long Papers)Ben Athiwaratkun and Andrew Wilson. 2017. Multimodal Word Distributions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, 1645-1656. https://doi.org/10.18653/v1/P17-1151
Probabilistic FastText for Multi-Sense Word Embeddings. Ben Athiwaratkun, Andrew Wilson, Anima Anandkumar, 10.18653/v1/P18-1001Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, AustraliaAssociation for Computational Linguistics1Long Papers)Ben Athiwaratkun, Andrew Wilson, and Anima Anandkumar. 2018. Probabilistic FastText for Multi-Sense Word Embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Melbourne, Australia, 1-11. https: //doi.org/10.18653/v1/P18-1001
A retrieval-based dialogue system utilizing utterance and context embeddings. Alexander Bartl, Gerasimos Spanakis, CoRR abs/1710.05780Alexander Bartl and Gerasimos Spanakis. 2017. A retrieval-based dialogue system utilizing utterance and context embeddings. CoRR abs/1710.05780 (2017).
Signature verification using a "siamese" time delay neural network. Jane Bromley, W James, Léon Bentz, Isabelle Bottou, Yann Guyon, Cliff Lecun, Eduard Moore, Roopak Säckinger, Shah, International Journal of Pattern Recognition and Artificial Intelligence. 7Jane Bromley, James W Bentz, Léon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Säckinger, and Roopak Shah. 1993. Signature verification using a "siamese" time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence 7, 04 (1993), 669-688.
Latent Variable Dialogue Models and their Diversity. Kris Cao, Stephen Clark, Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. the 15th Conference of the European Chapter of the Association for Computational Linguistics2Kris Cao and Stephen Clark. 2017. Latent Variable Dialogue Models and their Diversity. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, Vol. 2. 182-187.
Reliable Probabilistic Face Embeddings in the Wild. Kai Chen, Qi Lv, Taihe Yi, Zhengming Yi, arXiv:2102.04075Kai Chen, Qi Lv, Taihe Yi, and Zhengming Yi. 2021. Reliable Probabilistic Face Embeddings in the Wild. CoRR abs/2102.04075 (2021). arXiv:2102.04075 https: //arxiv.org/abs/2102.04075
Co-BERT: A Context-Aware BERT Retrieval Model Incorporating Local and Query-specific Context. Xiaoyang Chen, Kai Hui, Ben He, Xianpei Han, Le Sun, Zheng Ye, arXiv:2104.08523arXiv preprintXiaoyang Chen, Kai Hui, Ben He, Xianpei Han, Le Sun, and Zheng Ye. 2021. Co-BERT: A Context-Aware BERT Retrieval Model Incorporating Local and Query-specific Context. arXiv preprint arXiv:2104.08523 (2021).
Sanghyuk Chun, Seong Joon Oh, Rafael Sampaio de Rezende, Yannis Kalantidis, and Diane Larlus. 2021. Probabilistic Embeddings for Cross-Modal Retrieval. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2021), 8411-8420.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, arxiv:181004805Comment: 13 pagesJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. http://arxiv.org/abs/1810.04805 cite arxiv:1810.04805Comment: 13 pages.
Bootstrapping Dialog Models from Human to Human Conversation Logs. Pankaj Dhoolia, Vineet Kumar, Danish Contractor, Sachindra Joshi, Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event. AAAI PressPankaj Dhoolia, Vineet Kumar, Danish Contractor, and Sachindra Joshi. 2021. Bootstrapping Dialog Models from Human to Human Conversation Logs. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021. AAAI Press, 16024-16025. https://ojs.aaai.org/ index.php/AAAI/article/view/18000
Agent Assist through Conversation Analysis. P Kshitij, Nathaniel Fadnis, Jatin Mills, Haggai Ganhotra, Gaurav Roitman, Doron Pandey, Yosi Cohen, Mass, R Chulaka Shai Erera, Danish Gunasekara, Contractor, Q Vera Siva Sankalp Patel, Sachindra Liao, Luis A Joshi, David Lastras, Konopnicki, 10.18653/v1/2020.emnlp-demos.20Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 -Demos. Qun Liu and David Schlangenthe 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 -DemosOnlineAssociation for Computational LinguisticsKshitij P. Fadnis, Nathaniel Mills, Jatin Ganhotra, Haggai Roitman, Gaurav Pandey, Doron Cohen, Yosi Mass, Shai Erera, R. Chulaka Gunasekara, Danish Contractor, Siva Sankalp Patel, Q. Vera Liao, Sachindra Joshi, Luis A. Lastras, and David Konopnicki. 2020. Agent Assist through Conversation Analysis. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 -Demos, Online, November 16-20, 2020, Qun Liu and David Schlangen (Eds.). Association for Computational Linguistics, 151-157. https://doi.org/10.18653/v1/2020.emnlp-demos.20
Di-alogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder. Xiaodong Gu, Kyunghyun Cho, Jung-Woo Ha, Sunghun Kim, International Conference on Learning Representations. Xiaodong Gu, Kyunghyun Cho, Jung-Woo Ha, and Sunghun Kim. 2018. Di- alogWAE: Multimodal Response Generation with Conditional Wasserstein Auto- Encoder. In International Conference on Learning Representations.
Approximating the Kullback Leibler divergence between Gaussian mixture models. R John, Hershey, A Peder, Olsen, 2007 IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP'07. IEEE4317John R Hershey and Peder A Olsen. 2007. Approximating the Kullback Leibler di- vergence between Gaussian mixture models. In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP'07, Vol. 4. IEEE, IV-317.
Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring. Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, Jason Weston, arXiv: Computation and LanguageSamuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring. arXiv: Computation and Language (2019).
An Information Retrieval Approach to Short Text Conversation. Zongcheng Ji, Zhengdong Lu, Hang Li, CoRR abs/1408.6988Zongcheng Ji, Zhengdong Lu, and Hang Li. 2014. An Information Retrieval Approach to Short Text Conversation. CoRR abs/1408.6988 (2014).
Billion-scale similarity search with gpus. Jeff Johnson, Matthijs Douze, Hervé Jégou, IEEE Transactions on Big Data. Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with gpus. IEEE Transactions on Big Data (2019).
Dense Passage Retrieval for Open-Domain Question Answering. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-Tau Yih, 10.18653/v1/2020.emnlp-main.550Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Association for Computational LinguisticsVladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense Passage Retrieval for Open- Domain Question Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, Online, 6769-6781. https://doi.org/10.18653/v1/2020.emnlp-main.550
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL-HLT. 4171-4186.
Colbert: Efficient and effective passage search via contextualized late interaction over bert. Omar Khattab, Matei Zaharia, Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval. the 43rd International ACM SIGIR conference on research and development in Information RetrievalOmar Khattab and Matei Zaharia. 2020. Colbert: Efficient and effective passage search via contextualized late interaction over bert. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval. 39-48.
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Tim Wen Tau Yih, Sebastian Rocktäschel, Douwe Riedel, Kiela, ArXiv abs/2005.11401Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. ArXiv abs/2005.11401 (2020).
A Diversity-Promoting Objective Function for Neural Conversation Models. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. San Diego California, USANAACL HLT 2016Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A Diversity-Promoting Objective Function for Neural Conversation Models. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016. 110-119. http://aclweb.org/anthology/ N/N16/N16-1014.pdf
Quadruplet-BERT: An Efficient Model For Embedding-Based Large-Scale Retrieval. Peiyang Liu, Sen Wang, Xi Wang, Wei Ye, Shikun Zhang, 10.18653/v1/2021.naacl-main.292Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Online. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, OnlinePeiyang Liu, Sen Wang, Xi Wang, Wei Ye, and Shikun Zhang. 2021. Quadruplet- BERT: An Efficient Model For Embedding-Based Large-Scale Retrieval. In Pro- ceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Compu- tational Linguistics, Online, 3734-3739. https://doi.org/10.18653/v1/2021.naacl- main.292
Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau, arXiv:1506.08909The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprintRyan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909 (2015).
Twinbert: Distilling knowledge to twin-structured bert models for efficient retrieval. Wenhao Lu, Jian Jiao, Ruofei Zhang, arXiv:2002.06275arXiv preprintWenhao Lu, Jian Jiao, and Ruofei Zhang. 2020. Twinbert: Distilling knowledge to twin-structured bert models for efficient retrieval. arXiv preprint arXiv:2002.06275 (2020).
Sparse, Dense, and Attentional Representations for Text Retrieval. Yi Luan, Jacob Eisenstein, Kristina Toutanova, Michael Collins, 10.1162/tacl_a_00369Transactions of the Association for Computational Linguistics. 9Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, Dense, and Attentional Representations for Text Retrieval. Transactions of the Association for Computational Linguistics 9 (2021), 329-345. https://doi.org/10. 1162/tacl_a_00369
Sparse, Dense, and Attentional Representations for Text Retrieval. Yi Luan, Jacob Eisenstein, Kristina Toutanova, Michael Collins, 10.1162/tacl_a_00369Transactions of the Association for Computational Linguistics. 9Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, Dense, and Attentional Representations for Text Retrieval. Transactions of the Association for Computational Linguistics 9 (2021), 329-345. https://doi.org/10. 1162/tacl_a_00369
Gaurav Pandey, and Danish Contractor. 2021. Variational Learning for Unsupervised Knowledge Grounded Dialogs. Mayank Mishra, Dhiraj Madan, arXiv:2112.00653Mayank Mishra, Dhiraj Madan, Gaurav Pandey, and Danish Contractor. 2021. Variational Learning for Unsupervised Knowledge Grounded Dialogs. CoRR abs/2112.00653 (2021). arXiv:2112.00653 https://arxiv.org/abs/2112.00653
Simulated Chats for Building Dialog Systems: Learning to Generate Conversations from Instructions. Biswesh Mohapatra, Gaurav Pandey, Danish Contractor, Sachindra Joshi, 10.18653/v1/2021.findings-emnlp.103Findings of the Association for Computational Linguistics: EMNLP 2021. Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau YihDominican RepublicAssociation for Computational LinguisticsVirtual Event / Punta CanaBiswesh Mohapatra, Gaurav Pandey, Danish Contractor, and Sachindra Joshi. 2021. Simulated Chats for Building Dialog Systems: Learning to Generate Con- versations from Instructions. In Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16- 20 November, 2021, Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (Eds.). Association for Computational Linguistics, 1190-1203. https://doi.org/10.18653/v1/2021.findings-emnlp.103
Passage Re-ranking with BERT. Rodrigo Nogueira, Kyunghyun Cho, arXiv:1901.04085Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage Re-ranking with BERT. CoRR abs/1901.04085 (2019). arXiv:1901.04085 http://arxiv.org/abs/1901.04085
BLEU: a method for automatic evaluation of machine translation. Kishore Papineni, Salim Roukos, Todd Ward, Wei-Jing Zhu, Proceedings of the 40th annual meeting on association for computational linguistics. the 40th annual meeting on association for computational linguisticsAssociation for Computational LinguisticsKishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics. Association for Computational Linguistics, 311-318.
A Hierarchical Latent Structure for Variational Conversation Modeling. Yookoon Park, Jaemin Cho, Gunhee Kim, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies1Long PapersYookoon Park, Jaemin Cho, and Gunhee Kim. 2018. A Hierarchical Latent Struc- ture for Variational Conversation Modeling. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). 1792-1801.
Conceptualized and Contextualized Gaussian Embedding. Chen Qian, Fuli Feng, Lijie Wen, Tat-Seng Chua, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence35Chen Qian, Fuli Feng, Lijie Wen, and Tat-Seng Chua. 2021. Conceptualized and Contextualized Gaussian Embedding. Proceedings of the AAAI Conference on Artificial Intelligence 35, 15 (May 2021), 13683-13691. https://ojs.aaai.org/index. php/AAAI/article/view/17613
Open-Retrieval Conversational Question Answering. Association for Computing Machinery. Chen Qu, Liu Yang, Cen Chen, Minghui Qiu, W Bruce Croft, Mohit Iyyer, 10.1145/3397271.3401110New York, NY, USAChen Qu, Liu Yang, Cen Chen, Minghui Qiu, W. Bruce Croft, and Mohit Iyyer. 2020. Open-Retrieval Conversational Question Answering. Association for Comput- ing Machinery, New York, NY, USA, 539-548. https://doi.org/10.1145/3397271. 3401110
Unsupervised Learning of KB Queries in Task-Oriented Dialogs. Dinesh Raghu, Nikhil Gupta, Mausam , Trans. Assoc. Comput. Linguistics. 9Dinesh Raghu, Nikhil Gupta, and Mausam. 2021. Unsupervised Learning of KB Queries in Task-Oriented Dialogs. Trans. Assoc. Comput. Linguistics 9 (2021), 374-390. https://transacl.org/ojs/index.php/tacl/article/view/2515
Multi-Level Memory for Task Oriented Dialogs. Revanth Reddy, Danish Contractor, Dinesh Raghu, Sachindra Joshi, 10.18653/v1/n19-1375Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019. Jill Burstein, Christy Doran, and Thamar Soloriothe 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019Minneapolis, MN, USAAssociation for Computational Linguistics1Revanth Reddy, Danish Contractor, Dinesh Raghu, and Sachindra Joshi. 2019. Multi-Level Memory for Task Oriented Dialogs. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), Jill Burstein, Christy Doran, and Thamar Solorio (Eds.). Association for Computational Linguistics, 3744-3754. https://doi.org/10.18653/v1/n19-1375
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. Nils Reimers, Iryna Gurevych, 10.18653/v1/D19-1410Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsNils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China, 3982-3992. https://doi.org/10.18653/v1/D19-1410
S. Robertson. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Foundations and Trends in Information Retrieval 3, 4 (2009), 333-389.
Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models. Iulian Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, Joelle Pineau, AAAI. Iulian Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building End-To-End Dialogue Systems Using Generative Hierar- chical Neural Network Models. In AAAI.
A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, C Aaron, Yoshua Courville, Bengio, AAAI. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017. A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues.. In AAAI. 3295-3301.
Improved deep metric learning with multi-class n-pair loss objective. Kihyuk Sohn, Advances in neural information processing systems. Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective. In Advances in neural information processing systems. 1857-1865.
A Neural Network Approach to Context-Sensitive Generation of Conversational Responses. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, William B Dolan, HLT-NAACL. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and William B. Dolan. 2015. A Neural Network Approach to Context-Sensitive Generation of Conversational Responses. In HLT-NAACL.
Jennifer J. Sun, Jiaping Zhao, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, and Ting Liu. 2020. View-Invariant Probabilistic Embedding for Human Pose. In Computer Vision -- ECCV 2020 -- 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part V (Lecture Notes in Computer Science, Vol. 12350), Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm (Eds.). Springer, 53-70. https://doi.org/10.1007/978-3-030-58558-7_4
A survey on response selection for retrieval-based dialogues. Chongyang Tao, Jiazhan Feng, Rui Yan, Wei Wu, Daxin Jiang, IJCAI. Chongyang Tao, Jiazhan Feng, Rui Yan, Wei Wu, and Daxin Jiang. 2021. A survey on response selection for retrieval-based dialogues. In IJCAI.
Distilling Knowledge for Fast Retrieval-Based Chat-Bots. Amir Vakili Tahami, Kamyar Ghajar, Azadeh Shakery, 10.1145/3397271.3401296Association for Computing MachineryNew York, NY, USAAmir Vakili Tahami, Kamyar Ghajar, and Azadeh Shakery. 2020. Distilling Knowl- edge for Fast Retrieval-Based Chat-Bots. Association for Computing Machinery, New York, NY, USA, 2081-2084. https://doi.org/10.1145/3397271.3401296
A Neural Conversational Model. Oriol Vinyals, V Quoc, Le, CoRR abs/1506.05869Oriol Vinyals and Quoc V. Le. 2015. A Neural Conversational Model. CoRR abs/1506.05869 (2015).
A Sequential Matching Framework for Multi-turn Response Selection in Retrievalbased Chatbots. Yu Wu, Wei Wu, Chen Xing, Can Xu, Zhoujun Li, Ming Zhou, CoRR abs/1710.11344Yu Wu, Wei Wu, Chen Xing, Can Xu, Zhoujun Li, and Ming Zhou. 2017. A Sequential Matching Framework for Multi-turn Response Selection in Retrieval- based Chatbots. CoRR abs/1710.11344 (2017).
Sequential Match Network: A New Architecture for Multi-turn Response Selection in Retrieval-based Chatbots. Yu Wu, Wei Wu, Ming Zhou, Zhoujun Li, ACL. Yu Wu, Wei Wu, Ming Zhou, and Zhoujun Li. 2017. Sequential Match Network: A New Architecture for Multi-turn Response Selection in Retrieval-based Chatbots. In ACL.
Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval. Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N Bennett, Junaid Ahmed, Arnold Overwijk, 9th International Conference on Learning Representations, ICLR 2021, Virtual Event. AustriaLee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. https://openreview.net/forum?id=zeFrfgyZln
Learning to Respond with Deep Neural Networks for Retrieval-Based Human-Computer Conversation System. Rui Yan, Yiping Song, Hua Wu, SIGIR. Rui Yan, Yiping Song, and Hua Wu. 2016. Learning to Respond with Deep Neural Networks for Retrieval-Based Human-Computer Conversation System. In SIGIR.
Few-Shot Conversational Dense Retrieval. Shi Yu, Zhenghao Liu, Chenyan Xiong, Tao Feng, Zhiyuan Liu, 10.1145/3404835.3462856Association for Computing MachineryNew York, NY, USAShi Yu, Zhenghao Liu, Chenyan Xiong, Tao Feng, and Zhiyuan Liu. 2021. Few- Shot Conversational Dense Retrieval. Association for Computing Machinery, New York, NY, USA, 829-838. https://doi.org/10.1145/3404835.3462856
DIALOGPT: Large-Scale Generative Pre-training for Conversational Response Generation. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, William B Dolan, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. the 58th Annual Meeting of the Association for Computational Linguistics: System DemonstrationsYizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and William B Dolan. 2020. DIALOGPT: Large-Scale Generative Pre-training for Conversational Response Generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. 270-278.
| [
"https://github.com/rkadlec/ubuntu-ranking-dataset-creator"
] |
[
"Tagset Design and Inflected Languages",
"Tagset Design and Inflected Languages"
] | [
"David Elworthy \nOxford Science Park\nSharp Laboratories of Europe Ltd\nOX4 4GAOxford\n",
"Edmund Halley Road \nOxford Science Park\nSharp Laboratories of Europe Ltd\nOX4 4GAOxford\n"
] | [
"Oxford Science Park\nSharp Laboratories of Europe Ltd\nOX4 4GAOxford",
"Oxford Science Park\nSharp Laboratories of Europe Ltd\nOX4 4GAOxford"
] | [] | An experiment designed to explore the relationship between tagging accuracy and the nature of the tagset is described, using corpora in English, French and Swedish. In particular, the question of internal versus external criteria for tagset design is considered, with the general conclusion that external (linguistic) criteria should be followed. Some problems associated with tagging unknown words in inflected languages are briefly considered. | null | [
"https://arxiv.org/pdf/cmp-lg/9504002v2.pdf"
] | 9,668,001 | cmp-lg/9504002 | 0f555d03c57cf62260b1b2cf2e30c125d672d1c0 |
Tagset Design and Inflected Languages
David Elworthy
Oxford Science Park
Sharp Laboratories of Europe Ltd
OX4 4GAOxford
Edmund Halley Road
Oxford Science Park
Sharp Laboratories of Europe Ltd
OX4 4GAOxford
Tagset Design and Inflected Languages
arXiv:cmp-lg/9504002v2 4 Apr 1995
An experiment designed to explore the relationship between tagging accuracy and the nature of the tagset is described, using corpora in English, French and Swedish. In particular, the question of internal versus external criteria for tagset design is considered, with the general conclusion that external (linguistic) criteria should be followed. Some problems associated with tagging unknown words in inflected languages are briefly considered.
Tagset Design
Tagging by means of a Hidden Markov Model (HMM) is widely recognised as an effective technique for assigning parts of speech to a corpus in a robust and efficient manner. An attractive feature of the technique is that the algorithm itself is independent of the (natural) language to which it is applied. All of the "knowledge engineering" is localised in the choice of tagset and the method of training. Typically, training makes use of a manually tagged corpus, or an untagged corpus with some initial bootstrapping probabilities. Some attention has been given to how to make such techniques effective; for example Cutting et al. (1992) suggest ways of training trigram taggers, and Merialdo (1994) and Elworthy (1994) consider the amount and quality of the seeding data needed to construct an accurate tagger.
In training a tagger for a given language, a major part of the knowledge engineering required can therefore be localised in the choice of the tagset. The design of an appropriate tagset is subject to both external and internal criteria. The external criterion is that the tagset must be capable of making the linguistic (for example, syntactic or morphological) distinctions required in the output corpora. Tagsets used in the past have included varying amounts of detail. For example, the Penn treebank tagset (Marcus et al., 1993) omits a number of the distinctions which are made in the LOB and Brown tagsets on which it is based (Garside et al., 1987;Francis and Kučera, 1992) in cases where the surface form of the words allows the distinctions to be recovered if they are needed. Thus, the auxiliary verbs be, do and have have the same tags as other verbs in Penn, but are each separated out in the LOB tagset.
A second design criterion on tagsets is the internal one of making the tagging as effective as possible. As an example, one of the most common errors made by taggers with the LOB and Brown tagsets is mistagging a word as a subordinating conjunction (CS) rather than a preposition (IN), or vice versa (Macklovitch, 1992). A higher level of syntactic analysis indicating the phrasal structure would be required to predict which tag is correct, and this information is not available to fixed-context taggers. The Penn treebank therefore uses a single tag for both cases, leaving the resolution, if required, to some other process. Similarly, most tagsets do not distinguish transitive and intransitive verbs, since taggers which use a context of only two or three words will generally not be able to make the right predictions. Distinctions of this sort are usually found only in corpora such as Susanne which are parsed as well as tagged.
The problem of tagset design becomes particularly important for highly inflected languages, such as Greek or Hungarian. If all of the syntactic variations which are realised in the inflectional system were represented in the tagset, there would be a huge number of tags, and it would be practically impossible to implement or train a simple tagger. Note in passing that this may not be as serious a problem as it first appears. If the language is very highly inflected, it may be possible to do all (or a large part) of the work of a tagger with a word-by-word morphological analysis instead. Nevertheless, there are many languages which have enough ambiguity that tagging is useful, but a rich enough tagset that the criteria on which it is designed must be given careful consideration.
In this paper, I report two experiments which address the internal design criterion, by looking at how tagging accuracy varies as the tagset is modified, in English, French and Swedish. Although the choice of language was dictated by the corpora which were available, they represent three different degrees of complexity in their inflectional systems. English has a very limited system, marking little more than plurality on nouns and a restricted range of verb properties. French has a little more complexity, with gender, number and person marked, while Swedish has more detailed marking for gender, number, definiteness and case. As a subsidiary issue, we will also look at how the tagger performs on unknown words, i.e. ones not seen in the training data. The usual approach here is to hypothesise all tags in the tagset for an unknown word, other than ones where all the words that may have the tag can be enumerated in advance (closed class tags). HMM taggers often perform poorly on unknown words.
Alternative tagsets were derived by taking the initial tagset for each corpus (from manual tagging of the corpus) and condensing sets of tags which represent a grammatical distinction such as gender into single tags. The changes were then applied to the training corpus. This allows us to effectively produce a corpus tagged according to a different scheme without having to manually re-tag the corpus. The changes in the tagsets were motivated purely by grammatical considerations, and did not take the errors actually observed into account. In general what we will look at in the results is how the tagging accuracy changes as the size of the tagset changes. This is a deliberately naive approach, and it is adopted with the goal of continuing in the relatively "knowledge-free" tradition of work in HMM tagging. The aim of the experiment is to determine, crudely, whether a bigger tagset is better than a smaller one, or whether external criteria requiring human intervention should be used to choose the best tagset. The results for the three languages turn out to be quite different, and the general conclusion (which is the overall contribution of the paper) will be that the external criterion should be the one to dominate tagset design: there is a limit to how knowledge-free we can be.
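To make the condensing step concrete, the minimal sketch below shows one way the textual substitution over a tagged corpus could be implemented. The tag names and the gender-collapsing mapping are purely illustrative assumptions and are not the actual tags used in the experiments.

```python
# Illustrative sketch of collapsing a grammatical distinction (here, gender)
# into single tags by textual substitution. Tag names are hypothetical.
GENDER_COLLAPSE = {
    "NN-MASC-SG": "NN-SG",
    "NN-NEUT-SG": "NN-SG",
    "NN-UTR-SG": "NN-SG",
}

def collapse_tags(tagged_corpus, mapping):
    """tagged_corpus: list of (word, tag) pairs; unknown tags pass through."""
    return [(word, mapping.get(tag, tag)) for word, tag in tagged_corpus]

corpus = [("huset", "NN-NEUT-SG"), ("bilen", "NN-UTR-SG")]
print(collapse_tags(corpus, GENDER_COLLAPSE))
# -> [('huset', 'NN-SG'), ('bilen', 'NN-SG')]
```

Re-tagging the training corpus in this way makes it possible to simulate a manually designed coarser tagset without re-annotating the text by hand.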
As a preliminary to this work, note that it is hard to reason about the effect of changing the tagset. It can be argued that a smaller tagset should improve tagging accuracy, since it puts less of a burden on the tagger to make fine distinctions. In information-theoretic terms, the number of decisions required is smaller, and hence the tagger needs to contribute less information to make the decisions. A smaller tagset may also mean that more words have only one possible tag and so can be handled trivially. Conversely, more detail in the tagset may help the tagger when the properties of two adjacent words give support to the choice of tag for both of them; that is, the transitions between tags contribute the information the tagger needs. For example, if determiners and nouns are marked for number, then the tagger can effectively model agreement in simple noun phrases, by having a higher probability for a singular determiner followed by a singular noun than it does for a singular determiner followed by a plural noun. Theory on its own does not help much in deciding which point of view should dominate.
The experiments
Design of the experiments
Two experiments were conducted on three corpora: 300k words of Swedish text from the ECI Multilingual CD-ROM, and 100k words each of English and French from a corpus of International Telecommunications Union text 1 . In the first experiment the whole of each corpus was used to train the model, and a small sample from the same text was used as test data. For the second experiment, 95% of the corpus was used in training and the remainder in testing. The importance of the second test is that it includes unknown words, which are difficult to tag. The tagsets were progressively modified, by textually substituting simplified tags for the original ones and re-running the training and test procedures using the modified corpora. The changes to the tagset are listed below. In the results that follow, we will identify tagsets that include a given distinction with an uppercase letter and ones that do not with a lowercase letter; for example G for a tagset that marks gender, and g for one that does not.
Swedish
The changes made were entirely based on inflections.
G Gender: masculine, neuter, common gender ("UTR" in the tagset).
N Number: singular, plural.
D Definiteness: definite, indefinite.
C Case: nominative, genitive.
French
The changes other than V were based on inflections.
G Gender: masculine, feminine.
N Number: singular, plural.
P Person: identified as 1st to 6th in the tagset.
V Verbs: treat avoir and etre as being the same as any other verb.
English
The changes here are more varied than for the other languages, and generally consisted of removing some of the finer subdivisions of the major classes. The grouping of some of these changes is admittedly a little ad hoc, and was intended to give a good distribution of tagset sizes; not all combinations were tried.
C Reduce specific conjunction classes to a common class, and simplify one adjective class.
A Simplify noun and adverb classes.
P Simplify pronoun classes.
N Number: all singular/plural distinctions removed.
V Use the same class for have, do and be as for other verbs.
The sizes of the resulting tagsets and the degree of ambiguity in the corpora which resulted appear below. Accuracy figures quoted here are for ambiguous and unknown words only, and therefore factor out effects due to the varying degree of ambiguity as the tagset changes. In fact, this is a rather approximate way of accounting for ambiguity, since it does not take the length of ambiguous sequences into account, and the accuracy is likely to deteriorate more on long sequences of ambiguous words than on short ones.
The tests were run using Good-Turing correction to the probability estimates; that is, rather than estimating the probability of the transition from a tag i to a tag j as the count of transition from i to j in the training corpus divided by the total frequency of tag i, one was added to the count of all transitions, and the total tag frequencies adjusted correspondingly. The purpose in using this correction is to correct for corpora which might not provide enough training data. On the largest tagsets, the correction was found to give a very slight reduction in the accuracy for Swedish, and to improve the French and English accuracies by about 1.5%, suggesting that it is indeed needed.
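As a concrete illustration of the correction described above (one added to every transition count, with the tag totals adjusted correspondingly), the following sketch estimates the smoothed transition probabilities. The toy tag sequences and tag names are invented for the example and are not drawn from the corpora used here.

```python
from collections import Counter

def smoothed_transitions(tag_sequences, tagset):
    """Estimate P(tag_j | tag_i) with one added to every transition count."""
    bigrams, unigrams = Counter(), Counter()
    for tags in tag_sequences:
        for prev, cur in zip(tags, tags[1:]):
            bigrams[(prev, cur)] += 1
            unigrams[prev] += 1
    n = len(tagset)  # tag totals adjusted by the number of added counts
    return {(i, j): (bigrams[(i, j)] + 1) / (unigrams[i] + n)
            for i in tagset for j in tagset}

probs = smoothed_transitions([["AT", "NN", "VBD"], ["AT", "NN"]],
                             tagset=["AT", "NN", "VBD"])
print(round(probs[("AT", "NN")], 3))  # 0.6 on this toy data
```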
Results
The first experiment, with no unknown words, gave accuracies on ambiguous words of 91-93% for Swedish, 94-97% for French and 85-90% for English. The results for English are surprisingly low (for example, on the Penn treebank, the tagger gives an accuracy of 95-96%), and may be due to long sequences of ambiguous words. The results appear in table 1. The figures include the degree of ambiguity, that is, the number of words in the corpus for which more than one tag was hypothesised. The accuracy is plotted against the size of the tagset in figures 1-3, where the numbers on the points correspond to the index of tagsets listed. Summarising the patterns:
Swedish Larger tagset generally gives higher accuracy. The results are quite widely spread.
French Clustered, with an accuracy on all tagsets which do not mark gender of around 96%-96.5%; when gender is marked 94%-94.5%.
English Larger tagset tends to give larger accuracy, though with less of a spread than for Swedish.
The sizes of the tagsets ranged from approximately 80-200 tags for Swedish, 35-90 for French, and 70-160 for English. As discussed above, it is not clear what would happen with larger tagsets, but some experiments based on the Susanne corpus and using tagsets ranging from 236 to 425 tags suggest that the trend to higher accuracy continues with even bigger tagsets.
In the second experiment, the test corpora included "unknown" words, which had not been seen during training, and for which the tagger hypothesises all open-class tags. Two results are interesting to look at here: the accuracy on the unknown words, and the accuracy on words which were ambiguous but were found in the training corpus. The results, in outline, are:
Swedish Similar results on known words to first experiment. For unknown words, smaller tagsets give higher accuracy.
French For ambiguous words, the pattern and accuracy were similar to first experiment. For unknown words, the pattern of accuracies was again similar, with tagsets that do not include gender giving accuracies of 51%-52%, and those which do giving 45%-46%.
English Ambiguous words gave similar results to the first test. Unknown words show a weak tendency to give higher accuracy on smaller tagsets.
Typical accuracies on ambiguous words were 90-92%, 93-97% and 83-88% for Swedish, French and English respectively, with the corresponding accuracies on unknown words being 25-50%, 45-52% and 44-58%. Table 2 lists the results, giving the tagset size, the degree of ambiguity and the accuracies on known ambiguous and unknown words. The ambiguous word accuracy is plotted in figures 4-6.
What seems to come out from these results is that there is not a consistent relationship between the size of the tagsets and the tagging accuracy. The most common pattern was for a larger tagset to give higher accuracy, but there were notable exceptions in French (where gender marking was the key factor), in Swedish unknown words (which show the reverse trend) and in English unknown words (which show no very clear trend at all). This seems to fit quite well with the difficulties that were suggested above in reasoning about the effect of tagset size. The main conclusion of this paper is therefore that the knowledge engineering component of setting up a tagger should concentrate on optimising the tagset for external criteria, and that the internal criterion of tagset size does not show sufficient generality to be taken into account without prior knowledge of properties of the language. Perhaps this is not too surprising, but it is useful to have an experimental confirmation that the linguistics matters rather than the engineering.
Unknown words
One final observation about the experiments: the accuracy on unknown words was very low in all of the tests, and was particularly bad in Swedish. The tagger used in the experiments took a very simpleminded approach to unknown words. An alternative that is often used is to limit the possible tags using a simple morphological analysis or some other examination of the surface form of the word. For example, in a variant of the English tagger which was not used in these experiments, a module which reduces the range of possible tags based on testing for only seven surface characteristics such as capitalisation and word endings improved the unknown word accuracy by 15-20%.
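The seven surface tests used in that module are not listed in the text, so the sketch below only illustrates the general idea of restricting the hypothesised tags for an unknown word; the rules and tag names are hypothetical.

```python
# Hypothetical sketch of narrowing the tags hypothesised for an unknown word
# using surface characteristics such as capitalisation and word endings.
OPEN_CLASS = {"NN", "NNS", "NP", "VB", "VBD", "JJ", "RB"}

def candidate_tags(word):
    tags = set(OPEN_CLASS)
    if word[0].isupper():
        tags &= {"NP", "NN", "JJ"}   # capitalised: proper noun likely
    if word.endswith("ly"):
        tags &= {"RB", "JJ"}         # -ly: adverb or adjective
    if word.endswith("ed"):
        tags &= {"VBD", "JJ"}        # -ed: past tense or participle
    return tags or set(OPEN_CLASS)   # fall back to all open-class tags

print(candidate_tags("zorbed"))      # {'VBD', 'JJ'}
```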
The results above show that if it were not for unknown words, there might be some argument for favouring larger tagsets, since they have some tendency to give a higher accuracy. A tentative experiment on the contribution of using morphological or surface analysis in French and Swedish was therefore carried out. Firstly, in both languages, the unknown words from the second experiment were looked up in the lexicon trained from the full corpus to see what tags they might have. For Swedish, 96% of the unknown words came from inflected classes, and had a single tag; for French the figure was about 60%. In both cases, very few of the unknown words (less than 1%) had more than one tag. This provides some hope that an inflectional analysis should help considerably with unknown words 2 . For confirmation, the list of French unknown words was given to a French grammarian, who predicted that it would be possible to make a good guess at the correct tag from the morphology for around 70% of the words, and could narrow down the possible tags to two or three for about a further 25%. However, further research is needed to determine how realistic these estimates turn out to be.
Conclusion
We have shown how a simple experiment in changing the tagset shows that the relationship between tagset size and accuracy is a weak one and is not consistent across languages. This seems to go against the "folklore" of the tagging community, where smaller tagsets are often held to be better for obtaining good accuracy. I have suggested that what is important is to choose the tagset required for the application, rather than to optimise it for the tagger. A follow-up to this work might be to apply similar tests in other languages to provide a further confirmation of the results, and to see if language families with similar characteristics can be identified. A further conclusion might be that when a corpus is being tagged by hand, a large tagset should be used, since it can always be reduced to a smaller one if the application demands it. Perhaps the major factor we have to set against this is the danger of introducing more human errors into the manual tagging process, by increasing the cognitive load on the human annotators.
Figure 1: Results for Swedish (no unknown words)
Figure 2: Results for French (no unknown words)
Figure 4: Results for Swedish (with unknown words)
Figure 6: Results for English (with unknown words)
Table 1: Results for test with no unknown words

Language  Index  Tagset  Size of tagset  Degree of ambiguity (%)  Ambiguous word accuracy (%)
Swedish   1      GNDC    194             41.57                    92.02
Swedish   2      GnDC    170             39.29                    92.23
Swedish   3      GNDc    167             41.49                    91.92
Swedish   4      GNdC    162             41.45                    91.67
Swedish   5      gNDC    152             41.54                    91.88
Swedish   6      GnDc    147             39.21                    92.04
Swedish   7      GndC    141             37.43                    91.86
Swedish   8      GNdc    140             41.37                    91.63
Swedish   9      gNDc    134             41.47                    91.82
Swedish   10     gNdC    126             41.42                    91.34
Swedish   11     Gndc    123             37.36                    91.74
Swedish   12     gnDC    121             39.18                    91.35
Swedish   13     gNdc    113             41.34                    91.29
Swedish   14     gnDc    105             39.11                    91.28
Swedish   15     gndC    96              37.32                    91.56
Swedish   16     gndc    86              37.25                    91.52
French    1      GNPV    87              49.77                    94.43
French    2      GNPv    80              49.75                    94.35
French    3      GNpV    76              49.77                    94.31
French    4      GNpv    74              49.75                    94.39
French    5      GnPV    64              49.49                    94.28
French    6      gNPV    62              47.48                    96.34
French    7      GnPv    57              49.47                    94.36
French    8      gNPv    55              47.64                    96.22
French    9      GnpV    53              49.49                    94.25
French    10     gNpV    51              47.48                    96.14
French    11     Gnpv    51              49.47                    94.36
French    12     gnPV    49              47.03                    96.34
French    13     gNpv    49              47.46                    96.10
French    14     gnPv    42              47.01                    96.30
French    15     gnpV    38              47.03                    96.30
French    16     gnpv    36              47.01                    96.34
English   1      CAPNV   153             47.95                    89.56
English   2      CApNV   150             47.47                    89.27
English   3      cAPNV   145             47.91                    89.50
English   4      CAPNv   140             47.95                    89.33
English   5      CAPnV   137             47.95                    89.20
English   6      CaPNV   129             47.95                    89.20
English   7      CAPnv   124             47.95                    89.01
English   8      capNV   119             47.43                    88.94
English   9      capnV   108             47.13                    88.45
English   10     capNv   106             47.43                    88.48
English   11     capnv   95              47.13                    85.42
Table 2: Results for test with unknown words

Language  Index  Tagset  Size of tagset  Degree of ambiguity (%)  Ambiguous word accuracy (%)  Unknown word accuracy (%)
Swedish   1      GNDC    194             52.60                    91.09                        23.42
Swedish   2      GnDC    170             50.56                    91.62                        26.28
Swedish   3      GNDc    167             52.59                    91.01                        24.17
Swedish   4      GNdC    162             52.48                    90.77                        28.48
Swedish   5      gNDC    152             52.57                    91.19                        29.33
Swedish   6      GnDc    147             50.55                    91.51                        26.48
Swedish   7      GndC    141             48.86                    91.40                        36.29
Swedish   8      GNdc    140             52.46                    90.71                        28.63
Swedish   9      gNDc    134             52.56                    91.11                        29.48
Swedish   10     gNdC    126             52.45                    90.43                        36.14
Swedish   11     Gndc    123             48.85                    91.32                        36.44
Swedish   12     gnDC    121             50.46                    91.02                        34.73
Swedish   13     gNdc    113             52.43                    90.42                        36.24
Swedish   14     gnDc    105             50.45                    90.94                        35.39
Swedish   15     gndC    96              48.75                    91.09                        48.00
Swedish   16     gndc    86              48.74                    91.02                        47.85
French    1      GNPV    87              58.37                    93.86                        45.74
French    2      GNPv    80              58.35                    93.86                        45.41
French    3      GNpV    76              58.37                    93.78                        45.58
French    4      GNpv    74              58.35                    93.78                        45.41
French    5      GnPV    64              58.09                    93.63                        45.74
French    6      gNPV    62              56.54                    96.50                        50.58
French    7      GnPv    57              58.07                    93.74                        46.08
French    8      gNPv    55              56.52                    96.46                        51.25
French    9      GnpV    53              58.09                    93.75                        45.74
French    10     gNpV    51              56.54                    96.38                        50.92
French    11     Gnpv    51              58.07                    93.78                        46.24
French    12     gnPV    49              56.09                    96.26                        50.92
French    13     gNpv    49              56.62                    96.34                        50.75
French    14     gnPv    42              56.08                    96.26                        52.45
French    15     gnpV    38              56.09                    96.26                        52.25
French    16     gnpv    36              56.08                    96.34                        52.59
English   1      CAPNV   153             55.65                    87.57                        46.49
English   2      CAPnv   150             55.17                    87.54                        46.69
English   3      cAPNV   145             55.60                    87.46                        46.29
English   4      CAPNv   140             55.65                    87.52                        46.09
English   5      CAPnV   137             55.65                    87.62                        51.70
English   6      CaPNV   129             55.65                    87.43                        46.29
English   7      CAPnv   124             55.65                    87.48                        51.70
English   8      capNV   119             55.13                    87.38                        46.49
English   9      capnV   108             55.00                    83.66                        56.51
English   10     capNv   106             55.13                    83.66                        44.29
English   11     capnv   95              55.00                    83.56                        55.11
1 The English and French corpora were kindly supplied to us by Tony McEnery, and are translation-equivalent. See McEnery et al. (1994) for details.
2 Although the figures here are likely to represent a best case, given how little of the corpora was held out.

References

Doug Cutting, Julian Kupiec, Jan Pedersen, and Penelope Sibun. 1992. A Practical Part-of-Speech Tagger. In Third Conference on Applied Natural Language Processing. Proceedings of the Conference, Trento, Italy, pages 133-140. Association for Computational Linguistics.
David Elworthy. 1994. Does Baum-Welch Re-estimation Help Taggers? In Fourth Conference on Applied Natural Language Processing. Proceedings of the Conference, Stuttgart, Germany, pages 53-58. Association for Computational Linguistics.
W. N. Francis and F. Kučera. 1992. Frequency Analysis of English Usage. Houghton Mifflin.
Roger Garside, Geoffrey Leech, and Geoffrey Sampson. 1987. The Computational Analysis of English: A Corpus-based Approach. Longman, London.
Elliott Macklovitch. 1992. Where the Tagger Falters. In Proceedings of the Fourth Conference on Theoretical and Methodological Issues in Machine Translation, pages 113-126.
Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.
A. M. McEnery, M. P. Oakes, R. Garside, J. Hutchinson, and G. N. Leech. 1994. The Exploitation of Parallel Corpora in Projects ET10/63 and CRATER. In International Conference on New Methods in Language Processing. Proceedings of the Conference, pages 108-115. Centre for Computational Linguistics, UMIST.
Bernard Merialdo. 1994. Tagging English Text with a Probabilistic Model. Computational Linguistics, 20(2):155-171.
| [] |
[
"Uncertainty Sentence Sampling by Virtual Adversarial Perturbation",
"Uncertainty Sentence Sampling by Virtual Adversarial Perturbation"
] | [
"Hanshan Zhang zhanghanshan@zuoyebang.com \nZuoyebang Education Technology (Beijing) Co\nLtd\n",
"Zhen Zhang zhangzhen@zuoyebang.com \nZuoyebang Education Technology (Beijing) Co\nLtd\n",
"Hongfei Jiang jianghongfei@zuoyebang.com \nZuoyebang Education Technology (Beijing) Co\nLtd\n",
"Yang Song songyang@zuoyebang.com \nZuoyebang Education Technology (Beijing) Co\nLtd\n"
] | [
"Zuoyebang Education Technology (Beijing) Co\nLtd",
"Zuoyebang Education Technology (Beijing) Co\nLtd",
"Zuoyebang Education Technology (Beijing) Co\nLtd",
"Zuoyebang Education Technology (Beijing) Co\nLtd"
] | [] | Active learning for sentence understanding attempts to reduce the annotation cost by identifying the most informative examples. Common methods for active learning use either uncertainty or diversity sampling in the poolbased scenario. In this work, to incorporate both predictive uncertainty and sample diversity, we propose Virtual Adversarial Perturbation for Active Learning (VAPAL) , an uncertainty-diversity combination framework, using virtual adversarial perturbation(Miyato et al., 2019)as model uncertainty representation. VAPAL consistently performs equally well or even better than the strong baselines on four sentence understanding datasets: AG-NEWS, IMDB, PUBMED, and SST-2, offering a potential option for active learning on sentence understanding tasks. | 10.48550/arxiv.2210.14576 | [
"https://export.arxiv.org/pdf/2210.14576v2.pdf"
] | 253,116,549 | 2210.14576 | 7df53a51a0efe5e6ad9cd27aad319426203fc238 |
Uncertainty Sentence Sampling by Virtual Adversarial Perturbation
Hanshan Zhang zhanghanshan@zuoyebang.com
Zuoyebang Education Technology (Beijing) Co
Ltd
Zhen Zhang zhangzhen@zuoyebang.com
Zuoyebang Education Technology (Beijing) Co
Ltd
Hongfei Jiang jianghongfei@zuoyebang.com
Zuoyebang Education Technology (Beijing) Co
Ltd
Yang Song songyang@zuoyebang.com
Zuoyebang Education Technology (Beijing) Co
Ltd
Uncertainty Sentence Sampling by Virtual Adversarial Perturbation
Active learning for sentence understanding attempts to reduce the annotation cost by identifying the most informative examples. Common methods for active learning use either uncertainty or diversity sampling in the poolbased scenario. In this work, to incorporate both predictive uncertainty and sample diversity, we propose Virtual Adversarial Perturbation for Active Learning (VAPAL) , an uncertainty-diversity combination framework, using virtual adversarial perturbation(Miyato et al., 2019)as model uncertainty representation. VAPAL consistently performs equally well or even better than the strong baselines on four sentence understanding datasets: AG-NEWS, IMDB, PUBMED, and SST-2, offering a potential option for active learning on sentence understanding tasks.
Introduction
In recent years, deep neural networks have achieved significant results in natural language processing (Yang et al., 2019; Devlin et al., 2019; Raffel et al., 2020; He et al., 2020). These neural models usually need a large amount of data, which has to be labeled and used for training. Active learning is an effective way to reduce both computational costs and human labor by selecting the most critical examples to label.
Two samplings, uncertainty sampling and diversity sampling are often used in common active learning methods. Uncertainty sampling (Lewis and Gale, 1994) selects difficult examples based on the model confidence score. In a batch setting, the sampled data points are near-identical (Ash et al., 2019). This phenomenon suggests that we might need to take diversity into account besides uncertainty. A naive combination of uncertainty and diversity, however, negatively affects the test accuracy (Hsu and Lin, 2015). Ash et al. (2019) presents a practical framework, BADGE, which combines uncertainty and diversity. BADGE measures uncertainty by gradient embedding and achieves diversity by clustering.
However, BADGE relies on model confidence scores, which require a model warm-start to calibrate. Specifically, the correctness likelihoods do not increase consistently with higher model confidence scores. To avoid the warm-start requirement, Yuan et al. (2020) present ALPS, a cold-start approach that uses a self-supervised loss (the Masked Language Model loss) as the sentence representation. Nevertheless, the MLM loss can be seen as a language model perplexity, not a direct downstream task-related measurement.
From another point of view, deep learning models are vulnerable to adversarial examples (Goodfellow et al., 2014; Kurakin et al., 2016), indicating that measuring uncertainty based on model confidence scores is overconfident. Virtual adversarial training (Miyato et al., 2015, 2016, 2019) modifies inputs with special perturbations, virtual adversarial perturbations, which change the output distribution of the model in the most significant way in the sense of KL-divergence. It has been validated in an industry-scale semi-supervised learning setting (Chen et al., 2021).
We propose VAPAL (Virtual Adversarial Perturbation for Active Learning) in this work. VAPAL computes a perturbation for each data point in the unlabeled pool to measure model uncertainty. With the perturbations acquired, VAPAL clusters the data points to achieve diversity, as in BADGE and ALPS. Since the virtual adversarial perturbation can be calculated without label information, our method is also advantageous over BADGE in that it does not require a warm start. Unlike ALPS, our method does not rely on a special self-supervised loss. In other words, VAPAL can be applied to any differentiable model.
We use four datasets (AGNEWS, IMDB, PUBMED, and SST-2) to evaluate VAPAL through two tasks, namely sentiment analysis and topic classification. Baselines cover uncertainty, diversity, and two SOTA hybrid active learning methods (BADGE, ALPS).
Our main contributions are as follows:
• We take Virtual Adversarial Perturbation to measure model uncertainty in sentence understanding tasks for the first time. The local smoothness is treated as model uncertainty, which relies less on the poorly calibrated model confidence scores.
• We present VAPAL (Virtual Adversarial Perturbation for Active Learning) to combine uncertainty and diversity in a combination framework.
• We show that VAPAL method is equally good or better than the baselines in four tasks. Our methods can successfully replace the gradientbased representation of BADGE. Furthermore, it does not rely on specific self-supervised loss, unlike Masked Language Model Loss used in ALPS.
Related Work
To reduce the cost of labeling, active learning seeks to select the most informative data points from the unlabeled data for humans to label. The learner model is then trained on the newly labeled data, and the process repeats. Prior active learning sampling methods primarily focus on uncertainty or diversity. Uncertainty sampling methods are the most popular and widely used strategies, selecting the difficult examples to label (Lewis and Gale, 1994; Joshi et al., 2009; Houlsby et al., 2011). Diversity sampling selects a subset of data points from the pool to effectively represent the whole pool distribution (Geifman and El-Yaniv, 2017; Sener and Savarese, 2017; Gissin and Shalev-Shwartz, 2019). A successful active learning method requires the incorporation of both aspects, but the exact implementation is still open for discussion.
Recently, hybrid approaches that combine uncertainty and diversity sampling have also been proposed. Naive combination frameworks are shown to be harmful to the test accuracy and rely on hyperparameters (Hsu and Lin, 2015). Aiming for sophisticated combination frameworks, Ash et al. (2019) propose Batch Active Learning By Diverse Gradient Embeddings (BADGE), and Yuan et al. (2020) propose Active Learning by Processing Surprisal (ALPS). They follow the same framework that first builds uncertainty representations for unlabeled data and clustering for diversity. BADGE measures data uncertainty as the gradient magnitude with respect to parameters in the final (output) layer and forms a gradient embedding based data representation. However, according to Yuan et al. (2020), BADGE has two main issues: reliance on warm-starting and computational inefficiency. ALPS builds data embeddings from self-supervised loss (Masked Language Model loss) (Yuan et al., 2020). Nevertheless, the MLM loss is an indirect proxy for model uncertainty in the downstream classification tasks, and ALPS might work only with a pre-trained language model using MLM.
What else can be used as a model uncertainty representation and be efficiently combined with diversity sampling? The virtual adversarial perturbation from virtual adversarial training (Miyato et al., 2019) is a promising option. Deep learning methods often face possible over-fitting in model generalization, especially when the training set is relatively small. In adversarial training, adversarial attacks are utilized to approximate the smallest perturbation for a given latent state to cross the decision boundary (Goodfellow et al., 2014; Kurakin et al., 2016). This has proven to be an important proxy for assessing the robustness of the model. Moreover, when labeled data is scarce, virtual adversarial training (VAT) does not require true label information and thus makes full use of the unlabeled data. VAT can be seen as a regularization method based on the Local Distributional Smoothness (LDS) loss. LDS is defined as a negative measure of the smoothness of the model distribution under local perturbations around input data points, in the sense of KL-divergence (Miyato et al., 2019). The virtual adversarial perturbation can be crafted without label information, which helps alleviate the warm-starting issue BADGE faces. Yu and Pao (2020) roughly rank grouped examples of model predictions by their LDS scores. Our method is inspired by the same research line (Miyato et al., 2019) but differs from it in several ways. Our method aims to project data into a model smoothness representation space rather than a rough scalar score, so it is more effective. We introduce the virtual adversarial perturbation as a sentence representation in which model uncertainty is inherently expressed. Furthermore, we consider both uncertainty and diversity in a uniform framework based on this informative representation.
Methods
Notation
We first introduce the notation used for the active learning task. Each sentence x_i is a sequence of tokens:

x_i = <t_0, . . . , t_s>, t_s ∈ V    (1)

where the t_s are tokens, V is the vocabulary, and x_i is drawn from a discrete space D. Second, the labeled set for the classification task is

L = {(x_i, y_i) | x_i ∈ D, y_i ∈ Y, i = 1, . . . , L}    (2)

where the labels belong to the set Y = {1, . . . , C}. Besides the labeled set, we also have an unlabeled data pool,

U = {x_u | x_u ∈ D, u = 1, . . . , U}    (3)

Thus the total data set is D = L + U. Let f(x; θ) = σ(V · h(x; W)) : D → Y with parameters θ = (W, V). We fine-tune the learner model f(x; θ) on the labeled set L, with the pre-trained BERT model h(x; W) as the encoder and an attached sequence classification head (V).
Virtual Adversarial Perturbation
Using model smoothness as a regularization strategy to avoid over-fitting during model training is effective in general. Virtual adversarial perturbation was introduced by Miyato et al. (2015, 2016, 2019) for calculating the local smoothness of the conditional label distribution around each input data point. With a small perturbation r, we can compute the KL-divergence as follows:

D_KL(r, x_i, θ) = D_KL(p(y | x_i, θ) || p(y | x_i + r, θ))    (4)

r^i_v-adv = argmax_r {D_KL(r, x_i, θ) : ||r||_2 ≤ ε}    (5)

where r^i_v-adv is the optimal virtual adversarial perturbation for data point x_i. Note that r^i_v-adv is the most sensitive direction of the model distribution p(y | x_i, θ) in the sense of KL-divergence. To find the optimal r_v-adv, we can approximate it with the power iteration method (Golub and van der Vorst, 2000):

d ← ∇_r D_KL(r, x_i, θ̂) |_{r = ξd}    (6)

With a larger number of power iterations I_p, the approximation improves monotonically.

The LDS (Local Distributional Smoothness) loss can then be calculated (Miyato et al., 2015, 2016, 2019) as:

LDS(x_i, θ) = −D_KL(r^i_v-adv, x_i, θ)    (7)

The loss can be thought of as a negative measure of model local smoothness given input x_i.
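The following is a minimal PyTorch sketch of the power-iteration approximation in Eqs. (4)-(6). It assumes a differentiable `logits_fn` and continuous inputs of shape (batch, dim), e.g. embeddings; the function and argument names (`xi`, `eps`, `n_power` mirroring ξ, ε, I_p) are our own illustration rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def virtual_adversarial_perturbation(logits_fn, x, xi=10.0, eps=1.0, n_power=1):
    """Approximate r_v-adv (Eqs. 4-6) for inputs x, given logits_fn(x) -> logits."""
    with torch.no_grad():
        p = F.softmax(logits_fn(x), dim=-1)              # p(y | x, theta)
    d = torch.randn_like(x)
    d = d / (d.norm(dim=-1, keepdim=True) + 1e-8)        # random unit direction
    for _ in range(n_power):                             # power iteration, Eq. (6)
        d.requires_grad_(True)
        log_q = F.log_softmax(logits_fn(x + xi * d), dim=-1)
        kl = F.kl_div(log_q, p, reduction="batchmean")   # D_KL(p || p(. | x + xi*d))
        d = torch.autograd.grad(kl, d)[0].detach()
        d = d / (d.norm(dim=-1, keepdim=True) + 1e-8)
    return eps * d   # r_v-adv scaled to the epsilon-ball; -D_KL at x + r gives LDS
```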
Virtual Adversarial Perturbation for Active Learning

Algorithm 1 VAPAL
Input: neural network f(x; θ), unlabeled data pool U, query size m, number of iterations T, number of power-method iterations I_p, batch size B
1: for t ← 1, . . . , T do
2:   for all examples x in U \ L_{t-1} do
3:     compute r^x_v-adv
4:   end for
5:   V_t ← {r^x_v-adv | x ∈ U \ L_{t-1}}
6:   M ← k-means(V_t) with k = m
7:   Q ← {argmin_{x ∈ U \ L_{t-1}} ||c − r^x_v-adv|| | c ∈ M}
8:   L_t ← L_{t-1} ∪ Q
9:   Train a model θ_t on L_t
10: end for
11: return final model θ_T

VAPAL, described in Algorithm 1, starts by drawing an initial set of m examples using k-MEANS on the r^i_v-adv vectors to find the m examples nearest to the m centers. Its main process has three computations at each step t: an r_v-adv computation, a sampling computation, and a fine-tuning computation for f(x; θ). In detail, at each time step t, for every x_i ∈ U we compute r^i_v-adv. Given these virtual adversarial perturbations {r^i_v-adv : x_i ∈ U}, VAPAL selects a set of m data points by sampling via k-MEANS (Lloyd, 1982). The algorithm asks the oracle for the labels of the sampled examples, retrains the model, and repeats.
More detail on each of the main computations (the r_v-adv computation, the sampling computation, and the fine-tuning of f(x; θ)) is given in the following sections.
Modified r v−adv Computation
Unlike computer vision and speech tasks, it is not trivial for VAPAL to compute r_v-adv directly on sentences (the input space), because text is drawn from a discrete space. We therefore modify the computation as follows:

D_KL(r, h_i, θ) = D_KL(p(y | h_i, θ) || p(y | h_i + r, θ))    (8)

d ← ∇_r D_KL(r, h_i, θ̂) |_{r = ξd}    (9)

where h_i = h(x_i; W). Instead of applying the attack in the discrete input space, we use the [CLS] token embedding from the pre-trained BERT encoder h(x_i; W) to represent data points in D, so r_v-adv lies in R^d. We regard r^i_v-adv as a measurement of how uncertain the model is about example x_i in the sense of KL-divergence, and it does not need to treat ŷ as the true label. Thus the first limitation of BADGE, reliance on warm-starting, does not have a major impact on VAPAL. Regarding the second limitation of BADGE, computational inefficiency, the gradient embedding in BADGE is g_x ∈ R^{Cd}, where C is the number of labels and d is the input hidden dimension of the last projection layer. In contrast, r_v-adv has only d dimensions, which largely reduces the cost of computing distances in the clustering step.
The Sampling step: k-MEANS Clustering
After computing the set of virtual adversarial perturbations V_t = {r^x_v-adv | x ∈ U} for the unlabeled pool, we apply k-MEANS to cluster the perturbations (in our experiments, k-MEANS is slightly better than k-MEANS++). We select the m data points whose virtual adversarial perturbations are nearest to the m cluster centers. The final set of sentences Q is sent to the oracle to label.
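A small sketch of this selection step follows, assuming the perturbations have been stacked into a NumPy array aligned with the pool indices; the variable and function names are ours, and the handling of ties or duplicate selections is left out.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_queries(perturbations, pool_indices, m):
    """Cluster the r_v-adv vectors and return the pool index nearest to each center."""
    centers = KMeans(n_clusters=m, n_init=10).fit(perturbations).cluster_centers_
    queries = []
    for c in centers:
        nearest = int(np.argmin(np.linalg.norm(perturbations - c, axis=1)))
        queries.append(pool_indices[nearest])
    return queries
```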
Fine-Tuning
We sample a batch of m = 100 examples from the training data set, query the labels, and then move the labeled set out of the unlabeled data pool for the active learning simulation. The initial BERT encoder h(x; W) is a pre-trained BERT-based model. We fine-tune the classifier f(x; θ_t) on the labeled data set and report accuracy on the test set for each iteration. To avoid warm-starting (Ash and Adams, 2020), we do not continue fine-tuning the model f(x; θ_{t-1}) from the previous iteration. The total number of iterations is set to 10. A total of 10000 examples is collected for each classification task in the end.
Experiments
We evaluate VAPAL on sentiment analysis, sentence classification for three different domains: news articles, medical abstracts, and sentiment reviews. For the labeling, we directly use the label information in four data sets. In all experiments, we report the averaged test F1 score result of five runs with different random seeds.
Dataset
We follow (Yuan et al., 2020) and use IMDB (Maas et al., 2011),SST-2 (Socher et al., 2013), PUBMED (Dernoncourt and Lee, 2017) and AG-NEWS (Zhang et al., 2015). We summarised four data sets' statistics in Table.1.
Baselines
We compare VAPAL against four baseline methods: two warm-start methods (Entropy, BADGE) and two cold-start methods (RAND, ALPS):
RAND directly samples data points from the unlabeled pool U with a uniform distribution.
Entropy selects the examples with the highest predictive entropy under the current model (Lewis and Gale, 1994; Wang and Shang, 2014).
BADGE (Ash et al., 2019) computes a loss gradient embedding g_x = ∂/∂θ_out L_CE(f(x; θ), ŷ_x), where ŷ_x = argmax_{y ∈ Y} f(x; θ_t)_y, with respect to the model's last-layer parameters for each data point in the unlabeled pool U. The i-th block of g_x is

(g_x)_i = (f(x; θ) − 1(ŷ = i)) h(x; W)    (10)

A batch of data is then selected from the cluster centers of k-MEANS++ (Arthur and Vassilvitskii, 2007) run on the gradient embedding set. (An illustrative sketch of the Entropy and BADGE acquisition signals is given after this list.)
ALPS (Yuan et al., 2020) passes the unmasked sequence example x i through BERT and MLM head, randomly samples 15% tokens to compute the cross-entropy loss against the target labels. Using these MLM losses for each data point in the pool, they use k-MEANS to select a batch set of examples for labeling.
For the warm-start methods, data is randomly sampled at the start. The implementation of these four methods is based on the code from Yuan et al. (2020). 1
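Below is an illustrative NumPy sketch of the two warm-start acquisition signals described above (predictive entropy and the BADGE gradient embedding of Eq. 10). It is a simplification under our own naming, not the referenced implementations.

```python
import numpy as np

def entropy_scores(probs):
    """Predictive entropy per example; higher means more uncertain."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def badge_embeddings(probs, hidden):
    """Gradient embedding of Eq. (10): (f(x) - onehot(argmax)) outer h(x; W)."""
    onehot = np.eye(probs.shape[1])[probs.argmax(axis=1)]
    return ((probs - onehot)[:, :, None] * hidden[:, None, :]).reshape(len(probs), -1)

probs = np.array([[0.7, 0.2, 0.1], [0.4, 0.35, 0.25]])
hidden = np.random.randn(2, 8)
print(entropy_scores(probs).round(3), badge_embeddings(probs, hidden).shape)
```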
Implementation Details
The batch size for fine-tuning is set to 32 with three epochs. The maximum sequence length is set to 128 for all data sets. We use AdamW (Loshchilov and Hutter, 2019) with a learning rate of 2e-5, β1 = 0.9, β2 = 0.99, and no warm-up learning rate schedule. For calculating r_v-adv, the number of power iterations I_p is 10, ε = 1, and ξ = 10.
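For reference, the hyperparameters above can be collected as in the sketch below; the dictionary layout and the use of torch.optim.AdamW are our own illustration of this configuration, not the authors' training script.

```python
import torch

# Values reported above; the variable names are ours.
HPARAMS = dict(batch_size=32, epochs=3, max_seq_length=128,
               lr=2e-5, betas=(0.9, 0.99),
               vat_power_iterations=10, vat_eps=1.0, vat_xi=10.0)

def build_optimizer(model):
    return torch.optim.AdamW(model.parameters(),
                             lr=HPARAMS["lr"], betas=HPARAMS["betas"])
```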
Following (Yuan et al., 2020), the uncased BERT-BASE 2 is used as encoder h(x; θ) for three tasks, IMDB,SST-2,AGNEWS. For PUBMED, the pretrained BERT is from SCIBERT (Beltagy et al., 2019) 3 . All experiments are run on one GeForce GTX 2080 GPU and 2.50GHz Intel(R) Xeon(R) Platinum 8255C CPU.
Result and Discussion
The test F1 score across all data sets and baseline acquisition functions is presented in Figure 1. VAPAL performs equally well or even better than the other baseline approaches, especially on PUBMED and SST-2. Meanwhile, on AGNEWS, most methods yield similar results except ENTROPY. The poor performance of Entropy indicates that the uncertainty method based on model confidence scores does suffer from poorly calibrated models.
Similarly, we can see that BADGE also achieves low performance, even though it uses both uncertainty and diversity. It indicates that the constructed gradient embeddings from the assumed target label do not benefit data sampling.
On PUBMED and SST-2, VAPAL achieved a larger performance margin than other methods most of the time excluding the early stages of ALPS on SST-2. VAPAL consistently outperforms BADGE, suggesting that our method can be a successful replacement of BADGE. It can also be concluded that ALPS and VAPAL are compatible with each other from Figure.1.
Unexpectedly, We do not observe the same performance-boosting reported in (Yuan et al., 2020) between ALPS and BADGE on PUBMED dataset (Figure.1c). The stability of AL methods against randomness needs to be further explored in the future. The performance differences of ALPS on four datasets, especially the large drop 2 https://huggingface.co/transformers/ 3 https://github.com/allenai/scibert on PUBMED and the second-best performance on SST-2, show the limitation of using perplexity (loss) of pretrained language model as model uncertainty measure. The correlation of perplexity and the difficulty of an example in downstream tasks should be evaluated in future research.
It is interesting to observe that the test F1 scores of the different methods converge in the end as the number of sampled data points increases. The performance gap between the AL methods and fully supervised training still needs to be closed. This observation highlights the early stages as a future direction for AL strategies.
Is r_v-adv the Best Choice for Applying VAT to AL?
Different approaches could be used to introduce the idea of VAT into the uncertainty sampling stage. In this section, we discuss two alternatives to VAPAL.
LDR_class (Yu and Pao, 2020) uses the Local Distribution Roughness LDR(x_i, θ) = D_KL(r^i_v-adv, x_i, θ), a measure of model local smoothness given input x_i, as the score function. It selects data points with LDR(x_i) ≥ PRT, where PRT is a percentile rank threshold, groups the data points according to the model's predicted label ŷ, and selects the same amount of data from each group according to their LDR scores. We use PRT = 90 for evaluation, following the original setting.
LDS_vec: instead of using the averaged KL score, we consider the output of the KL-divergence loss, a C-way vector, as the representation for each data point and then apply k-MEANS as in VAPAL.
The performance of these two methods and our VAPAL is shown in Figure 2. From Figure 2, LDS_vec generally performs better than LDR_class on AGNEWS, PUBMED, and SST-2, but not on IMDB.
From this observation, we might conclude that the uncertainty-diversity framework, which combines a vector representation of model local smoothness with clustering methods, is the better option. Comparing VAPAL and LDS_vec, r_v-adv is indicated to be the best uncertainty proxy measurement among these options.
Is Rand Starting Seed Examples Important?
To answer the question of whether random initial starting examples are important for the virtual adversarial training methods, we compare two approaches for obtaining the first hundred seed examples: sampling them uniformly at random, or selecting them directly with the proposed methods using a randomly initialized classification head. From Figure 3, the VAT methods can select important examples without random initial starting seeds, although they can also suffer from bad initial starting examples. The performance gaps between the RAND and non-RAND variants of VAPAL reflect that VAPAL can guide model training with the help of more informative sampled examples, especially on PUBMED and SST-2. VAPAL's performance is also more stable than the others, which indicates that random starting seed examples might not be important for VAPAL.
Can VAPAL apply to GPT2SequenceClassification?
The virtual adversarial perturbation can be applied to any differentiable deep model. Can it work well with a pre-trained causal language model such as GPT2? To answer this question, we evaluate ALPS and VAPAL with GPT2 on AGNEWS and SST-2, reported in Figure 4. We set two masking probabilities, 15% and 100%, for ALPS; all other settings remain the same. VAPAL consistently outperforms ALPS on both data sets, with an especially large gap in the early stages on SST-2, as shown in Figure 4b.
Conclusion
We propose a novel active learning acquisition function called VAPAL. As a proxy for the model's uncertainty, we introduce the virtual adversarial perturbation into active learning for sentence understanding tasks for the first time. We use the perturbation to represent data points and select the most representative points by clustering, following the uncertainty-diversity framework of BADGE and ALPS. With the help of the virtual adversarial perturbation, we can acquire uncertain examples without label information. Experiments on various sentence classification tasks and domains demonstrate that VAPAL is another potential option for active learning in sentence understanding tasks. Our empirical results show that further improvement is still needed, such as finding more sophisticated local smoothness measurements to encode uncertainty and diversity information for natural language understanding tasks. Another exciting direction is the sensitivity of active learning methods to random seeds, as mentioned above.
Figure 1: Average test F1 score of the active learning simulation over 10 iterations with 100 sentences per iteration for the four data sets. The numbers are the averaged test F1 scores of five runs with different random seeds. Overall, VAPAL selects more informative data points, resulting in a higher test F1 score, especially on the PUBMED data set. The fully supervised performance (Full) is also reported.
Figure 2: Evaluating three ways of applying VAPAL to active learning. The two vector-based data point representations, VAPAL and LDS_vec, generally outperform the score-based LDR_class.
Figure 3: Is Rand Starting Seeds Important? RAND-* methods use the first 100 seed points from uniform sampling; the other methods directly use the proposed sampling strategy.
Figure 4: Generalization ability: test F1 score with respect to the label size when training with GPT2 as the encoder, (a) Dataset: AGNEWS, (b) Dataset: SST-2. VAPAL consistently outperforms the current state-of-the-art ALPS, which shows its ability with a pre-trained causal language model.
Table 1: Summary of the sentence classification data sets used.
1 https://github.com/forest-snow/alps
4 The GPT2 pre-trained model is from https://huggingface.co/gpt2
5 For the classification task, we use the sequence classifier from https://huggingface.co/docs/transformers/model_doc/gpt
References

David Arthur and Sergei Vassilvitskii. 2007. k-means++: The Advantages of Careful Seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms.
Jordan T. Ash and Ryan P. Adams. 2020. On Warm-Starting Neural Network Training. In Advances in Neural Information Processing Systems.
Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. 2019. Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A Pretrained Language Model for Scientific Text. pages 3615-3620.
Luoxin Chen, Francisco Garcia, Varun Kumar, He Xie, and Jianhua Lu. 2021. Industry Scale Semi-Supervised Learning for Natural Language Understanding.
Franck Dernoncourt and Ji Young Lee. 2017. PubMed 200k RCT: a Dataset for Sequential Sentence Classification in Medical Abstracts. pages 308-313.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL-HLT 2019, pages 4171-4186.
Yonatan Geifman and Ran El-Yaniv. 2017. Deep Active Learning over the Long Tail.
Daniel Gissin and Shai Shalev-Shwartz. 2019. Discriminative Active Learning.
Gene H. Golub and Henk A. van der Vorst. 2000. Eigenvalue Computation in the 20th Century. Journal of Computational and Applied Mathematics, 123(1):35-65.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and Harnessing Adversarial Examples. In 3rd International Conference on Learning Representations (ICLR 2015).
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. DeBERTa: Decoding-enhanced BERT with Disentangled Attention.
Neil Houlsby, Ferenc Huszár, Zoubin Ghahramani, and Máté Lengyel. 2011. Bayesian Active Learning for Classification and Preference Learning.
Wei-Ning Hsu and Hsuan-Tien Lin. 2015. Active Learning by Learning. In Proceedings of the National Conference on Artificial Intelligence, pages 2659-2665.
Ajay J. Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. 2009. Multi-class Active Learning for Image Classification. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 2372-2379.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial Examples in the Physical World. In 5th International Conference on Learning Representations (ICLR 2017), Workshop Track.
David D. Lewis and William A. Gale. 1994. A Sequential Algorithm for Training Text Classifiers. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 3-12.
Stuart P. Lloyd. 1982. Least Squares Quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled Weight Decay Regularization. In 7th International Conference on Learning Representations (ICLR 2019).
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning Word Vectors for Sentiment Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA.
Takeru Miyato, Andrew M. Dai, and Ian Goodfellow. 2016. Adversarial Training Methods for Semi-Supervised Text Classification.
Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2019. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1979-1993.
Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. 2015. Distributional Smoothing with Virtual Adversarial Training. In 4th International Conference on Learning Representations (ICLR 2016).
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research, 21:1-67.
Ozan Sener, Silvio Savarese, Active Learning for Convolutional Neural Networks: A Core-Set Approach. Iclr2018. Ozan Sener and Silvio Savarese. 2017. Active Learn- ing for Convolutional Neural Networks: A Core-Set Approach. Iclr2018, pages 1-9.
Reasoning With Neural Tensor Networks for Knowledge Base Completion. Richard Socher, Danqi Chen, Christopher Manning, Danqi Chen, Andrew Ng, Neural Information Processing Systems. Richard Socher, Danqi Chen, Christopher Manning, Danqi Chen, and Andrew Ng. 2013. Reasoning With Neural Tensor Networks for Knowledge Base Completion. Neural Information Processing Sys- tems (2003), pages 926-934.
A new active labeling method for deep learning. Dan Wang, Yi Shang, 10.1109/IJCNN.2014.6889457Proceedings of the International Joint Conference on Neural Networks. the International Joint Conference on Neural NetworksDan Wang and Yi Shang. 2014. A new active label- ing method for deep learning. Proceedings of the International Joint Conference on Neural Networks, pages 112-119.
XLNet: Generalized autoregressive pretraining for language understanding. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, V Quoc, Le, Advances in Neural Information Processing Systems. 32Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Infor- mation Processing Systems, 32(NeurIPS):1-18.
Virtual Adversarial Active Learning. Feng Chin, Hsing Kuo Yu, Pao, 10.1109/BigData50022.2020.9378021Proceedings -2020 IEEE International Conference on Big Data, Big Data 2020. -2020 IEEE International Conference on Big Data, Big Data 2020Chin Feng Yu and Hsing Kuo Pao. 2020. Virtual Adver- sarial Active Learning. Proceedings -2020 IEEE In- ternational Conference on Big Data, Big Data 2020, pages 5323-5331.
Cold-start Active Learning through Self-supervised Language Modeling. Michelle Yuan, Hsuan-Tien Lin, Jordan Boyd-Graber, 10.18653/v1/2020.emnlp-main.637Michelle Yuan, Hsuan-Tien Lin, and Jordan Boyd- Graber. 2020. Cold-start Active Learning through Self-supervised Language Modeling. pages 7935- 7948.
Character-level Convolutional Networks for Text Classification. Xiang Zhang, Junbo Zhao, Yann Lecun, Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level Convolutional Networks for Text Classification. pages 1-9.
| [
"https://github.com/allenai/scibert",
"https://github.com/forest-snow/alps"
] |
[
"Pretrained Transformers for Offensive Language Identification in Tanglish",
"Pretrained Transformers for Offensive Language Identification in Tanglish"
] | [
"Sean Benhur \nPSG College of Arts and Science\nCivil Aerodrome Post\nCoimbatoreIndia\n",
"Kanchana Sivanraju \nPSG College of Arts and Science\nCivil Aerodrome Post\nCoimbatoreIndia\n"
] | [
"PSG College of Arts and Science\nCivil Aerodrome Post\nCoimbatoreIndia",
"PSG College of Arts and Science\nCivil Aerodrome Post\nCoimbatoreIndia"
] | [] | This paper describes the system submitted to Dravidian-Codemix-HASOC2021: Hate Speech and Offensive Language Identification in Dravidian Languages (Tamil-English and Malayalam-English). This task aims to identify offensive content in code-mixed comments/posts in Dravidian Languages collected from social media. Our approach utilizes pooling the last layers of pretrained transformer multilingual BERT for this task which helped us achieve rank nine on the leaderboard with a weighted average score of 0.61 for the Tamil-English dataset in subtask B. After the task deadline, we sampled the dataset uniformly and used the MuRIL pretrained model, which helped us achieve a weighted average score of 0.67, the top score in the leaderboard. Furthermore, our approach to utilizing the pretrained models helps reuse our models for the same task with a different dataset. Our code and models are available in GitHub 1 | null | [
"https://arxiv.org/pdf/2110.02852v4.pdf"
] | 238,407,877 | 2110.02852 | 6c226925b9756eb224e9ab6f4ed5cd5666ff1037 |
Pretrained Transformers for Offensive Language Identification in Tanglish
Sean Benhur
PSG College of Arts and Science
Civil Aerodrome Post
Coimbatore, India
Kanchana Sivanraju
PSG College of Arts and Science
Civil Aerodrome Post
Coimbatore, India
Pretrained Transformers for Offensive Language Identification in Tanglish
Hate Speech, Offensive Content, BERT, Transformer
This paper describes the system submitted to Dravidian-Codemix-HASOC2021: Hate Speech and Offensive Language Identification in Dravidian Languages (Tamil-English and Malayalam-English). The task aims to identify offensive content in code-mixed comments/posts in Dravidian languages collected from social media. Our approach pools the last layers of the pretrained multilingual BERT transformer, which helped us achieve rank nine on the leaderboard with a weighted average score of 0.61 for the Tamil-English dataset in subtask B. After the task deadline, we sampled the dataset uniformly and used the MuRIL pretrained model, which helped us achieve a weighted average score of 0.67, the top score on the leaderboard. Furthermore, because our approach builds on pretrained models, our models can be reused for the same task on a different dataset. Our code and models are available on GitHub 1
Introduction
In the era of the internet, people from various age groups engage with social media, which has become a one-stop shop for everything from learning to entertainment. However, it is also filled with offensive and disturbing content that is potentially harmful to everyone [1]. To prevent this, an automated system for identifying and flagging offensive content should be developed. Although a substantial amount of work has been done on identifying offensive content in major languages like English [2], it is challenging to identify and flag offensive content in low-resource languages, since many users tend to write their language in English script, which is called code-switching or code-mixing [3,4,5]. Developing NLP systems for code-mixed text is difficult because datasets are scarce [6,7,8,9] and there are no clear patterns in these texts; the spelling and context may vary from user to user.
Dravidian languages are under-resourced in natural language processing [10]. The name Dravidian was derived from Tamil; Dravidian means Tamil [11], and the Dravidian languages are Tamili languages [12]. Tamil is a language spoken by Tamils in Tamil Nadu, India, Sri Lanka, and the Tamil diaspora worldwide, with official recognition in India, Sri Lanka, and Singapore [13,14,15]. The current Tamil script was developed from the Tamili script, the Vatteluttu alphabet, and the Chola-Pallava script. It has 12 vowels, 18 consonants, and one aytam (voiceless velar fricative) [16,17,18,19]. The Tamil script is also used to write minority languages including Saurashtra, Badaga, Irula, and Paniya. In Tolkappiyam (about 5,320 BCE), Tamil "eluttu" means "sound, letter, phoneme", and thus covers the sounds of the Tamil language as well as how they are produced (phonology) [20,21,22,23]. All the Tamili (Dravidian) languages evolved from the Tamil language [24,25].
HASOC2021: Hate Speech and Offensive Content Identification is a shared task that promotes research on offensive language identification in code-mixed languages such as Tamil-English and Malayalam-English [6]. The dataset consists of comments/posts collected from YouTube and other social media, and each comment/post is annotated with an offensive language label at the comment/post level. The dataset also exhibits class imbalance, reflecting real-world scenarios.
In this paper, we present our system developed for HASOC 2021; the rest of the paper is organized as follows. Section 2 discusses related work on offensive language identification and natural language processing for under-resourced languages. Section 3 presents our methodology, covering dataset preprocessing, model architecture, and training procedures. Section 4 discusses our results. Finally, Section 5 concludes with a summary and future work.
Related Work
Offensive language identification has been widely studied in multiple languages. Shared tasks like HASOC-19 [26] dealt with hate speech and offensive language identification in Indo-European languages. HASOC-Dravidian-CodeMix at FIRE 2020 [27, 28] was the first shared task for identifying offensive content in Tamili languages. Previous work on Tamili languages on hope speech [29, 30], troll meme detection [31], and multimodal sentiment analysis [9] has paved the way for research in Tamili languages.
Researchers have used a wide variety of techniques for offensive language identification. There has been previous work [32] using classical machine learning models with efficient feature generation. Other researchers [33, 34] have used a ULMFiT model [35] and a pretrained XLM-RoBERTa model with translated and transliterated texts for this task.
Methodology
This section briefly describes our methodology for this task, including data preparation, model architecture, and training strategies. For this HASOC 2021 competition, we only use the datasets that were provided for the HASOC task. Table 1 shows the statistics of the train and dev distribution.
Dataset
The dataset given for the subtask, Offensive Language Identification in Tamil-English, consists of YouTube comments in code-mixed form, containing text written in both the native script and the Roman script. For training our model, we concatenate the training and dev sets. We remove URLs, English stopwords, @username mentions, NaN values, emojis, and punctuation; this preprocessing is applied to the train, dev, and test sets. After the task deadline, we sampled the dataset uniformly to handle the class imbalance problem in this dataset, which helped us improve our score. Table 1 shows the statistics of the given dataset after preprocessing.
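For illustration, the cleaning and uniform-sampling steps described above can be sketched as follows (the column names, regular expressions, and the balanced-downsampling reading of "uniform sampling" are assumptions for this sketch, not the exact code used for the submission):

import re
import string
import pandas as pd
from nltk.corpus import stopwords  # assumes the NLTK English stopword list has been downloaded

EN_STOPWORDS = set(stopwords.words("english"))
URL_RE = re.compile(r"https?://\S+|www\.\S+")
MENTION_RE = re.compile(r"@\w+")
EMOJI_RE = re.compile(r"[\U00010000-\U0010FFFF]", flags=re.UNICODE)  # rough emoji filter

def clean_comment(text: str) -> str:
    # remove URLs, @username mentions, emojis, punctuation, and English stopwords
    text = URL_RE.sub(" ", text)
    text = MENTION_RE.sub(" ", text)
    text = EMOJI_RE.sub(" ", text)
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = [t for t in text.split() if t.lower() not in EN_STOPWORDS]
    return " ".join(tokens)

def prepare(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["text", "label"])                 # drop NaN rows
    df["text"] = df["text"].astype(str).map(clean_comment)
    # one reading of "uniform sampling": downsample every class to the smallest class size
    n = df["label"].value_counts().min()
    return df.groupby("label", group_keys=False).sample(n=n, random_state=42)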
Model Architecture
We use pretrained transformer models with a custom pooled output for the task of identifying offensive content. We use the mBERT and MuRIL pretrained models from Hugging Face checkpoints. In this section, we describe the pooling operations applied on top of the pretrained models and the pretrained models themselves.
Attention Pooler: In this method, the attention operation described in the equation below is applied to the last hidden state of the pretrained transformer; empirically, this helps the model learn the contribution of individual tokens. Finally, the pooled output is passed to a linear layer for label prediction.

a = softmax(w_a tanh(W_h h)) h    (1)

where W_h and w_a are learnable weights and h is the last hidden state of the transformer.

y = softmax(W a + b)    (2)
Mean Pooler: In this method, the average of the last hidden layer of the pretrained transformer is taken, which acts like a pooling layer in a convolutional neural network. An alternative is max pooling, but max pooling selects only the tokens with the most salient features rather than utilizing all the tokens. Since our dataset is code-mixed and the spelling of tokens is not consistent, we choose the mean pooling approach.
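For concreteness, both pooling heads can be sketched in PyTorch on top of a Hugging Face checkpoint (the checkpoint name, number of labels, and layer sizes are illustrative assumptions, not the authors' released implementation):

import torch
import torch.nn as nn
from transformers import AutoModel

class PooledClassifier(nn.Module):
    def __init__(self, model_name="google/muril-base-cased", num_labels=2,
                 pooling="attention", dropout=0.5):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.pooling = pooling
        # attention pooler: token scores from a small MLP, softmax-normalized over time
        self.attn = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state   # (B, T, H)
        mask = attention_mask.unsqueeze(-1).float()                         # (B, T, 1)
        if self.pooling == "attention":
            scores = self.attn(h).masked_fill(mask == 0, -1e4)
            weights = torch.softmax(scores, dim=1)
            pooled = (weights * h).sum(dim=1)        # weighted sum of token states, as in Eq. (1)
        else:
            # mean pooling over non-padding tokens
            pooled = (h * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
        return self.classifier(self.dropout(pooled))  # logits for Eq. (2)

The same head can be switched between mBERT ("bert-base-multilingual-cased") and MuRIL by changing model_name.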
mBERT: The multilingual version of BERT [36]. This model was pretrained with the same pretraining strategy employed for BERT, namely Masked Language Modeling (MLM) and Next Sentence Prediction (NSP), on the Wikipedia dumps of the top 104 languages. To account for the data imbalance arising from the size of Wikipedia for a given language, exponentially smoothed weighting of the data was performed during data creation and WordPiece vocabulary creation. This results in high-resource languages being under-sampled and low-resource languages being over-sampled.
MuRIL: The MuRIL [37] pretrained model is trained on 16 different Indian languages with Masked Language Modeling (MLM) and Translated Language Modeling (TLM) objectives. This model outperforms mBERT on all the tasks in XTREME [38].
Training
Although finetuning transformers gives better results and is dominant across the leaderboards of various NLP competitions, transformer models are still unstable due to catastrophic forgetting [39]. For this offensive language identification task, we therefore choose our hyperparameters carefully. We finetune our custom models with binary cross-entropy loss and the AdamW optimizer, which decouples weight decay from the optimization step. A linear learning rate scheduler with an initial learning rate of 2e-5 is used with this training strategy. The training hyperparameters are listed in Table 2.
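A compact sketch of this optimization setup with the hyperparameters of Table 2 (the model and data loader are assumed to exist; the loop is illustrative rather than the exact training script):

import torch
from transformers import get_linear_schedule_with_warmup

def train(model, train_loader, epochs=5, lr=2e-5, weight_decay=0.01, eps=1e-6):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr,
                                  weight_decay=weight_decay, eps=eps)
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=0,
        num_training_steps=epochs * len(train_loader))
    loss_fn = torch.nn.CrossEntropyLoss()  # reduces to binary cross-entropy for two classes
    model.train()
    for _ in range(epochs):
        for batch in train_loader:
            logits = model(batch["input_ids"], batch["attention_mask"])
            loss = loss_fn(logits, batch["labels"])
            loss.backward()
            optimizer.step()
            scheduler.step()
            optimizer.zero_grad()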
Results and Discussion
In this HASOC 2021 competition, teams were ranked by the weighted F1-score of their classification systems. This section discusses our experimental results; since we use both the training and dev sets for training, the train set in the dataset distribution refers to the concatenation of the given train and dev sets. W-Precision, W-Recall, and W-F1 Score denote the weighted precision, weighted recall, and weighted F1-score, respectively. Table 3 shows the results obtained before the task deadline using the Attention Pooler and mBERT without sampling the dataset. After the task deadline, we uniformly sampled the dataset and ran our experiments on MuRIL and mBERT with Attention Pooling and Mean Pooling; these results are provided in Table 4 and Table 5. From the above results, we conclude that the pretrained MuRIL model with the MeanPooler performs better than the others. One can also infer from the gap between training and test scores that the model suffers from overfitting, and that sampling the dataset uniformly is a crucial step for increasing the score.
Conclusion
In this paper, we have presented our solution for the offensive language identification task, which uses the pretrained transformers mBERT and MuRIL. We achieve rank 9 on the leaderboard and a 0.67 F1-score after the task deadline. For future research, we will consider improving the results by using external datasets and other pretrained models, and by reducing the generalization error of the model.
Table 1: Distribution of the Tamil-English dataset

Distribution    Data
Train           4937
Test            1000
Table 2: Hyperparameters used across experiments

Hyperparameter              Value
Learning rate               2e-5
Maximum sequence length     512
Batch size                  8
Epochs                      5
Weight decay                0.01
Dropout                     0.5
AdamW epsilon               1e-06
Table 3: Results before the task deadline

Dataset distribution    W-Precision    W-Recall    W-F1 Score
Train                   0.90           0.91        0.92
Test                    0.61           0.60        0.61
Table 4: Results on the train dataset

Model                         W-Precision    W-Recall    W-F1 Score
mBERT with AttentionPooler    0.93           0.90        0.93
mBERT with MeanPooler         0.90           0.92        0.91
MuRIL with AttentionPooler    0.88           0.88        0.88
MuRIL with MeanPooler         0.93           0.93        0.93
Table 5: Results on the test data

Model                         W-Precision    W-Recall    W-F1 Score
mBERT with AttentionPooler    0.65           0.65        0.65
mBERT with MeanPooler         0.61           0.61        0.61
MuRIL with AttentionPooler    0.63           0.63        0.63
MuRIL with MeanPooler         0.67           0.67        0.67
[1] S. U. Hegde, A. Hande, R. Priyadharshini, S. Thavareesan, R. Sakuntharaj, S. Thangasamy, B. Bharathi, and B. R. Chakravarthi. 2021. Do Images really do the Talking? Analysing the significance of Images in Tamil Troll meme classification. arXiv preprint arXiv:2108.03886.
[2] B. R. Chakravarthi, R. Priyadharshini, R. Ponnusamy, P. K. Kumaresan, K. Sampath, D. Thenmozhi, S. Thangasamy, R. Nallathambi, and J. P. McCrae. 2021. Dataset for Identification of Homophobia and Transophobia in Multilingual YouTube Comments. arXiv preprint arXiv:2109.00227.
[3] S. Suryawanshi, B. R. Chakravarthi, M. Arcan, S. Little, and P. Buitelaar. 2021. TrollsWithOpinion: A Dataset for Predicting Domain-specific Opinion Manipulation in Troll Memes. arXiv preprint arXiv:2109.03571.
[4] A. Hande, K. Puranik, K. Yasaswini, R. Priyadharshini, S. Thavareesan, A. Sampath, K. Shanmugavadivel, D. Thenmozhi, and B. R. Chakravarthi. 2021. Offensive Language Identification in Low-resourced Code-mixed Dravidian languages using Pseudo-labeling. arXiv preprint arXiv:2108.12177.
[5] A. Hande, S. U. Hegde, R. Priyadharshini, R. Ponnusamy, P. K. Kumaresan, S. Thavareesan, and B. R. Chakravarthi. 2021. Benchmarking multi-task learning for sentiment analysis and offensive language identification in under-resourced Dravidian languages. arXiv preprint arXiv:2108.03867.
[6] B. R. Chakravarthi, P. K. Kumaresan, R. Sakuntharaj, A. K. Madasamy, S. Thavareesan, P. B, S. Chinnaudayar Navaneethakrishnan, J. P. McCrae, and T. Mandl. 2021. Overview of the HASOC-DravidianCodeMix Shared Task on Offensive Language Detection in Tamil and Malayalam. In Working Notes of FIRE 2021 - Forum for Information Retrieval Evaluation. CEUR.
[7] R. Priyadharshini, B. R. Chakravarthi, S. Thavareesan, D. Chinnappa, T. Durairaj, and E. Sherly. 2021. Overview of the DravidianCodeMix 2021 Shared Task on Sentiment Detection in Tamil, Malayalam, and Kannada. In Forum for Information Retrieval Evaluation, FIRE 2021. Association for Computing Machinery.
[8] B. R. Chakravarthi, R. Priyadharshini, V. Muralidaran, N. Jose, S. Suryawanshi, E. Sherly, and J. P. McCrae. 2021. DravidianCodeMix: Sentiment Analysis and Offensive Language Identification Dataset for Dravidian Languages in Code-Mixed Text. arXiv preprint arXiv:2106.09460.
[9] B. R. Chakravarthi, K. Soman, R. Ponnusamy, P. K. Kumaresan, K. P. Thamburaj, J. P. McCrae, et al. 2021. DravidianMultiModality: A Dataset for Multi-modal Sentiment Analysis in Tamil and Malayalam. arXiv preprint arXiv:2106.04853.
[10] B. R. Chakravarthi. 2020. Leveraging orthographic information to improve machine translation of under-resourced languages. Ph.D. thesis, NUI Galway.
[11] D. Chinnappa and P. Dhandapani. 2021. Tamil lyrics corpus: Analysis and experiments. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 1-9, Kyiv. Association for Computational Linguistics.
[12] J. J. Andrew. 2021. JudithJeyafreedaAndrew@DravidianLangTech-EACL2021: Offensive language detection for Dravidian code-mixed YouTube comments. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 169-174, Kyiv. Association for Computational Linguistics.
[13] K. Sarveswaran, G. Dias, and M. Butt. 2021. ThamizhiMorph: A morphological parser for the Tamil language. Machine Translation, 35:37-70.
[14] B. Bharathi and A. S. A. 2021. SSNCSE_NLP@DravidianLangTech-EACL2021: Offensive language identification on multilingual code mixing text. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 313-318, Kyiv. Association for Computational Linguistics.
[15] B. Bharathi and A. S. A. 2021. SSNCSE_NLP@DravidianLangTech-EACL2021: Meme classification for Tamil using machine learning approach. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 336-339, Kyiv. Association for Computational Linguistics.
[16] S. Thavareesan and S. Mahesan. 2020. Word embedding-based Part of Speech tagging in Tamil texts. In 2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS), pages 478-482.
[17] S. Thavareesan and S. Mahesan. 2020. Sentiment Lexicon Expansion using Word2vec and fastText for Sentiment Prediction in Tamil texts. In 2020 Moratuwa Engineering Research Conference (MERCon), pages 272-276.
[18] S. Thavareesan and S. Mahesan. 2019. Sentiment Analysis in Tamil Texts: A Study on Machine Learning Techniques and Feature Representation. In 2019 14th Conference on Industrial and Information Systems (ICIIS), pages 320-325.
[19] D. Thenmozhi and C. Aravindan. 2018. Ontology-based Tamil-English cross-lingual information retrieval system. Sadhana, 43:1-14.
[20] R. Sakuntharaj and S. Mahesan. 2016. A novel hybrid approach to detect and correct spelling in Tamil text. In 2016 IEEE International Conference on Information and Automation for Sustainability (ICIAfS), pages 1-6. IEEE.
[21] R. Sakuntharaj and S. Mahesan. 2017. Use of a novel hash-table for speeding-up suggestions for misspelt Tamil words. In 2017 IEEE International Conference on Industrial and Information Systems (ICIIS), pages 1-5. IEEE.
[22] R. Sakuntharaj and S. Mahesan. 2018. Detecting and correcting real-word errors in Tamil sentences. Ruhuna Journal of Science, 9.
[23] R. Sakuntharaj and S. Mahesan. 2018. A refined POS tag sequence finder for Tamil sentences. In 2018 IEEE International Conference on Information and Automation for Sustainability (ICIAfS), pages 1-6. IEEE.
[24] A. R and S. C N. 2019. Building discourse parser for Thirukkural. In Proceedings of the 16th International Conference on Natural Language Processing, pages 18-25, Hyderabad, India. NLP Association of India, International Institute of Information Technology.
[25] C. Subalalitha. 2019. Information extraction framework for Kurunthogai. Sadhana, 44:1-6.
[26] P. Majumder, D. Patel, S. Modha, and T. Mandl. 2019. Overview of the HASOC track at FIRE 2019: Hate speech and offensive content identification in Indo-European languages. doi:10.1145/3368567.3368584.
[27] B. R. Chakravarthi, R. Priyadharshini, V. Muralidaran, S. Suryawanshi, N. Jose, E. Sherly, and J. P. McCrae. 2020. Overview of the track on sentiment analysis for Dravidian languages in code-mixed text. In Forum for Information Retrieval Evaluation, pages 21-24.
[28] T. Mandl, S. Modha, A. Kumar M, and B. R. Chakravarthi. 2020. Overview of the HASOC track at FIRE 2020: Hate speech and offensive language identification in Tamil, Malayalam, Hindi, English and German. In Forum for Information Retrieval Evaluation, pages 29-32.
[29] B. R. Chakravarthi. 2020. HopeEDI: A multilingual hope speech detection dataset for equality, diversity, and inclusion. In Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media, pages 41-53, Barcelona, Spain (Online). Association for Computational Linguistics.
[30] B. R. Chakravarthi and V. Muralidaran. 2021. Findings of the shared task on hope speech detection for equality, diversity, and inclusion. In Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion, pages 61-72, Kyiv. Association for Computational Linguistics.
[31] S. Suryawanshi and B. R. Chakravarthi. 2021. Findings of the shared task on troll meme classification in Tamil. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 126-132, Kyiv. Association for Computational Linguistics.
[32] B. Lakshmanan and S. K. Ravindranath. 2020. Theedhum Nandrum@Dravidian-CodeMix-FIRE2020: A sentiment polarity classifier for YouTube comments with code-switching between Tamil, Malayalam and English. arXiv:2010.03189.
[33] C. Vasantharajan and U. Thayasivam. 2021. Hypers@DravidianLangTech-EACL2021: Offensive language identification in Dravidian code-mixed YouTube comments and posts. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 195-202, Kyiv. Association for Computational Linguistics.
[34] S. Sai and Y. Sharma. 2021. Towards offensive language identification for Dravidian languages. In Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages, pages 18-27, Kyiv. Association for Computational Linguistics.
[35] J. Howard and S. Ruder. 2018. Fine-tuned language models for text classification. CoRR, abs/1801.06146.
[36] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805.
[37] S. Khanuja, D. Bansal, S. Mehtani, S. Khosla, A. Dey, B. Gopalan, D. K. Margam, P. Aggarwal, R. T. Nagipogu, S. Dave, S. Gupta, S. C. B. Gali, V. Subramanian, and P. Talukdar. 2021. MuRIL: Multilingual representations for Indian languages. arXiv:2103.10730.
[38] J. Hu, S. Ruder, A. Siddhant, G. Neubig, O. Firat, and M. Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. arXiv:2003.11080.
[39] M. Mosbach, M. Andriushchenko, and D. Klakow. 2021. On the stability of fine-tuning BERT: Misconceptions, explanations, and strong baselines. arXiv:2006.04884.
| [] |
[
"DialogCC: Large-scale Multi-Modal Dialogue Dataset",
"DialogCC: Large-scale Multi-Modal Dialogue Dataset"
] | [
"Young-Jun Lee \nKAIST\n\n",
"Byungsoo Ko \nNAVER Vision\n\n",
"Han-Gyu Kim \nNAVER Clova Speech\n\n",
"Ho-Jin Choi \nKAIST\n\n"
] | [
"KAIST\n",
"NAVER Vision\n",
"NAVER Clova Speech\n",
"KAIST\n"
] | [] | As sharing images in an instant message is a crucial factor, there has been active research on learning a imagetext multi-modal dialogue model. However, training a well-generalized multi-modal dialogue model is challenging because existing multi-modal dialogue datasets contain a small number of data, limited topics, and a restricted variety of images per dialogue. In this paper, we present a multi-modal dialogue dataset creation pipeline that involves matching large-scale images to dialogues based on CLIP similarity. Using this automatic pipeline, we propose a large-scale multi-modal dialogue dataset, DialogCC, which covers diverse real-world topics and various images per dialogue. With extensive experiments, we demonstrate that training a multi-modal dialogue model with our dataset can improve generalization performance. Additionally, existing models trained with our dataset achieve state-of-theart performance on image and text retrieval tasks. The source code and the dataset will be released after publication. | 10.48550/arxiv.2212.04119 | [
"https://export.arxiv.org/pdf/2212.04119v1.pdf"
] | 254,408,863 | 2212.04119 | 89924944fe899fc26e8dfa447900ca849f47b76a |
DialogCC: Large-scale Multi-Modal Dialogue Dataset
Young-Jun Lee
KAIST
Byungsoo Ko
NAVER Vision
Han-Gyu Kim
NAVER Clova Speech
Ho-Jin Choi
KAIST
DialogCC: Large-scale Multi-Modal Dialogue Dataset
/passing2961/DialogCC
As sharing images in an instant message is a crucial factor, there has been active research on learning an image-text multi-modal dialogue model. However, training a well-generalized multi-modal dialogue model is challenging because existing multi-modal dialogue datasets contain a small number of data, limited topics, and a restricted variety of images per dialogue. In this paper, we present a multi-modal dialogue dataset creation pipeline that involves matching large-scale images to dialogues based on CLIP similarity. Using this automatic pipeline, we propose a large-scale multi-modal dialogue dataset, DialogCC, which covers diverse real-world topics and various images per dialogue. With extensive experiments, we demonstrate that training a multi-modal dialogue model with our dataset can improve generalization performance. Additionally, existing models trained with our dataset achieve state-of-the-art performance on image and text retrieval tasks. The source code and the dataset will be released after publication.
Introduction
People share various images with each other when communicating via instant messaging tools. Such behavior increases social bonding (rapport) as well as engagement. The ability to share images is also necessary for a dialogue model for better bonding conversations. In the visual dialogue domain, the majority of previous works have focused on image-grounded dialogues, where two persons talk about given images [2,6,18,27,29,31,38,39,46,47,51]. In practical situations, humans actively share images during conversations rather than merely talking about a given image, which is called image sharing behavior [49]. Recent studies for image sharing have proposed multi-modal dialogue datasets, which are constructed either manually by crowd-sourcing (PhotoChat [49]) or automatically by utilizing image-text similarity (MMDD [20]).
Figure 1. Dataset statistics comparison. We compare our proposed dataset DialogCC with existing multi-modal dialogue datasets: MMDD [20] and PhotoChat [49]. Compared to the existing datasets, DialogCC contains large-scale dialogues and images while covering more diverse words and hypernyms. Moreover, learning with various images for the same dialogue or utterance in DialogCC would benefit a model to obtain better generalization.
However, existing multi-modal dialogue datasets have three significant limitations: (1) Scalability. Recent years have witnessed the success of large-scale multi-modal pretraining models [1,12,17,26,32], which benefited from the power of large-scale image-caption datasets [4,7,35,37,42]. Nevertheless, as shown in Figure 1 (# unique dialogs and # unique images), existing image-dialogue datasets [20,49] contain a small number of dialogues and images, which is insufficient for training a large-scale multi-modal dialogue model. (2) Diversity. A large-scale multi-modal dialogue model should be able to cover open-domain conversation. However, as illustrated in Figure 1 (# unique words and # unique hypernyms), existing datasets cover a limited number of words, topics, and domains. Such lack of diversity can also be found in conversational skills (e.g., empathizing with situations [15,24,33] and understanding personal information [14,50]), which are an important factor in human conversation. Existing datasets cover only a few conversational skills (see Section 3.2). (3) Generalization. Given the same dialogue and context, people can share different types of images. For example, for an utterance of "I love pets," one can share an image of a dog, and the other can share an image of a cat. Nonetheless, as shown in Figure 1 (# images / dialog and # images / utterances), existing datasets contain on average fewer than 3 images per dialogue and 1.7 images per utterance. A model trained with such datasets can be overfitted by memorizing those pairs of images and dialogues, resulting in a lack of generalization.
The objective of this work is to create a large-scale multimodal dialogue dataset in order to train a well-generalized multi-modal dialogue model for open-domain conversation.
To this end, we present an automatic pipeline for creating a multi-modal dialogue dataset and propose a large-scale multi-modal dialogue dataset, DialogCC, created by the automatic pipeline. The pipeline consists of two main filtering steps: source data filtering and multi-modal dialogue filtering. These filtering steps eliminate inappropriate images from large-scale images and dialogues based on CLIP similarity. As illustrated in Figure 1, DialogCC achieves better statistics compared to the existing datasets in terms of scalability, diversity, and generalization. In addition, extensive experiments demonstrate that a model trained with DialogCC achieves state-of-the-art performance in image and text retrieval tasks with enhanced generalization performance.
In summary, our main contributions are as follows: 1) We present an automatic pipeline for creating a multi-modal dialogue dataset that can create a large-scale dataset without human effort. 2) We propose a large-scale multi-modal dialogue dataset named DialogCC, which contains diverse images and dialogues consisting of various images per dialogue. 3) Extensive experiments demonstrate the effectiveness of our dataset, which achieves state-of-the-art performances.
Related Work
Image-Dialogue Dataset. In the visual dialogue domain, most previous studies are divided into two categories depending on whether the image is grounded in or shared during the dialogue. The image-grounded dialogue task aims to answer questions [2,6,18,36] or generate natural conversations [27,29,38,46,51] about given images. These datasets require machines to perceive and understand the given images, but in daily conversations we often share images relevant to the dialogue context. Hence, it is difficult to train dialogue agents to retrieve an appropriate image based on dialogue contexts in the image-grounded dialogue task.
Recently, the image-sharing dialogue task has been proposed to overcome this limitation; it predicts images semantically relevant to given dialogue contexts. Since there were no existing datasets for the image-sharing task, previous studies have focused on dataset construction. One existing dataset, PhotoChat [49], was manually constructed through a crowd-sourcing platform with Open Image Dataset V4 [19] as the source images. Manual construction can provide a high-quality dialogue dataset, but it is time-consuming and expensive, which makes it hard to scale to a large dataset. Another line of work [20] creates a 45k multi-modal dialogue dataset through an automatic pipeline built on the Visual Semantic Reasoning Network [21] (VSRN). They replace utterances with semantically relevant images based on the similarity between the image and utterance obtained from VSRN, and remove contextually incoherent images based on a threshold obtained through human annotation. However, both datasets are small and cover limited image objects and topics, as demonstrated in Figure 1. To this end, we construct a large-scale multi-modal dialogue dataset through an automatic pipeline.
Multi-Modal Dialogue Model. The multi-modal dialogue model is mainly categorized into retrieval and generative models. The retrieval model is to retrieve proper texts or images from the candidates given the dialogue contexts. The generative model is to generate responses given the dialogue contexts. For the retrieval model, most existing studies have adopted the dual encoder architecture consisting of a text encoder and image encoder [20,38,49]. For the generative model, many works are based on the encoderdecoder architecture [25,39,44,47]. In this paper, for fair comparisons, we evaluate our dataset by adopting the text retrieval model [20] and image retrieval model [49] as our baselines.
Multi-Modal Dialogue Dataset Creation
In this section, we propose DialogCC, a large-scale multi-modal dialogue dataset. In order to construct DialogCC, we introduce an automatic pipeline, which consists of two steps: (1) source data filtering and (2) multi-modal dialogue filtering. Besides, we conduct a comprehensive analysis of our dataset with respect to scalability, diversity, and generalization by comparing it with two existing datasets, MMDD [20] and PhotoChat [49].
CLIP-based Automatic Pipeline
We present an automatic pipeline for constructing DialogCC. The key idea of our pipeline is considering two types of similarities: utterance-image similarity and utterance-caption similarity. The overall pipeline is illustrated in Figure 2. In the following part of this section, we provide details about the DialogCC construction pipeline.
Figure 2. An overview of the automatic pipeline for multi-modal dialogue dataset creation. We collect source text-only dialogues and images, which survive thresholding based on the similarity score between the two modalities obtained from the CLIP model. Next, we combine the two similarity types, computed between utterance and image and between utterance and caption. We then remove unrelated images from the matched results.
Source Data Filtering
Source Dialogue. As source data, we collect five multi-turn text-only dialogue datasets that are publicly available online: Persona-Chat [50], EmpatheticDialogues [33], Wizard-of-Wikipedia [8], DailyDialog [22], and BlendedSkillTalk [40]. They are manually constructed via crowd-sourcing platforms, and each dataset is specialized in specific conversational skills. Persona-Chat covers getting to know each other based on given personal information. EmpatheticDialogues covers understanding and interpreting interlocutors' emotional situations and expressing adequate emotional reactions. Wizard-of-Wikipedia covers generating specific responses using knowledge or topics. DailyDialog contains daily life conversations with aspects such as emotion, topic, and dialog acts. Lastly, in BlendedSkillTalk, multiple skills (i.e., persona, empathy, and knowledge) are integrated into one conversation, as humans do. We incorporate the five dialogue datasets into one large dialogue dataset by removing duplicated dialogues.
Source Image-Caption Pairs. We choose Conceptual Captions 3M [37] (CC3M), which is widely used in multi-modal modeling [26,43] and in creating multi-modality datasets [30]. We obtain 2,712,320 image-caption pairs from its training and validation sets; the detailed collection process is described in the Appendix. Duplicated images and images whose image-caption similarity is smaller than a threshold of 0.185 are filtered out. After filtering, 2,440,485 image-caption pairs remain, which are divided into training / validation / test sets with a ratio of 5:1:1, resulting in 1.7M / 0.3M / 0.3M unique images. Note that our pipeline can work with any image-caption dataset, such as Conceptual Captions 12M [4] and RedCaps [7].
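A rough sketch of this deduplication, similarity filtering, and 5:1:1 split (assuming L2-normalized CLIP image and caption embeddings have already been computed; the dataframe and column names are illustrative):

import numpy as np
import pandas as pd

def filter_and_split(pairs: pd.DataFrame, img_emb: np.ndarray, cap_emb: np.ndarray,
                     threshold: float = 0.185, seed: int = 0):
    # cosine similarity per image-caption pair (embeddings are L2-normalized)
    sim = (img_emb * cap_emb).sum(axis=1)
    keep = (~pairs["image_url"].duplicated().values) & (sim >= threshold)
    pairs = pairs.loc[keep].sample(frac=1.0, random_state=seed)   # filter and shuffle
    n = len(pairs)
    n_train, n_val = 5 * n // 7, n // 7                           # 5:1:1 split
    return (pairs.iloc[:n_train],
            pairs.iloc[n_train:n_train + n_val],
            pairs.iloc[n_train + n_val:])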
Multi-Modal Dialogue Filtering
CLIP-based Similarity Calculation. In order to find images semantically relevant to a given utterance, we need meaningful textual and visual features from a multi-modal feature extractor f(·). The previous work [20] used a pre-trained Visual Semantic Reasoning Network [21] (VSRN) as f(·). In this work, we leverage the CLIP [32] model as f(·), which is widely used in previous studies [3,5,10,11] as a well-generalized open-domain model. We first extract the utterance feature vector (v_u = f(u)), caption feature vector (v_c = f(c)), and image feature vector (v_i = f(i)). We then calculate the utterance-image similarity following [20] by computing the cosine similarity of v_u and v_i.
Figure 3. MMDD [20] vs. DialogCC. In the first and second rows, both datasets contain semantically relevant images to the given utterances, while our dataset contains more and various images with different views or objects (e.g., dog breeds or backgrounds). In the last two rows, unlike MMDD, our dataset is highly relevant to the given utterance because of matching from large-scale and diverse source data and considering image captions. Images in the green box are from MMDD, images in the red box are from DialogCC, and the blue box contains utterances. More examples are in the Appendix.
Besides, in order to enhance the quality of utterance-image matching by additionally adopting the information provided by image captions, we also calculate the utterance-caption similarity. The results of the CLIP-based similarity calculation are shown in Figure 3. For example, in the last two rows of Figure 3, MMDD overlooks "hawaiian" or "play" in the utterances, which are important words, and only focuses on the words "dance" or "keyboard". This is because MMDD does not consider the image captions.
However, there is one problem to consider: how to combine these two similarity types. As reported in [23,41], there is a phenomenon called the modality gap in multi-modal modeling, where two different modalities (e.g., image and text) are separately distributed in the shared embedding space. This phenomenon causes scale differences between the utterance-image and utterance-caption similarities, so combining them directly would be biased toward the similarity with the larger scale. To alleviate this problem, z-score normalization is conducted on both types of similarities, where the mean and standard deviation values for each similarity type are calculated on the training set. The normalized similarities are linearly combined as follows:
S = α f_Z(s_c(v_u, v_i)) + (1 − α) f_Z(s_c(v_u, v_c))    (1)
where s_c(x, y) denotes the cosine similarity and f_Z represents z-score normalization. In this paper, we set α to 0.5 to reflect the two similarities equally. During the utterance-image matching process, a similarity matrix S of size N × M is computed, where N and M are the number of utterances and images, respectively.
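A small sketch of Eq. (1) with pre-computed, L2-normalized CLIP embeddings (array names are illustrative; in practice the z-score statistics are estimated on the training split):

import numpy as np

def combined_similarity(utt_emb, img_emb, cap_emb, alpha=0.5, stats=None):
    # utt_emb: (N, d); img_emb, cap_emb: (M, d)
    s_ui = utt_emb @ img_emb.T          # utterance-image cosine similarity
    s_uc = utt_emb @ cap_emb.T          # utterance-caption cosine similarity
    if stats is None:                   # fall back to statistics of the current matrices
        stats = (s_ui.mean(), s_ui.std(), s_uc.mean(), s_uc.std())
    m_i, sd_i, m_c, sd_c = stats
    # z-normalize each similarity type, then mix them with alpha = 0.5
    return alpha * (s_ui - m_i) / sd_i + (1 - alpha) * (s_uc - m_c) / sd_c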
Filtering. We found that there still exist unsuitable cases among the matched images found by CLIP-based similarity. To improve the quality of our dataset, we apply two additional simple filtering steps after similarity-based matching. In the first filtering stage, images whose scores are lower than a threshold τ1 are removed, where τ1 is the median value of all scores. The median is used instead of a heuristically determined threshold so that our pipeline can be applied to arbitrary datasets. Besides, we have observed that certain images are frequently matched with many utterances. As shown in Figure 4, the frequently matched images mostly contain general semantics (e.g., memes, questions), which go along with general utterances, rather than object-centric or event-centric semantics (e.g., "playing a soccer game"). This phenomenon can result in the model overfitting to such frequently matched images, which is harmful to generalization performance. Therefore, we propose to filter out such unsuitable images based on the frequency of being matched. In our method, a relatively determined threshold τ2 is used to filter out a specific ratio of images. For example, using the threshold τ2 = p25 means that only the bottom 25% of images by frequency of being matched are included in the constructed dataset. We conduct an ablation study on the text retrieval performance (i.e., the current turn prediction task) on our dataset by varying the relatively determined threshold τ2, as shown in Figure 5. The ablation study shows that τ2 = p75 is the best choice for both efficient training and performance. Unless otherwise noted, all experiments on DialogCC are conducted with τ2 = p75.
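The two thresholds can be sketched on top of the combined similarity matrix as follows (variable names are illustrative):

import numpy as np

def filter_matches(S: np.ndarray, freq_percentile: float = 75.0):
    # S: (N, M) combined utterance-image similarity matrix
    tau1 = np.median(S)                                       # score threshold
    matches = [np.where(S[n] >= tau1)[0] for n in range(S.shape[0])]
    freq = np.zeros(S.shape[1], dtype=int)                    # how often each image is matched
    for idx in matches:
        freq[idx] += 1
    tau2 = np.percentile(freq[freq > 0], freq_percentile)     # p75: keep the bottom 75% by frequency
    keep_image = freq <= tau2
    return [idx[keep_image[idx]] for idx in matches]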
Analysis of DialogCC
(1) Scalability: Table 1 shows the overall statistics of DialogCC compared to the existing datasets MMDD [20] and PhotoChat [49]. In general, DialogCC comprises 92k unique dialogues in total, which is roughly 3.7× and 7.7× larger than MMDD and PhotoChat, respectively. While the average number of turns per dialogue is smaller than in the other datasets, the average length of utterances is longer. Notably, DialogCC contains 651k unique images, which is approximately 54× larger than MMDD and PhotoChat. Our dataset is thus large-scale in both the number of dialogues and the number of images.
(2) Diversity: In Table 2, we compare the diversity of the datasets using the number of unique hypernyms from WordNet [28] and the number of unique words in dialogues and image captions. As WordNet covers nouns, verbs, adjectives, and adverbs, we only count nouns, filtering out hypernyms that appear less than ten times. In dialogues, compared to PhotoChat and MMDD, our dataset includes 3.7× and 1.4× more hypernyms and 3.4× and 3.2× more words, which implies that our dataset covers a wider variety of open-domain topics. In image captions, compared to PhotoChat and MMDD, our dataset includes 2.4× and 21.2× more hypernyms and 2.0× and 10.5× more words. These image caption statistics show the diversity of the images.
Table 2. Diversity comparison. We count the number of unique hypernyms from WordNet [28] and words in dialogues and image captions. We filter out a hypernym if it appears less than ten times in both dialogues and image captions. # hyp and # word denote the number of hypernyms and the number of unique words, respectively.
Figure 7. Conversational Skills in DialogCC vs. MMDD [20]. We count the number of dialogues corresponding to each conversational skill. DialogCC covers two more conversational skills (knowledge and blended skills) than MMDD.

(3) Generalization: In real-life scenarios, people can share images with different styles, views, or objects for the same dialogue and context. However, as shown in Table 1, the existing datasets include few images per dialogue and utterance. This does not reflect real-life scenarios and can cause an overfitting problem by forcing a model to memorize the pairs of images and dialogues. To handle this problem, our dataset contains many diverse images per dialogue and utterance, as shown in Figure 3 and Figure 6. There are an average of 34 images per dialogue and 4.1 images per utterance. Training a model with our dataset enhances generalization performance, which is experimentally shown in Section 4.2.3.
Experiments
To explore how our dataset affects both text and image retrieval tasks, we evaluate two baseline models: the text retrieval model used to evaluate MMDD [20] and the image retrieval model used to evaluate PhotoChat [49].
Experimental Setting
Task Definition
We explain the formulation of the two main tasks: text retrieval [20] and image retrieval [49]. The text retrieval task has two sub-tasks, current turn prediction and next turn prediction. Let us assume that we have a multi-modal dialogue D = {(u_j, i_j, c_j)}_{j=1}^{N}, where N denotes the number of dialogue turns and j = t is the turn at which an image-sharing behavior occurs. Each task is then formulated as follows. Current turn prediction is to predict the current response at turn t given the dialogue history {u_j}_{j=1}^{t-1} and the image i_t. Next turn prediction is to predict the next utterance at turn t+1 given the dialogue history {u_j}_{j=1}^{t} and the image i_t. Image retrieval is to retrieve the relevant image at turn t given the dialogue history {u_j}_{j=1}^{t-1}.
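All three tasks reduce to ranking a pool of candidates against an encoded query. A minimal sketch of this shared retrieval step is shown below; the encoders that produce the query and candidate vectors are abstracted away, and the function name is our own.

```python
import torch

def retrieve(query_vec: torch.Tensor, candidate_vecs: torch.Tensor, k: int = 1):
    """Return indices of the top-k candidates by dot-product similarity.

    query_vec:      (D,)   encoded query (dialogue history, optionally fused with the image)
    candidate_vecs: (N, D) encoded candidates (responses, next utterances, or images)
    """
    scores = candidate_vecs @ query_vec           # (N,) similarity scores
    return torch.topk(scores, k=k).indices

# Current turn prediction: query = fused(history u_1..u_{t-1}, image i_t), candidates = responses.
# Next turn prediction:    query = fused(history u_1..u_t,     image i_t), candidates = next utterances.
# Image retrieval:         query = encoded history u_1..u_{t-1},           candidates = images.
```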
Baseline Model
As baselines, we choose three models: BM25, a text retrieval model, and an image retrieval model. Brief descriptions of each baseline model follow; more detailed information is provided in the Appendix.

Table 3. Text retrieval performance. We compare the Recall@1 (%) performance of the text retrieval model trained on DialogCC (with a maximum of 10 images per utterance) and on MMDD.
BM25 [34] retrieves the response for the text retrieval task and the image for the image retrieval task using captions.
Text retrieval [20] consists of a dialogue encoder, response encoder, image encoder, and multi-modal encoder. We use the sum module, which adds the two feature vectors element-wise, as the multi-modal encoder.
Training. Since our dataset contains several images per utterance, we randomly choose one image for each utterance in every batch.
For text retrieval, we do not update the parameters of the image encoder, as this helps achieve the best performance on the text retrieval task [20]. On the other hand, we update the parameters of all encoders in the image retrieval model. In the validation steps, for memory efficiency and fast computation, we set the number of candidates to 100 for all retrieval tasks, which is the same setting as in [20].
Inference. The settings of the inference stage are almost the same as those of the validation steps, except that the inference stage uses the entire test set as candidates rather than only 100 of them.
Experimental Results
Text Retrieval Performance
We conduct an experiment with different training and evaluation datasets to observe whether our dataset can boost performance on the text retrieval tasks. As shown in Table 3, the model trained on MMDD performs poorly when evaluated on our dataset, implying that MMDD does not help the model understand various forms of images with similar semantic information. On the other hand, the model trained on DialogCC achieves better performance than the model trained on MMDD in all text retrieval tasks. This result indicates that our dataset improves performance in open-domain conversation, benefiting from its large scale, diversity, and number of images per dialogue.
Image Retrieval Performance
We also observe that training the image retrieval model with our dataset achieves the best performance on the PhotoChat dataset, as shown in Table 4. However, the model trained on the PhotoChat dataset achieves lower performance when evaluated on DialogCC. This result indicates that our dataset also improves performance on the image retrieval task, benefiting from the largest number of images per dialogue and utterance.
Effect of Maximum Number of Images
We conduct an experiment to verify whether learning with multiple images per utterance is beneficial to model performance. We evaluate the text retrieval performance while varying the maximum number of images among 1, 5, and 10. As shown in Figure 8, model performance mostly increases across metrics (R@{1,5,10}) as the maximum number of images grows. This demonstrates that showing various images per utterance enables the model to learn the semantic relationship between images and utterances rather than memorizing a fixed image-utterance pair, resulting in better generalization performance.
Robustness on Image Augmentation
A well-generalized text retrieval model should maintain its performance as much as possible even when the input image is distorted. Thus, we evaluate the performance of models trained on MMDD and DialogCC on an augmented MMDD dataset. We distort the input images of MMDD with several augmentation techniques, such as shearing and blurring, provided by imgaug [13]. In Table 5, the model trained on our dataset shows a smaller performance reduction than the model trained on the MMDD dataset. This result indicates that our dataset makes the model robust to input image distortion.
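As a rough illustration of how such distortions can be produced with imgaug, the snippet below builds three of the augmenters mentioned above; the specific parameter ranges are our own assumptions rather than the exact settings used in the experiments.

```python
import numpy as np
import imgaug.augmenters as iaa

# One augmenter per distortion type used in the robustness evaluation.
augmenters = {
    "shearing": iaa.Affine(shear=(-16, 16)),
    "gaussian_noise": iaa.AdditiveGaussianNoise(scale=(0, 0.05 * 255)),
    "gaussian_blur": iaa.GaussianBlur(sigma=(0.0, 3.0)),
}

# Stand-in for an MMDD image (H, W, C) in uint8.
image = np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8)

# Apply each augmenter independently to produce a distorted evaluation copy.
distorted = {name: aug(image=image) for name, aug in augmenters.items()}
```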
Robustness on Text Augmentation
To augment the dialogue, we replace randomly selected words (except stopwords) with synonyms [48]. Table 6 shows that the performance of the model trained on our dataset drops less when augmentation is applied to the input dialogue history. This result indicates that even though our dataset contains some noisy samples due to the automatic pipeline, the model trained on our dataset is more robust to text distortion.
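The synonym-replacement distortion can be sketched as follows in the style of EDA; the stopword list, the replacement ratio alpha, and the function name are illustrative assumptions, not the exact configuration used in the paper.

```python
import random
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in", "it", "you", "i"}

def synonym_replacement(utterance: str, alpha: float = 0.1) -> str:
    """Replace roughly `alpha` of the non-stopword tokens with a WordNet synonym."""
    tokens = utterance.split()
    candidates = [i for i, t in enumerate(tokens) if t.lower() not in STOPWORDS]
    random.shuffle(candidates)
    n_replace = max(1, int(alpha * len(tokens)))
    for idx in candidates[:n_replace]:
        synsets = wn.synsets(tokens[idx])
        lemmas = {l.name().replace("_", " ") for s in synsets for l in s.lemmas()} - {tokens[idx]}
        if lemmas:
            tokens[idx] = random.choice(sorted(lemmas))
    return " ".join(tokens)

print(synonym_replacement("I love taking my dog to the park", alpha=0.3))
```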
Conclusion
In this paper, we present an automatic pipeline for creating a multi-modal dialogue dataset that involves filtering with CLIP similarity. We also propose a large-scale multi-modal dialogue dataset, DialogCC, which is constructed by applying the automatic pipeline to five text-only dialogue datasets and the image-text pair dataset CC3M. In a comprehensive analysis, compared to the existing datasets MMDD and PhotoChat, DialogCC contains a larger number of unique hypernyms, words, and conversational skills, which indicates better diversity. Moreover, our dataset provides many and various images per dialogue, which benefits model generalization. Extensive experiments demonstrate that a model trained with DialogCC achieves state-of-the-art performance on image and text retrieval tasks while increasing model robustness.
Societal Impact. As reported in [45], even if we give a gender-neutral query to the CLIP [32] model, it sometimes retrieves images that raise gender-bias issues. We are concerned that this problem may also exist in our dataset, because we use the CLIP model to match relevant images to given utterances. For example, most utterances related to "hair designer" usually match images of women cutting hair. Therefore, an image retrieval model trained on our dataset may sometimes retrieve biased images. This problem should be considered carefully when building a multi-modal search model.
A. Details of Multi-Modal Dialogue Dataset Creation
A.1. Source Data Collection
This section describes how we collect the source dialogues and image-caption pairs used to create DialogCC.
A.1.1 Source Dialogue Collection
We collect five text-only dialogue datasets (i.e., Wizard-of-Wikipedia [6], Persona-Chat [31], EmpatheticDialogues [20], DailyDialog [13], and BlendedSkillTalk [25]) through the ParlAI [16] framework, which provides many dialogue datasets online.
A.1.2 Source Image-Caption Collection
We download the Conceptual Captions 3M [24] (CC3M) dataset from here 1. Since the CC3M dataset provides image URLs, we download images using the img2dataset 2 library, which is helpful for quickly downloading large-scale image collections from URLs. We store the downloaded images in the webdataset 3 format for efficient visual feature extraction with the CLIP [19] model. Note that because each image URL carries its own copyright, we only use openly available URLs as source image-caption data when creating DialogCC.
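A sketch of this download-and-store step is given below, assuming img2dataset's Python entry point, a headered TSV with "caption" and "url" columns, and a small shard range for the webdataset loader; the exact arguments and paths are illustrative assumptions rather than the settings we actually used.

```python
from img2dataset import download
import webdataset as wds

# Download CC3M images from their URLs and store them as webdataset shards.
download(
    url_list="cc3m.tsv",
    input_format="tsv",
    url_col="url",
    caption_col="caption",
    output_folder="cc3m_shards",
    output_format="webdataset",
    image_size=256,
    processes_count=16,
    thread_count=64,
)

# Iterate over the stored shards, e.g., to extract CLIP features afterwards.
dataset = (
    wds.WebDataset("cc3m_shards/{00000..00010}.tar")
    .decode("pil")
    .to_tuple("jpg", "txt")
)
for image, caption in dataset:
    pass  # feed `image` and `caption` to the CLIP image/text encoders
```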
A.2. Detailed Analysis of DialogCC
In this section, we provide a detailed analysis of DialogCC in terms of scalability, diversity, and generalization.

A.2.1 Scalability

Table A shows the full statistics of DialogCC compared to the existing datasets MMDD [12] and PhotoChat [30]. As mentioned above, DialogCC comprises 92k unique dialogues in total, which is roughly 3.7× and 7.7× larger than MMDD and PhotoChat, respectively. Although the average number of dialogue turns is smaller than in the existing datasets, the utterance length is longer, which indicates that our dataset contains more specific utterances that can increase interestingness and engagingness [23]. For example, given the utterance "how are you?", the response "I'm really good, because I'm going to paris today!" is more specific than the response "I'm good". Thus, a multi-modal dialogue generation model trained on our dataset can generate more specific responses, making conversations more attractive.

Figure A. Problematic Examples in MMDD [12]. Even if the two given utterances have different semantic meanings, MMDD matches the same image. Hence, a multi-modal dialogue model trained with MMDD will likely memorize this green image rather than deeply understand the input utterances. Images in the green box are from MMDD, images in the red box are from DialogCC, and the blue box contains utterances.

Furthermore, we provide detailed statistics of DialogCC according to the source dialogue datasets, as shown in Table B. Overall, KnowledgeCC includes the largest number of images per utterance and dialogue among the source dialogue datasets, and EmpathyCC has the smallest number. This result indicates that the CLIP similarity between an image and an utterance containing more object information (e.g., dog, bus, soccer) is relatively higher than the similarity between an image and an utterance related to an emotional situation.
A.2.2 Diversity
As shown in Section 3.2, DialogCC contains the largest number of unique hypernyms and unique words in both dialogues and image captions. In addition, our dataset covers various conversational skills with a balanced distribution, as illustrated in Figure 7 of the main paper. We further compare the diversity of the datasets in terms of part-of-speech (POS) using the POS tagger provided by spaCy 4. In total, compared to PhotoChat and MMDD, our dataset includes 4.5× and 2.2× more noun words, 13.4× and 3.2× more verb words, and 1.7× and 2.7× more adjective words, which suggests that our dataset covers a wider variety of words.

Table C. Part-of-Speech (POS) Comparison Results. We count the number of unique noun, verb, and adjective words in dialogues and image captions. # noun, # verb, and # adj denote the number of unique noun, verb, and adjective words, respectively.
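A small sketch of this kind of POS-based count with spaCy is shown below; the model name and the toy utterances are illustrative, not the exact setup used for Table C.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def count_unique_pos(texts):
    """Count unique noun / verb / adjective lemmas across a list of texts."""
    unique = {"NOUN": set(), "VERB": set(), "ADJ": set()}
    for doc in nlp.pipe(texts):
        for token in doc:
            if token.pos_ in unique:
                unique[token.pos_].add(token.lemma_.lower())
    return {pos: len(words) for pos, words in unique.items()}

print(count_unique_pos(["I went hiking with my dog", "She bought a pair of red sneakers"]))
```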
A.2.3 Generalization
In real-life conversations, various images can be shared even for the same utterance, depending on who shares the image. As shown in Table A, DialogCC contains more images per utterance and dialogue than the existing datasets, which indicates that our dataset successfully reflects this phenomenon. For example, as shown in Figure B, our dataset includes various images with similar semantic meanings for a given utterance, which can make the model more robust than a model trained with MMDD. In addition, MMDD matches the same image to two different utterances, as illustrated in Figure A. For example, in the first row of Figure A, the image in our dataset is relevant to the keyword "disney races" in the utterance, while the MMDD image is not. This can degrade generalization performance because a model trained on MMDD may prefer to memorize specific images rather than understand the dependency between image and utterance. We show that our dataset improves generalization performance in Section 4.2.4, Section 4.2.5, Section C.3, and Section C.4.

Figure B. Comparison Examples of Relevant Images in MMDD [12] vs. DialogCC. DialogCC contains various forms of images that are semantically similar to a given utterance. For example, in the top three rows, our dataset includes different kinds of "landscape", "a pair of shoes", and "two horses". Besides, in the fifth row, unlike MMDD, our dataset contains highly relevant images for the given utterance, benefiting from the utterance-caption similarity. Images in the green box are from MMDD, images in the red box are from DialogCC, and the blue box contains utterances.
A.3. Case Studies of DialogCC
A.3.1 Comparison examples to MMDD
Unlike MMDD [12], we create DialogCC using utterance-caption similarity as well as utterance-image similarity to improve the quality of DialogCC. To compare the quality of the multi-modal dialogue datasets, we present more comparison examples with MMDD in Figure B. In the last row of Figure B, our dataset includes more diverse images related to the "sneakers" mentioned in the given utterance.
A.3.2 Examples of DialogCC
We present more examples of DialogCC in Figure G and Figure D. Both figures show that DialogCC covers a variety of images, including various objects and scenes relevant to the given utterance, regardless of whether the conversation is short or long. For example, there are diverse images of graduation ceremonies, Christmas, traffic jams, and hiking with a dog. However, we sometimes found poor-quality examples in DialogCC, as illustrated in Figure F. In the second turn in the red box with a dotted line, the images are related to the given utterance, which includes the word or phrase "pretty animal" or "arctic". However, when we look at the previous utterance "Yes, I love huskys. They are a very pretty dog.", we can easily recognize that these matched images are inappropriate for the given utterance when considering the whole dialogue context. This is because when we calculate the two similarities between utterance-caption and utterance-image with the CLIP model, we do not extract dialogue feature vectors or compute similarity over the whole dialogue context. Such a problem also exists in MMDD and can be regarded as a typical limitation of the automatic methods [2,12,17] used to create several datasets. Nevertheless, recent studies have shown that large-scale multi-modal pre-trained models [1,9,11,15,19] achieve strong performance on various downstream tasks, benefiting from noisy large-scale image-caption datasets [3,4,21,22,26]. Therefore, this paper aims to create a large-scale multi-modal dialogue dataset, even if it contains some noise, rather than a small cleaned dataset.
B. Details of Experimental Settings
B.1. Baseline Models
As illustrated in Figure C, we present the architectures of the baseline models, i.e., the text retrieval model [12] and the image retrieval model [30]. We provide detailed descriptions of the baseline models below.
Text Retrieval Model [12]. The text retrieval model consists of a dialogue encoder, a response encoder, an image encoder, and a multi-modal encoder. The dialogue encoder encodes the whole dialogue history into a fixed-size representation using the BERT [5] model. The response encoder converts the response into a fixed-size representation using the same BERT model as the dialogue encoder. The pooled vector from the BERT model is used as the representation of the dialogue history and the response. When encoding the dialogue history, up to three turns just before the current turn are used, concatenated with the [SEP] special token. The image encoder extracts image feature vectors using ResNeXt-101 [29]. After extracting the text and image features, these features are fed into a fully-connected layer with the ReLU [18] activation function. The sum and attention modules are used to build a fused representation of the dialogue history and image. The sum module adds the two vectors element-wise. For the attention module, the dialogue history vector and the image feature vector are first concatenated and then passed through a Transformer [27] to obtain a contextualized multi-modal representation. Lastly, the dot product between the response feature vector and the multi-modal feature vector is computed to obtain the loss.
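A condensed PyTorch sketch of this dual-encoder with the sum fusion module is given below; it assumes pre-extracted ResNeXt image features, in-batch negatives for the loss, and illustrative class and variable names rather than the authors' implementation.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class TextRetrievalModel(nn.Module):
    """Minimal sketch of the text retrieval baseline with sum fusion."""

    def __init__(self, hidden=768, image_dim=2048):
        super().__init__()
        # One BERT shared by the dialogue and response encoders.
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.text_proj = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.image_proj = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())

    def encode_text(self, input_ids, attention_mask):
        pooled = self.text_encoder(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        return self.text_proj(pooled)

    def forward(self, dlg_ids, dlg_mask, rsp_ids, rsp_mask, image_feats):
        dlg = self.encode_text(dlg_ids, dlg_mask)      # (B, H) dialogue history
        rsp = self.encode_text(rsp_ids, rsp_mask)      # (B, H) candidate responses
        img = self.image_proj(image_feats)             # (B, H) pre-extracted image features
        query = dlg + img                              # "sum" multi-modal fusion
        logits = query @ rsp.t()                       # (B, B) scores against in-batch candidates
        targets = torch.arange(logits.size(0), device=logits.device)
        loss = nn.functional.cross_entropy(logits, targets)  # NLL over in-batch negatives
        return loss, logits
```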
Image Retrieval Model [30]. The image retrieval model consists of a dialogue encoder, a caption encoder 5 , and an image encoder. The dialogue encoder and caption encoder leverage two different BERT models, which means the parameters of the two BERT models are not shared. They use the contextualized representation vector corresponding to the position of [CLS] for dialogue history and a caption. For the image encoder, they use ResNet-152 [8]. They concatenate two feature vectors for image and caption to obtain a meaningful multi-modal representation. They then pass the concatenated vector into the fully-connected layers with the ReLU function. They encode the dialogue history into a fixed-size representation and pass it to the fully-connected layers with the ReLU function. They calculate the cosine similarity by computing the dot product of L2 normalized features for dialogue history and multi-modal.
B.2. Implementation Details
In addition to the implementation details described in the main paper, we explain further implementation details for the extended experiments here.

Figure C. Architectures of Baseline Models. We show the architectures of (a) the text retrieval model [12] and (b) the image retrieval model [30] used in the experiments. The part producing text representations is colored light orange, the part producing visual representations light green, and the part producing multi-modal representations light blue. Light red denotes the dot product.

Since our dataset is much larger than
the PhotoChat dataset, we use the cosine annealing schedule [14] with the same step size of 1,000. In all extended experiments, we train the models longer to achieve the best performance. We use the RandomResizedCrop technique for image augmentation in both the image retrieval and text retrieval experiments.
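The scheduler and cropping augmentation described above might be set up roughly as follows in PyTorch; interpreting the "step size of 1,000" as the scheduler's T_max, as well as the stand-in model and loop, are our own assumptions.

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR
from torchvision import transforms

model = torch.nn.Linear(768, 768)                     # stand-in for a retrieval encoder
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
scheduler = CosineAnnealingLR(optimizer, T_max=1000)  # cosine annealing over 1,000 steps

# RandomResizedCrop used for image augmentation during training.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
])

for step in range(3000):
    optimizer.step()       # (forward pass and loss.backward() omitted in this sketch)
    scheduler.step()
```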
Image Augmentation Techniques. For the robustness experiment on image augmentation, we use the imgaug [10] library. We adopt eight image augmentation techniques: rotation, zoom-in, zoom-out, cutout, dropout, shearing, Gaussian noise, and Gaussian blur. We follow the same settings for each technique as the previous work [7].
Text Augmentation Techniques For the experiment of robustness on text augmentation, we use the synonym replacement techniques introduced in EDA [28].
C. Extended Experiments
In this section, we report extended experiments with the text retrieval and image retrieval models.

C.1. Text Retrieval

Table D shows the text retrieval performance across all evaluation metrics, extending the results reported in our main paper. We train the model for more epochs to achieve improved performance. In addition, we conduct ablation studies with different multi-modal encoders (i.e., sum and attention) and with different model inputs, such as providing only the image or only the dialogue to the model, as shown in Table G. Even with further training, we observe a similar tendency: the model trained on our dataset improves performance on MMDD, whereas the model trained on MMDD performs considerably worse on our dataset. This result indicates that our dataset is more challenging than MMDD due to its much larger number of images per utterance and dialogue. Moreover, the lack of diversity in MMDD induces the model to memorize images seen during training. The following paragraphs explain the effect of different multi-modal encoders and model inputs.
Effect of Multi-Modal Encoder. For the model trained on MMDD, the sum module achieves better performance on the two text retrieval tasks than the attention module, similar to the results reported in [12]. However, for the model trained on DialogCC, the attention module performs better on the current turn prediction task. This result indicates that the attention module benefits from the scalability and diversity of our dataset. In addition, the next turn prediction task is more challenging than the current turn prediction task, as indicated by the overall lower performance.
Effect of Model Inputs. To understand which modality is important for the text retrieval task, we conduct another experiment with different model inputs: only the dialogue history, only the image, or both the dialogue history and the image. Overall, using both modalities as input achieves the best performance. We also observe that considering the dialogue history is important in the multi-modal dialogue retrieval tasks, given the lower performance of "Image Only".
C.2. Image Retrieval

Table E shows the image retrieval performance across all evaluation metrics, extending Table 4 in our main paper. Since our dataset is larger and more diverse than PhotoChat, we need to train the image retrieval model for more epochs and vary the learning rate to achieve the best performance. Thus, we adopt the cosine annealing scheduler with a step size of 1,000. As shown in Table F, we achieve better performance than the 7.58 R@1 reported in Table E, which indicates that our dataset makes training harder due to its scalability and diversity, which in turn helps generalization. However, the model trained on PhotoChat achieves better R@5 and R@10. There are two possible reasons. The first is that the correlation between the training and test sets is larger in PhotoChat than in DialogCC. If the model trained on PhotoChat were well-generalized, it should have performed well on DialogCC, but it does not. On the other hand, the model trained with our dataset shows high performance on both PhotoChat and DialogCC. The second reason is that the images in the training, validation, and test sets of PhotoChat cover similar domains. PhotoChat only covers commonly shared objects, such as people, food, animals, and products. In contrast, DialogCC is open-domain, covering the many topics and conversational skills described in Section 3.2. This diversity may decrease performance compared to the model trained on PhotoChat, which focuses on a specific domain. As shown in Figure E and Table F, the model trained on DialogCC achieves better generalization performance than the model trained on PhotoChat.
C.3. Robustness on Image Augmentation
To explore whether our dataset yields a well-generalized model, we conduct a robustness experiment by distorting the image input. As shown in Table H, the model trained on our dataset shows more robust performance, with a smaller reduction than the model trained on the MMDD dataset. This result implies that our dataset makes the model more robust to image distortion, benefiting from the diversity of our dataset.
C.4. Robustness on Text Augmentation
We evaluate the robustness of the model trained on our dataset by distorting the input dialogue history. We replace randomly chosen words (except stopwords) with synonyms while adjusting the replacement ratio α. As shown in Figure E, the model trained on our dataset remains more robust as the value of α increases. This result indicates that while our dataset contains some noisy samples, it makes the model more robust, benefiting from its scalability and diversity. Therefore, it is important to build a large-scale dataset to achieve improved generalization performance.

Figure E. Performance Gap with Text Distortion. We show the robustness of models trained on our dataset and on PhotoChat as the value of α increases. The y-axis denotes the performance gap between the baseline score (without augmentation) and the score with augmentation; the higher the value, the less robust the trained model. The graph shows that the model trained on our dataset mostly achieves better robustness across all α values.

Figure F. Case 3: An example of DialogCC unrelated to the given utterance. In the red box with a dotted line, DialogCC contains diverse images that are relevant to the utterance. However, given the previous utterance, the images should relate more to the "huskys" than to many other animals, such as rabbits, polar bears, and penguins. Besides, it would be more natural to share images related to "eskimo" or "arctic".
Figure 3. Comparison examples of relevant images in MMDD.

Figure 4. Examples of frequently matched images. We show representative examples of images matched with various utterances by CLIP-based similarity. The number under each image indicates how many utterances it is matched with.

Figure 5. Ablation study on the threshold τ2. We show the effect of the threshold τ2 on the current turn prediction performance on DialogCC.

Figure 6. An example of DialogCC. We present an example of DialogCC with resized images. More examples are in the Appendix.

Figure 8. Effect of the maximum number of images. We show the performance on the current turn prediction task when training the model with multiple images per utterance. In general, training with multiple images mostly improves performance.

Figure G. Case 2: Examples of DialogCC. We present examples of DialogCC. Both examples contain various images relevant to the utterances. For example, in the left figure, our dataset includes images related to an utterance describing an action, "go hiking with my dog".
[Table 1. Statistics of multi-modal dialogue datasets (DialogCC (ours), MMDD [20], PhotoChat [49]), grouped into scalability (# unique dialogs, # unique images), diversity (# unique words, # unique hypernyms), and generalization (# images/utterance, # images/dialog). DialogCC is the largest on every axis, with roughly 93k unique dialogues, 652k unique images, 34 images per dialogue, and 4.1 images per utterance.]
Image retrieval [49] has a dual-encoder structure which consists of a dialogue encoder, photo description encoder, and image encoder.

4.1.3 Datasets

DialogCC (ours) is a large-scale multi-modal dialogue dataset created by the CLIP-based automatic pipeline described in Sec. 3. MMDD [20] contains 45k multi-modal dialogues, where each utterance is replaced with a relevant image matched by their automatic pipeline. PhotoChat [49] contains 10k multi-modal dialogues, where the dialogues are constructed via a crowd-sourcing platform.

4.1.4 Implementation Details

We implement the baseline models based on PyTorch Lightning. All experiments are conducted on two A100 GPUs (40GB). To accelerate training, we apply distributed training to the baselines. We follow hyperparameter settings similar to the previous works [20, 49], described as follows.

Text retrieval. For the text encoder, we use a BERT-based architecture (12 layers, 12 attention heads, 768 dimensions, uncased version). For the image encoder, we use the ResNeXt-101 model (2048 dimensions). We use a negative log likelihood loss with in-batch negative samples, same as [20, 38]. We set the batch size to 32, the learning rate to 5e-5, and the gradient clipping value to 2.0. We use the Adam optimizer [16] without any learning rate scheduler. The maximum lengths of the dialogue context and the response are 150 and 30 tokens, respectively.

Image retrieval. We use the same BERT-based architecture for the text encoders and the ResNet-152 model (2048 dimensions) as the image encoder. We use a hinge-based triplet ranking loss [9, 21, 49] with the hardest negatives and set the margin parameter to 0.2. We set the batch size to 128 and use the Adam optimizer with an initial learning rate of 5e-5, decayed by 0.1% every 1,000 steps. We truncate dialogue contexts and photo descriptions longer than 128 tokens.
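The hinge-based triplet ranking loss with hardest in-batch negatives used for the image retrieval model can be sketched as follows (a VSE++-style max-of-hinges formulation with margin 0.2); this is a minimal illustration under those assumptions, not the authors' exact implementation.

```python
import torch

def hardest_negative_triplet_loss(dlg: torch.Tensor, img: torch.Tensor, margin: float = 0.2):
    """Hinge-based triplet ranking loss with the hardest in-batch negatives.

    dlg, img: L2-normalized dialogue / image embeddings of shape (B, D),
              where row i of each forms a matching pair.
    """
    sims = dlg @ img.t()              # (B, B) cosine similarities
    pos = sims.diag().view(-1, 1)     # similarity of the true pairs

    mask = torch.eye(sims.size(0), dtype=torch.bool, device=sims.device)
    # dialogue anchor vs. wrong images (row-wise), image anchor vs. wrong dialogues (column-wise)
    cost_img = (margin + sims - pos).clamp(min=0).masked_fill(mask, 0.0)
    cost_dlg = (margin + sims - pos.t()).clamp(min=0).masked_fill(mask, 0.0)

    # Keep only the hardest negative per anchor, then average over the batch.
    return cost_img.max(dim=1).values.mean() + cost_dlg.max(dim=0).values.mean()
```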
Table 4. Image retrieval performance. We report the Recall@1 (%) performance on the PhotoChat and DialogCC datasets.

Model (Train)                 Eval: PhotoChat R@1   Eval: DialogCC R@1
BM25                          6.8                   0.36
Image Retrieval (PhotoChat)   7.15                  1.35
Image Retrieval (DialogCC)    7.58                  14.85
Table 5. Robustness comparisons on image augmentation. We show the degree of decreased Recall@1 (%) performance of the text retrieval model compared to the baseline score. Detailed information on the applied augmentation techniques is in the Appendix.

                  Current Turn Prediction              Next Turn Prediction
Aug               Train: MMDD     Train: DialogCC      Train: MMDD     Train: DialogCC
                  R@1 (∆)         R@1 (∆)              R@1 (∆)         R@1 (∆)
Baseline          5.28            6.64                 2.93            4.31
Shearing          3.75 (1.53)     5.79 (0.85)          1.01 (1.92)     4.26 (0.05)
Gaussian noise    3.40 (1.88)     4.89 (1.75)          1.05 (1.88)     4.30 (0.01)
Gaussian blur     3.91 (1.37)     5.56 (1.08)          1.05 (1.88)     4.30 (0.01)
Table 6. Robustness comparisons on text augmentation.
27] Yuxian Meng, Shuhe Wang, Qinghong Han, Xiaofei Sun, Fei Wu, Rui Yan, and Jiwei Li. Openvidial: A large-scale, opendomain dialogue dataset with visual contexts. George A Miller. Wordnet: a lexical database for english. Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. Towards empathetic open-domain conversation models: A new benchmark and dataset. arXiv preprint arXiv:1811.00207, 2018. 2, 3 [34] Stephen Robertson, Hugo Zaragoza, et al. The probabilistic relevance framework: Bm25 and beyond. Shuhe Wang, Yuxian Meng, Xiaofei Sun, Fei Wu, Rongbin Ouyang, Rui Yan, Tianwei Zhang, and Jiwei Li. Modeling text-visual mutual dependency for multi-modal dialog generation. arXiv preprint arXiv:2105.14445, 2021. 1, 2 [48] Jason Wei and Kai Zou. Eda: Easy data augmentation techniques for boosting performance on text classification tasks. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243, 2018. 2, 3 [51] Yinhe Zheng, Guanyi Chen, Xin Liu, and Ke Lin. Mmchat: Multi-modal chat dataset on social media. DialogCC: Large-Scale Multi-Modal Dialogue Dataset A.1. Source Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 A.1.1 Source Dialogue Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 A.1.2 Source Image-Caption Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 A.2. Detailed Analysis of DialogCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 A.2.1 Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 A.2.2 Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 A.2.3 Generalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 A.3. Case Studies of DialogCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 A.3.1 Comparison examples to MMDD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 A.3.2 Examples of DialogCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 B.1. Baseline Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 B.2. Implementation Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 C.1. Text Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 C.2. Image Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 C.3. Robustness on Image Augmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 C.4. Robustness on Text Augmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8arXiv preprint
arXiv:2012.15015, 2020. 1, 2
[28] Communications of the ACM, 38(11):39-41, 1995. 5, 6
[29] Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel
Galley, Jianfeng Gao, Georgios P Spithourakis, and Lucy
Vanderwende. Image-grounded conversations: Multimodal
context for natural question and response generation. arXiv
preprint arXiv:1701.08251, 2017. 1, 2
[30] Arsha Nagrani, Paul Hongsuck Seo, Bryan Seybold, Anja
Hauth, Santiago Manen, Chen Sun, and Cordelia Schmid.
Learning audio-video modalities from image captions. arXiv
preprint arXiv:2204.00679, 2022. 3
[31] Ramakanth Pasunuru and Mohit Bansal. Game-based video-
context dialogue. arXiv preprint arXiv:1809.04560, 2018.
1
[32] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya
Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry,
Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learn-
ing transferable visual models from natural language super-
vision. In International Conference on Machine Learning,
pages 8748-8763. PMLR, 2021. 1, 3, 8
[33] Foundations and
Trends® in Information Retrieval, 3(4):333-389, 2009. 7
[35] Christoph Schuhmann, Richard Vencu, Romain Beaumont,
Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo
Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m:
Open dataset of clip-filtered 400 million image-text pairs.
arXiv preprint arXiv:2111.02114, 2021. 1
[36] Paul Hongsuck Seo, Andreas Lehrmann, Bohyung Han, and
Leonid Sigal. Visual reference resolution using attention
memory for visual dialog. Advances in neural information
processing systems, 30, 2017. 2
[37] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu
Soricut. Conceptual captions: A cleaned, hypernymed, im-
age alt-text dataset for automatic image captioning. In Pro-
ceedings of the 56th Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers), pages
2556-2565, 2018. 1, 3
[38] Kurt Shuster, Samuel Humeau, Antoine Bordes, and Jason
Weston. Image chat: Engaging grounded conversations.
arXiv preprint arXiv:1811.00945, 2018. 1, 2, 7
[39] Kurt Shuster, Eric Michael Smith, Da Ju, and Jason We-
ston. Multi-modal open-domain dialogue. arXiv preprint
arXiv:2010.01082, 2020. 1, 2
[40] Eric Michael Smith, Mary Williamson, Kurt Shuster, Ja-
son Weston, and Y-Lan Boureau. Can you put it all to-
gether: Evaluating conversational agents' ability to blend
skills. arXiv preprint arXiv:2004.08449, 2020. 3
[41] Junhyuk So, Changdae Oh, Minchul Shin, and Kyungwoo
Song. Multi-modal mixup for robust fine-tuning. arXiv
preprint arXiv:2203.03897, 2022. 4
[42] Krishna Srinivasan, Karthik Raman, Jiecao Chen, Michael
Bendersky, and Marc Najork. Wit: Wikipedia-based image
text dataset for multimodal multilingual machine learning.
In Proceedings of the 44th International ACM SIGIR Confer-
ence on Research and Development in Information Retrieval,
pages 2443-2449, 2021. 1
[43] Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu
Wei, and Jifeng Dai. Vl-bert: Pre-training of generic visual-
linguistic representations. arXiv preprint arXiv:1908.08530,
2019. 3
[44] Qingfeng Sun, Yujing Wang, Can Xu, Kai Zheng, Yam-
ing Yang, Huang Hu, Fei Xu, Jessica Zhang, Xiubo Geng,
and Daxin Jiang. Multimodal dialogue response generation.
arXiv preprint arXiv:2110.08515, 2021. 2
[45] Jialu Wang, Yang Liu, and Xin Eric Wang. Are gender-
neutral queries really gender-neutral? mitigating gender bias
in image search. arXiv preprint arXiv:2109.05433, 2021. 8
[46] Shuhe Wang, Yuxian Meng, Xiaoya Li, Xiaofei Sun, Rong-
bin Ouyang, and Jiwei Li. Openvidial 2.0: A larger-scale,
open-domain dialogue generation dataset with visual con-
texts. arXiv preprint arXiv:2109.12761, 2021. 1, 2
[47] arXiv preprint arXiv:1901.11196, 2019. 8
[49] Xiaoxue Zang, Lijuan Liu, Maria Wang, Yang Song, Hao
Zhang, and Jindong Chen. Photochat: A human-human di-
alogue dataset with photo sharing behavior for joint image-
text modeling. arXiv preprint arXiv:2108.01453, 2021. 1, 2,
5, 6, 7
[50] arXiv preprint
arXiv:2108.07154, 2021. 1, 2
Supplementary Material
github.com/passing2961/DialogCC
Contents
A. Details of Multi-Modal Dialogue Dataset Creation
B. Details of Experimental Settings
C. Extended Experiments
Table A. Full Statistics of Datasets. Compared to the existing datasets MMDD and PhotoChat, DialogCC includes the largest number of unique dialogues and images.
Dataset       Type    # Unique Dialog   # Utter     Avg. # Utter/Dialog   Avg. # Token/Utter   # Image     # Unique Image   Avg. # Image/Dialog   Avg. # Image/Utter   Avg. # Utter/Image
KnowledgeCC   train   35,252            2,368,960   8.32                  16.5                 1,386,041   305,656          39.32                 4.87                 4.53
KnowledgeCC   valid   1,952             131,556     8.33                  16.53                132,398     42,352           67.83                 8.39                 3.13
KnowledgeCC   test    1,917             127,684     8.29                  16.53                129,317     41,580           67.46                 8.39                 3.11
KnowledgeCC   total   39,121            2,628,200   8.32                  16.5                 1,647,756   389,574          42.12                 5.22                 4.23
PersonaCC     train   8,763             1,924,764   14.92                 11.67                268,399     145,069          30.63                 2.08                 1.85
PersonaCC     valid   1,000             244,500     15.67                 11.92                54,333      31,169           54.33                 3.48                 1.74
PersonaCC     test    967               234,096     15.6                  11.77                45,409      27,148           46.96                 3.03                 1.67
PersonaCC     total   10,730            2,403,360   15.06                 11.71                368,141     203,382          34.31                 2.31                 1.81
EmpathyCC     train   11,830            209,280     4.26                  13.73                130,441     84,072           11.03                 2.65                 1.55
EmpathyCC     valid   2,151             38,280      4.27                  14.62                27,739      19,597           12.9                  3.1                  1.42
EmpathyCC     test    1,993             35,412      4.27                  15.59                25,991      18,438           13.04                 3.14                 1.41
EmpathyCC     total   15,974            282,972     4.26                  14.08                184,171     122,104          11.53                 2.77                 1.51
DailyCC       train   16,934            1,240,428   9.69                  14.11                550,047     161,406          32.48                 4.3                  3.41
DailyCC       valid   1,845             128,612     9.4                   13.67                68,686      24,760           37.23                 5.02                 2.77
DailyCC       test    1,807             119,484     9.2                   14.14                72,060      28,214           39.88                 5.55                 2.55
DailyCC       total   20,586            1,488,524   9.62                  14.07                690,793     214,375          33.56                 4.47                 3.22
BlendedCC     train   4,575             619,412     11.83                 13.34                157,392     111,702          34.4                  3.01                 1.41
BlendedCC     valid   992               131,760     11.75                 13.48                58,492      37,133           58.96                 5.22                 1.58
BlendedCC     test    964               128,292     11.77                 13.85                52,850      34,590           54.82                 4.85                 1.53
BlendedCC     total   6,531             879,464     11.81                 13.44                268,734     183,418          41.15                 3.61                 1.47
Table B. Detailed Statistics of DialogCC. We show the statistics of matching each source dialogue dataset with CC3M. KnowledgeCC, PersonaCC, EmpathyCC, DailyCC, and BlendedCC denote the multi-modal dialogue datasets created from Wizard-of-Wikipedia [6], Persona-Chat [31], EmpatheticDialogues [20], DailyDialog [13], and BlendedSkillTalk [25], respectively.
Table C (body). Part-of-Speech (POS) comparison.

Dataset           Dialog (# noun / # verb / # adj)   Image Caption (# noun / # verb / # adj)   Total (# noun / # verb / # adj)
MMDD              6,696 / 13,369 / 4,138             6,808 / 1,749 / 3,279                     13,504 / 15,118 / 7,417
PhotoChat         5,240 / 3,270 / 10,918             1,529 / 412 / 377                         6,769 / 3,682 / 11,295
DialogCC (Ours)   15,976 / 35,522 / 11,652           14,822 / 13,921 / 8,567                   30,798 / 49,443 / 20,219
Table D. Text Retrieval Performance. We compare the performance of the text retrieval model trained on DialogCC (with a maximum of 10 images per utterance) and on MMDD across all metrics.

Figure D. Case 1: Examples of DialogCC. We present examples of DialogCC, and a gray dotted line separates each example. As illustrated in both examples, our dataset covers diverse images related to the utterance.

Table E. Image Retrieval Performance. We report the image retrieval performance on the PhotoChat and DialogCC datasets, extending Table 4 in our main paper.

Table F. Extended Image Retrieval Performance. We report the image retrieval performance on the PhotoChat and DialogCC datasets, where the image retrieval model is trained on DialogCC with the cosine annealing schedule. The number in parentheses denotes the maximum number of images used in training. Additional rows (Eval: PhotoChat R@1/R@5/R@10, DialogCC R@1/R@5/R@10): DialogCC (10) 7.44 / 18.08 / 26.34, 2.95 / 11.17 / 18.04; DialogCC (15) 8.99 / 19.42 / 28.20, 3.37 / 11.61 / 18.12; DialogCC (20) 8.68 / 18.70 / 26.14, 2.92 / 11.15 / 18.01; DialogCC (30) 8.16 / 18.60 / 26.96, 3.12 / 11.18 / 17.75.
Task →
Current Turn Prediction
Next Turn Prediction
Eval →
MMDD
DialogCC
MMDD
DialogCC
Train ↓
R@1 R@5 R@10 R@1 R@5 R@10 R@1 R@5 R@10 R@1 R@5 R@10
BM25
-
3.62
7.91
10.46
0.72
4.48
8.05
3.72
8.34
12.58
0.83
4.61
8.53
Text Retrieval
MMDD
5.28 16.18 23.34
2.65
7.73
11.29
2.93
9.6
12.13
1.56
4.93
7.23
DialogCC 6.64 21.57 30.16 10.17 28.74 39.17
4.31 12.58 17.69 11.07 29.08 37.18
Models ↓
Eval →
PhotoChat
DialogCC
Train ↓
R@1 R@5 R@10 R@1 R@5 R@10
BM25
-
6.8
15.9
22.5
0.36
1.21
1.77
Image Retrieval
PhotoChat 7.15 25.02 37.20
1.35
5.49
8.70
DialogCC
7.58 17.52 25.00 14.85 36.33 48.93
Eval →
PhotoChat
DialogCC
Train ↓
R@1 R@5 R@10 R@1 R@5 R@10
PhotoChat
8.06 23.76 38.02
0.17
0.80
1.38
DialogCC (1)
7.03 18.70 26.86
2.45
9.30
15.19
DialogCC (5)
8.68 18.70 26.55
3.02 11.16 17.89
Table G. Extended Text Retrieval Performance. We report the text retrieval performance on MMDD and DialogCC. MME denotes the multi-modal encoder.
Table H. Robustness Performance on Image Distortion. We report the robustness performance when the input images are distorted.
MME↓ Model Inputs↓ Task→
Current Turn Prediction
Next Turn Prediction
Eval→
MMDD
DialogCC
ACL
DialogCC
Train↓
R@1 R@5 R@10 R@1 R@5 R@10 R@1 R@5 R@10 R@1 R@5 R@10
Sum
Image Only
MMDD
5.37 20.88 33.29
0.25
1.20
2.16
0.86
3.12
5.48
0.09
0.37
0.65
DialogCC 3.90 15.67 24.90
1.01
4.27
6.94
1.67
5.82
9.07
0.23
0.92
1.61
Dialogue Only
MMDD
8.55 19.53 26.53
2.64
9.47
14.91
7.02 15.79 21.22
3.52 10.45 14.96
DialogCC 16.79 35.92 45.55
5.22 20.28 29.47 16.26 32.65 40.52
7.02 21.94 30.15
Dialogue + Image
MMDD
13.72 35.68 49.48
0.95
3.95
6.70
6.80 15.28 20.63
2.41
8.15
12.25
DialogCC 15.16 37.51 49.64
6.31 21.63 30.90 13.35 30.81 39.62
6.64 21.22 29.51
Attention
Image Only
MMDD
5.01 20.13 33.37
0.21
1.12
2.10
0.86
3.47
5.73
0.09
0.40
0.72
DialogCC 4.97 16.99 26.93
1.08
2.94
6.74
1.37
6.29
9.76
0.25
1.13
2.04
Dialogue Only
MMDD
8.83 19.81 26.93
1.90
7.57
12.01
6.76 15.32 20.92
3.28 10.45 15.55
DialogCC 15.39 33.17 42.64
5.40 20.03 29.80 16.09 34.28 42.88
7.17 22.76 31.34
Dialogue + Image
MMDD
12.97 33.13 45.55
0.92
3.57
6.23
4.66 12.11 17.37
2.24
7.23
10.54
DialogCC 19.49 44.03 55.69
6.87 22.26 31.58 13.61 30.51 38.55
5.04 15.76 22.71
Train →
MMDD
DialogCC
Aug ↓
R@1 (∆)
R@5 (∆)
R@10 (∆)
R@1 (∆)
R@5 (∆)
R@10 (∆)
Baseline
13.72
35.68
49.48
14.96
37.12
48.17
Rotation
10.46 (3.26) 30.27 (5.41)
41.89 (7.59) 13.25 (1.71)
32.62 (4.5)
43.16 (5.01)
Zoom-In
7.52 (6.2)
22.43 (13.25) 31.5 (17.98)
9.83 (5.13) 25.18 (11.94) 34.05 (14.12)
Zoom-Out
10.98 (2.74) 31.07 (4.61)
42.32 (7.16)
13.6 (1.36)
32.7 (4.42)
43.28 (4.89)
Cutout
11.14 (2.58) 31.11 (4.57)
43.56 (5.92) 12.85 (2.11) 32.54 (4.58)
43.44 (4.73)
Dropout
9.03 (4.69) 25.54 (10.14) 36.2 (13.28)
10.5 (4.46)
27.73 (9.39) 37.79 (10.38)
Shearing
9.67 (4.05)
27.69 (7.99) 38.27 (11.21) 12.25 (2.71) 31.19 (5.93)
41.25 (6.92)
Gaussian noise 10.46 (3.26)
28.4 (7.28)
39.62 (9.86) 11.54 (3.42) 30.11 (7.01)
40.37 (7.8)
Gaussian blur
10.78 (2.94) 29.44 (6.24)
40.69 (8.79) 12.61 (2.35) 32.34 (4.78)
42.8 (5.37)
1 https://ai.google.com/research/ConceptualCaptions/download  2 https://github.com/rom1504/img2dataset  3 https://github.com/webdataset/webdataset
https://spacy.io/
The original paper calls it a photo description encoder because PhotoChat contains a photo description per photo (image). In this paper, we call it the caption encoder.
Flamingo: a visual language model for few-shot learning. Jeff Jean-Baptiste Alayrac, Pauline Donahue, Antoine Luc, Iain Miech, Yana Barr, Karel Hasson, Arthur Lenc, Katie Mensch, Malcolm Millican, Reynolds, arXiv:2204.14198arXiv preprintJean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. Flamingo: a vi- sual language model for few-shot learning. arXiv preprint arXiv:2204.14198, 2022. 1
Vqa: Visual question answering. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, Lawrence Zitnick, Devi Parikh, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer vision1Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425- 2433, 2015. 1, 2
Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Haoyang Zhang, Yin Cui, Kree Cole-Mclaughlin, Huisheng Wang, Shrikanth Narayanan, arXiv:2210.110652022. 4Movieclip: Visual scene recognition in movies. arXiv preprintDigbalay Bose, Rajat Hebbar, Krishna Somandepalli, Haoyang Zhang, Yin Cui, Kree Cole-McLaughlin, Huisheng Wang, and Shrikanth Narayanan. Movieclip: Visual scene recognition in movies. arXiv preprint arXiv:2210.11065, 2022. 4
Conceptual 12m: Pushing web-scale image-text pretraining to recognize long-tail visual concepts. Soravit Changpinyo, Piyush Sharma, Nan Ding, Radu Soricut, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern Recognition13Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre- training to recognize long-tail visual concepts. In Proceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3558-3568, 2021. 1, 3
Jaemin Cho, Seunghyun Yoon, Ajinkya Kale, Franck Dernoncourt, Trung Bui, Mohit Bansal, arXiv:2205.131152022. 4Fine-grained image captioning with clip reward. arXiv preprintJaemin Cho, Seunghyun Yoon, Ajinkya Kale, Franck Der- noncourt, Trung Bui, and Mohit Bansal. Fine-grained image captioning with clip reward. arXiv preprint arXiv:2205.13115, 2022. 4
and Dhruv Batra. Visual dialog. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, M F José, Devi Moura, Parikh, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognition1Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José MF Moura, Devi Parikh, and Dhruv Ba- tra. Visual dialog. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 326-335, 2017. 1, 2
Redcaps: Web-curated image-text data created by the people, for the people. Karan Desai, Gaurav Kaul, Zubin Aysola, Justin Johnson, arXiv:2111.1143113arXiv preprintKaran Desai, Gaurav Kaul, Zubin Aysola, and Justin John- son. Redcaps: Web-curated image-text data created by the people, for the people. arXiv preprint arXiv:2111.11431, 2021. 1, 3
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, Jason Weston, arXiv:1811.01241Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprintEmily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241, 2018. 3
Vse++: Improving visual-semantic embeddings with hard negatives. Fartash Faghri, J David, Jamie Ryan Fleet, Sanja Kiros, Fidler, arXiv:1707.05612arXiv preprintFartash Faghri, David J Fleet, Jamie Ryan Kiros, and Sanja Fidler. Vse++: Improving visual-semantic embeddings with hard negatives. arXiv preprint arXiv:1707.05612, 2017. 7
Clipdraw: Exploring text-to-drawing synthesis through language-image encoders. Kevin Frans, Lisa B Soros, Olaf Witkowski, arXiv:2106.14843arXiv preprintKevin Frans, Lisa B Soros, and Olaf Witkowski. Clipdraw: Exploring text-to-drawing synthesis through language-image encoders. arXiv preprint arXiv:2106.14843, 2021. 4
Clipscore: A reference-free evaluation metric for image captioning. Jack Hessel, Ari Holtzman, Maxwell Forbes, Yejin Ronan Le Bras, Choi, arXiv:2104.08718arXiv preprintJack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation met- ric for image captioning. arXiv preprint arXiv:2104.08718, 2021. 4
Scaling up visual and vision-language representation learning with noisy text supervision. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, Tom Duerig, PMLR, 2021. 1International Conference on Machine Learning. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representa- tion learning with noisy text supervision. In International Conference on Machine Learning, pages 4904-4916. PMLR, 2021. 1
. Alexander B Jung, Kentaro Wada, Jon Crall, Satoshi Tanaka, Jake Graving, Christoph Reinders, Sarthak Yadav, Joy Banerjee, Gábor Vecsei, Adam Kraft, Zheng Rui, Jirka Borovec, Christian Vallentin, Semen Zhydenko, Kilian Pfeiffer, Weng, Abner Ayala-Acevedo, Raphael Meudec, Matias LaporteBen Cook, Ismael Fernández, François-Michel De Rainvilleet al. imgaug. https: //github.com/aleju/imgaug, 2020. Online; accessed 01Alexander B. Jung, Kentaro Wada, Jon Crall, Satoshi Tanaka, Jake Graving, Christoph Reinders, Sarthak Ya- dav, Joy Banerjee, Gábor Vecsei, Adam Kraft, Zheng Rui, Jirka Borovec, Christian Vallentin, Semen Zhydenko, Kil- ian Pfeiffer, Ben Cook, Ismael Fernández, François-Michel De Rainville, Chi-Hung Weng, Abner Ayala-Acevedo, Raphael Meudec, Matias Laporte, et al. imgaug. https: //github.com/aleju/imgaug, 2020. Online; ac- cessed 01-Feb-2020. 8
Will i sound like me? improving persona consistency in dialogues through pragmatic self-consciousness. Hyunwoo Kim, Byeongchang Kim, Gunhee Kim, arXiv:2004.05816arXiv preprintHyunwoo Kim, Byeongchang Kim, and Gunhee Kim. Will i sound like me? improving persona consistency in dia- logues through pragmatic self-consciousness. arXiv preprint arXiv:2004.05816, 2020. 2
Perspective-taking and pragmatics for generating empathetic responses focused on emotion causes. Hyunwoo Kim, Byeongchang Kim, Gunhee Kim, arXiv:2109.08828arXiv preprintHyunwoo Kim, Byeongchang Kim, and Gunhee Kim. Perspective-taking and pragmatics for generating empa- thetic responses focused on emotion causes. arXiv preprint arXiv:2109.08828, 2021. 2
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, arXiv:1412.6980arXiv preprintDiederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 7
Large-scale bilingual language-image contrastive learning. Byungsoo Ko, Geonmo Gu, arXiv:2203.144632022arXiv preprintByungsoo Ko and Geonmo Gu. Large-scale bilin- gual language-image contrastive learning. arXiv preprint arXiv:2203.14463, 2022. 1
Clevr-dialog: A diagnostic dataset for multi-round reasoning in visual dialog. Satwik Kottur, M F José, Devi Moura, Dhruv Parikh, Marcus Batra, Rohrbach, arXiv:1903.031661arXiv preprintSatwik Kottur, José MF Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. Clevr-dialog: A diagnostic dataset for multi-round reasoning in visual dialog. arXiv preprint arXiv:1903.03166, 2019. 1, 2
The open images dataset v4. Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, International Journal of Computer Vision. 1287Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Ui- jlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al. The open images dataset v4. International Journal of Computer Vision, 128(7):1956-1981, 2020. 2
Constructing multi-modal dialogue dataset by replacing text with semantically relevant images. Nyoungwoo Lee, Suwon Shin, Jaegul Choo, Ho-Jin Choi, Sung-Hyun Myaeng, arXiv:2107.0868567arXiv preprintNyoungwoo Lee, Suwon Shin, Jaegul Choo, Ho-Jin Choi, and Sung-Hyun Myaeng. Constructing multi-modal dia- logue dataset by replacing text with semantically relevant images. arXiv preprint arXiv:2107.08685, 2021. 1, 2, 3, 4, 5, 6, 7
Visual semantic reasoning for image-text matching. Kunpeng Li, Yulun Zhang, Kai Li, Yuanyuan Li, Yun Fu, Proceedings of the IEEE/CVF international conference on computer vision. the IEEE/CVF international conference on computer vision27Kunpeng Li, Yulun Zhang, Kai Li, Yuanyuan Li, and Yun Fu. Visual semantic reasoning for image-text matching. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4654-4662, 2019. 2, 3, 7
Dailydialog: A manually labelled multi-turn dialogue dataset. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, Shuzi Niu, arXiv:1710.03957arXiv preprintYanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. Dailydialog: A manually labelled multi-turn dialogue dataset. arXiv preprint arXiv:1710.03957, 2017. 3
Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning. Weixin Liang, Yuhui Zhang, Yongchan Kwon, Serena Yeung, James Zou, arXiv:2203.02053arXiv preprintWeixin Liang, Yuhui Zhang, Yongchan Kwon, Serena Ye- ung, and James Zou. Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning. arXiv preprint arXiv:2203.02053, 2022. 4
Moel: Mixture of empathetic listeners. Zhaojiang Lin, Andrea Madotto, Jamin Shin, Peng Xu, Pascale Fung, arXiv:1908.07687arXiv preprintZhaojiang Lin, Andrea Madotto, Jamin Shin, Peng Xu, and Pascale Fung. Moel: Mixture of empathetic listeners. arXiv preprint arXiv:1908.07687, 2019. 2
Hua Lu, Zhen Guo, Chanjuan Li, Yunyi Yang, Huang He, Siqi Bao, arXiv:2203.03835Towards building an open-domain dialogue system incorporated with internet memes. arXiv preprintHua Lu, Zhen Guo, Chanjuan Li, Yunyi Yang, Huang He, and Siqi Bao. Towards building an open-domain dialogue system incorporated with internet memes. arXiv preprint arXiv:2203.03835, 2022. 2
Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee, Advances in neural information processing systems. 323Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Advances in neural information processing systems, 32, 2019. 1, 3
References
Flamingo: a visual language model for few-shot learning. Jeff Jean-Baptiste Alayrac, Pauline Donahue, Antoine Luc, Iain Miech, Yana Barr, Karel Hasson, Arthur Lenc, Katie Mensch, Malcolm Millican, Reynolds, arXiv:2204.14198arXiv preprintJean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, et al. Flamingo: a vi- sual language model for few-shot learning. arXiv preprint arXiv:2204.14198, 2022. 5
Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Haoyang Zhang, Yin Cui, Kree Cole-Mclaughlin, Huisheng Wang, Shrikanth Narayanan, arXiv:2210.11065Movieclip: Visual scene recognition in movies. arXiv preprintDigbalay Bose, Rajat Hebbar, Krishna Somandepalli, Haoyang Zhang, Yin Cui, Kree Cole-McLaughlin, Huisheng Wang, and Shrikanth Narayanan. Movieclip: Visual scene recognition in movies. arXiv preprint arXiv:2210.11065, 2022. 5
Conceptual 12m: Pushing web-scale image-text pretraining to recognize long-tail visual concepts. Soravit Changpinyo, Piyush Sharma, Nan Ding, Radu Soricut, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionSoravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre- training to recognize long-tail visual concepts. In Proceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3558-3568, 2021. 5
Redcaps: Web-curated image-text data created by the people, for the people. Karan Desai, Gaurav Kaul, Zubin Aysola, Justin Johnson, arXiv:2111.11431arXiv preprintKaran Desai, Gaurav Kaul, Zubin Aysola, and Justin John- son. Redcaps: Web-curated image-text data created by the people, for the people. arXiv preprint arXiv:2111.11431, 2021. 5
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova Bert, arXiv:1810.04805Pre-training of deep bidirectional transformers for language understanding. arXiv preprintJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. 5
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, Jason Weston, arXiv:1811.01241Wizard of wikipedia: Knowledge-powered conversational agents. 23arXiv preprintEmily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241, 2018. 2, 3
Proxy synthesis: Learning with synthetic classes for deep metric learning. Geonmo Gu, Byungsoo Ko, Han-Gyu Kim, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence35Geonmo Gu, Byungsoo Ko, and Han-Gyu Kim. Proxy syn- thesis: Learning with synthetic classes for deep metric learn- ing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 1460-1468, 2021. 6
Deep residual learning for image recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceed- ings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 5
Scaling up visual and vision-language representation learning with noisy text supervision. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, Tom Duerig, PMLR, 2021. 5International Conference on Machine Learning. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representa- tion learning with noisy text supervision. In International Conference on Machine Learning, pages 4904-4916. PMLR, 2021. 5
. Alexander B Jung, Kentaro Wada, Jon Crall, Satoshi Tanaka, Jake Graving, Christoph Reinders, Sarthak Yadav, Joy Banerjee, Gábor Vecsei, Adam Kraft, Zheng Rui, Jirka Borovec, Christian Vallentin, Semen Zhydenko, Kilian Pfeiffer, Weng, Abner Ayala-Acevedo, Raphael Meudec, Matias LaporteBen Cook, Ismael Fernández, François-Michel De Rainvilleet al. imgaug. https: //github.com/aleju/imgaug, 2020. Online; accessed 01Alexander B. Jung, Kentaro Wada, Jon Crall, Satoshi Tanaka, Jake Graving, Christoph Reinders, Sarthak Ya- dav, Joy Banerjee, Gábor Vecsei, Adam Kraft, Zheng Rui, Jirka Borovec, Christian Vallentin, Semen Zhydenko, Kil- ian Pfeiffer, Ben Cook, Ismael Fernández, François-Michel De Rainville, Chi-Hung Weng, Abner Ayala-Acevedo, Raphael Meudec, Matias Laporte, et al. imgaug. https: //github.com/aleju/imgaug, 2020. Online; ac- cessed 01-Feb-2020. 6
Large-scale bilingual language-image contrastive learning. Byungsoo Ko, Geonmo Gu, arXiv:2203.14463arXiv preprintByungsoo Ko and Geonmo Gu. Large-scale bilin- gual language-image contrastive learning. arXiv preprint arXiv:2203.14463, 2022. 5
Constructing multi-modal dialogue dataset by replacing text with semantically relevant images. Nyoungwoo Lee, Suwon Shin, Jaegul Choo, Ho-Jin Choi, Sung-Hyun Myaeng, arXiv:2107.0868556arXiv preprintNyoungwoo Lee, Suwon Shin, Jaegul Choo, Ho-Jin Choi, and Sung-Hyun Myaeng. Constructing multi-modal dia- logue dataset by replacing text with semantically relevant images. arXiv preprint arXiv:2107.08685, 2021. 2, 3, 4, 5, 6
Dailydialog: A manually labelled multi-turn dialogue dataset. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, Shuzi Niu, arXiv:1710.0395723arXiv preprintYanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. Dailydialog: A manually labelled multi-turn dialogue dataset. arXiv preprint arXiv:1710.03957, 2017. 2, 3
Sgdr: Stochastic gradient descent with warm restarts. Ilya Loshchilov, Frank Hutter, arXiv:1608.03983arXiv preprintIlya Loshchilov and Frank Hutter. Sgdr: Stochas- tic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. 6
Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee, Advances in neural information processing systems. 32Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Advances in neural information processing systems, 32, 2019. 5
A H Miller, W Feng, A Fisch, J Lu, D Batra, A Bordes, D Parikh, J Weston, arXiv:1705.06476Parlai: A dialog research software platform. arXiv preprintA. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bordes, D. Parikh, and J. Weston. Parlai: A dialog research software platform. arXiv preprint arXiv:1705.06476, 2017. 2
Learning audio-video modalities from image captions. Arsha Nagrani, Paul Hongsuck Seo, Bryan Seybold, Anja Hauth, Santiago Manen, Chen Sun, Cordelia Schmid, arXiv:2204.00679arXiv preprintArsha Nagrani, Paul Hongsuck Seo, Bryan Seybold, Anja Hauth, Santiago Manen, Chen Sun, and Cordelia Schmid. Learning audio-video modalities from image captions. arXiv preprint arXiv:2204.00679, 2022. 5
Rectified linear units improve restricted boltzmann machines. Vinod Nair, Geoffrey E Hinton, Icml. Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Icml, 2010. 5
Learning transferable visual models from natural language supervision. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, PMLRInternational Conference on Machine Learning. 25Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learn- ing transferable visual models from natural language super- vision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021. 2, 5
Eric Michael Hannah Rashkin, Margaret Smith, Y-Lan Li, Boureau, arXiv:1811.00207Towards empathetic open-domain conversation models: A new benchmark and dataset. 23arXiv preprintHannah Rashkin, Eric Michael Smith, Margaret Li, and Y- Lan Boureau. Towards empathetic open-domain conversa- tion models: A new benchmark and dataset. arXiv preprint arXiv:1811.00207, 2018. 2, 3
Laion-5b: An open large-scale dataset for training next generation image-text models. Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, arXiv:2210.08402arXiv preprintChristoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Worts- man, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. arXiv preprint arXiv:2210.08402, 2022. 5
Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, Aran Komatsuzaki, arXiv:2111.02114arXiv preprintChristoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021. 5
Douwe Kiela, and Jason Weston. Abigail See, Stephen Roller, arXiv:1902.08654What makes a good conversation? how controllable attributes affect human judgments. arXiv preprintAbigail See, Stephen Roller, Douwe Kiela, and Jason We- ston. What makes a good conversation? how control- lable attributes affect human judgments. arXiv preprint arXiv:1902.08654, 2019. 2
Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. Piyush Sharma, Nan Ding, Sebastian Goodman, Radu Soricut, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsLong Papers1Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, im- age alt-text dataset for automatic image captioning. In Pro- ceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556-2565, 2018. 2
Can you put it all together: Evaluating conversational agents' ability to blend skills. Eric Michael Smith, Mary Williamson, Kurt Shuster, Jason Weston, Y-Lan Boureau, arXiv:2004.0844923arXiv preprintEric Michael Smith, Mary Williamson, Kurt Shuster, Ja- son Weston, and Y-Lan Boureau. Can you put it all to- gether: Evaluating conversational agents' ability to blend skills. arXiv preprint arXiv:2004.08449, 2020. 2, 3
Wit: Wikipedia-based image text dataset for multimodal multilingual machine learning. Krishna Srinivasan, Karthik Raman, Jiecao Chen, Michael Bendersky, Marc Najork, Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. the 44th International ACM SIGIR Conference on Research and Development in Information RetrievalKrishna Srinivasan, Karthik Raman, Jiecao Chen, Michael Bendersky, and Marc Najork. Wit: Wikipedia-based image text dataset for multimodal multilingual machine learning. In Proceedings of the 44th International ACM SIGIR Confer- ence on Research and Development in Information Retrieval, pages 2443-2449, 2021. 5
Attention is all you need. Advances in neural information processing systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, 30Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszko- reit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 5
Jason Wei, Kai Zou, arXiv:1901.11196Easy data augmentation techniques for boosting performance on text classification tasks. arXiv preprintJason Wei and Kai Zou. Eda: Easy data augmentation tech- niques for boosting performance on text classification tasks. arXiv preprint arXiv:1901.11196, 2019. 6
Aggregated residual transformations for deep neural networks. Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionSaining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1492-1500, 2017. 5
Photochat: A human-human dialogue dataset with photo sharing behavior for joint imagetext modeling. Xiaoxue Zang, Lijuan Liu, Maria Wang, Yang Song, Hao Zhang, Jindong Chen, arXiv:2108.0145356arXiv preprintXiaoxue Zang, Lijuan Liu, Maria Wang, Yang Song, Hao Zhang, and Jindong Chen. Photochat: A human-human di- alogue dataset with photo sharing behavior for joint image- text modeling. arXiv preprint arXiv:2108.01453, 2021. 2, 3, 5, 6
Personalizing dialogue agents: I have a dog. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, Jason Weston, arXiv:1801.0724323arXiv preprintdo you have pets tooSaizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243, 2018. 2, 3
| [
"https://github.com/rom1504/img2dataset",
"https://github.com/webdataset/webdataset"
] |
[
"\"Hang in there\": Lexical and visual analysis to identify posts warranting empathetic responses",
"\"Hang in there\": Lexical and visual analysis to identify posts warranting empathetic responses"
] | [
"Mimansa Jaiswal \nInstitute of Engineering and Technology\nIndoreIndia\n",
"Sairam Tabibu \nIndian Institute of Technology\nVaranasiIndia\n",
"Erik Cambria \nNanyang Technological University\nSingapore\n"
] | [
"Institute of Engineering and Technology\nIndoreIndia",
"Indian Institute of Technology\nVaranasiIndia",
"Nanyang Technological University\nSingapore"
] | [] | Social media, in the past few years, has risen as a platform where people express and share personal incidents about abuse, violence and mental health issues. There is a need to pinpoint such posts and learn the kind of response expected. For this purpose, we study the sentiment that a personal story elicits on posts from different social media sites, on the topics of abuse or mental health. In this paper, we propose a method supported by hand-crafted features to judge whether a discourse or statement requires an empathetic response. The model is trained on posts from various web pages and the corresponding comments, using both the captions and the images. We were able to obtain 80% accuracy in tagging posts requiring empathetic responses. | null | [
"https://arxiv.org/pdf/1903.05210v1.pdf"
] | 5,897,790 | 1903.05210 | 0121ca21a16107b601abec5acbe4d75a4878d748 |
"Hang in there": Lexical and visual analysis to identify posts warranting empathetic responses
Mimansa Jaiswal
Institute of Engineering and Technology
IndoreIndia
Sairam Tabibu
Indian Institute of Technology
VaranasiIndia
Erik Cambria
Nanyang Technological University
Singapore
"Hang in there": Lexical and visual analysis to identify posts warranting empathetic responses
Social media, in the past few years, has risen as a platform where people express and share personal incidents about abuse, violence and mental health issues. There is a need to pinpoint such posts and learn the kind of response expected. For this purpose, we study the sentiment that a personal story elicits on posts from different social media sites, on the topics of abuse or mental health. In this paper, we propose a method supported by hand-crafted features to judge whether a discourse or statement requires an empathetic response. The model is trained on posts from various web pages and the corresponding comments, using both the captions and the images. We were able to obtain 80% accuracy in tagging posts requiring empathetic responses.
Introduction
Artificial companions, though much hyped, are still quite limited in their capabilities as support systems. They are unable to establish meaningful relationships with users and are widely used merely as personal assistants that execute tasks, rather than as agents to talk to or communicate with. It is widely believed that a key aspect of establishing such rapport relies on empathy, which is often seen as the basis of social cooperation and pro-social behavior.
Empathy is often defined as the verbal or non-verbal gestures that evoke a sense of understanding of another's state of mind in a particular situation. Empathy encompasses several human interaction abilities, especially those that require the competence to reconstruct another person's words or actions and their perceived consequences.
Previous research has widely shown that agents without empathy are less preferred than empathetic ones, the latter being considered caring and likeable [1,2]. Empathy involves perspective taking, developing sensitivity to the other's affective state, and communicating a feeling of care [3]. As such, empathy is often related to helping behavior and friendship: people tend to feel more empathy for friends than for strangers. With the penetration of voice assistants such as Siri, Google Voice and Cortana, and with more people turning to them in times of health crises, it is alarming to see the apathy with which they respond. Research shows that how someone responds to you when you are feeling low and disclosing a private crisis can affect how you act and feel.
General health disclosures can be divided primarily into four categories: (a) mental health, (b) violence, (c) needing support, and (d) physical health; each category is described with an example later in the text. Such cases of disclosure have been mapped and tracked in psychology. While many applications address the physical-health point of view, mental health remains a stigmatized subject, often hidden and sidelined as a case of continuous worry, laziness, or a similar sense of disapproval towards the victim. We refer to disclosures in categories (a), (b) and (c) as "empathy-seekers" hereafter.
The main contributions of the paper are as follows: (1) we propose a novel way to approach binary classification of empathy-seekers; (2) we propose a generalized list of features that works across different categories of posts; and (3) we develop a standard corpus of empathy-seekers (built using search queries such as "soul-stirring" and "depression" from the web pages described later).
The rest of the paper is structured as follows: Section II lists related works in empathy and affect; Section III presents the dataset development technique; Section IV provides the proposed method; Section V elucidates the experiments and results and finally, Section VI concludes the paper and offers pointers for future work.
Related Work
Prior research in psychology has examined the role of support from peers and society in combating mental health issues such as depression [26]. It has been observed that the nature of social support, and an individual's perception of that support, are often important and indeed indispensable for a timely recovery. Turning to research on social media, an increasing amount of work has shown that many people use such platforms to communicate about different health concerns [27].
Because many people go to social media to discuss and display events from their personal life, it is useful to examine how this kind of disclosure should be (a) perceived (b) marked (c) responded to. This information is especially helpful in case of mental health issues where the problem is socially stigmatic.
Because our work draws on psycholinguistics, it is relevant that the use of linguistic patterns has been shown to reveal important social and psychological aspects of an individual [16]. Previous works have used similar psycholinguistic cues to measure specific health issues such as depression [25], suicide [29] and bullying [28]. As far as we know, no major success has been achieved in finding a model that fits across all categories, which is what social media platforms usually deal with. Previous works have studied discourse on reddit, where subreddits act as decisive groups in disclosure, whereas tweets or Facebook posts do not have such a structure.
In the area of empathetic response and social support, a few primary lines of work exist. Some studies base their datasets on responses from professionally trained psychologists and measure their success [30,31]. Other works perform their evaluation on forums that contain little content other than support messages, so that classifying the message type becomes the primary task [25]; we hope to incorporate this into our future work. We have therefore refrained from obtaining our dataset from specialized forums, so as to be able to weed out the apathetic responses as well. Many studies approach the empathetic/non-empathetic judgement with the aid of audio transcripts of interview datasets [31], a modality we do not consider in our study because it is rarely available for social media disclosures.
Dataset Development
The dataset was developed by sourcing images and captions, together with their respective responses, from various social media websites, namely Tumblr, Facebook, Instagram and Buzzfeed. We skip Twitter, which has been the natural choice for most social media analysis research, because of its tendency to favor textual posts over a combination of visual and textual content; moreover, on Twitter the fraction of retweets that carry a response lies at a dismal 20%.
The main goals of the dataset can be described as follows:
1) Develop a dataset which can be used to mark context that warrant an empathetic response.
2) Develop a dataset which can be used to identify empathetic and non-empathetic responses.
We utilize the Facebook API to collect posts from the "Humans of New York" page. This page posts stories on various topics such as war, the refugee process and the Syria bombings, wherein people disclose a part of their personal story. The story is usually accompanied by a picture that relates to the caption. The same API was then used to retrieve the top three corresponding empathetic comments for each post. Some stories on the page follow a distributed format where the story is spread out across several posts; for our purpose, we merged those stories while keeping a record of the original spread, to be used later.
The second set of data is collected from image sharing websites such as Instagram and tumblr. The usual format of a post on such websites is a caption, usually followed by one or several hashtags and then the responses. Because of the unavailability of a stable API, we hand-curate this dataset.
Lastly, we scrape images from social websites such as Buzzfeed and other listicles, which, though often dismissed as click-bait, frequently echo the sentiments of the general public. Because the listicles are usually a group of images with common comments, we currently copy these comments to all of the corresponding images for response testing.
The comments, because they were hand-scraped, were also marked for gender, the time elapsed since the post, and the likes they received from the social media community. Our final database comprised 1000 context-response pairs of positive examples that we use for this study. Because privacy is a major concern in social media research, the dataset is anonymized and stripped of data that could be used for reverse identification.
We add negative examples to our dataset by sourcing images related to happy events such as festivals, and by using search queries such as food, education and technology. We thoroughly check these added negative examples to verify our assumption that they do not seek an empathetic response. To add non-empathetic responses, we ask ten people to reply to each post as if they were trying to belittle the author or to treat them as a suspect. We also collect comments from various sources, especially on mental health issues, that were flagged as demeaning and vile, and add them to the corresponding posts. To keep our dataset unbiased, we use inter-annotator agreement between four annotators to decide whether a post was marked correctly, i.e. ES/NES (empathy-seeker or non-empathy-seeker) and ER/NER (empathetic response or non-empathetic response). We also try to weed out malicious and spam content from our dataset by removing common phrases and rebuttals that often occur on social media posts, especially on sites such as Buzzfeed.
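As a concrete illustration of the agreement step, the sketch below shows how Fleiss' kappa over the four annotators' ES/NES labels could be computed with statsmodels; the rating matrix, the binary label coding and the library choice are our assumptions, not details reported in the paper.

```python
# Sketch: inter-annotator agreement (Fleiss' kappa) for the ES/NES labels.
# The ratings matrix below is illustrative: one row per post, one column per
# annotator, with 1 = empathy-seeker (ES) and 0 = non-empathy-seeker (NES).
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 0, 1, 1],
])

# aggregate_raters turns raw ratings into a subjects-by-categories count table.
table, _ = aggregate_raters(ratings)
print("Fleiss' kappa:", fleiss_kappa(table, method="fleiss"))
```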
The final dataset has the following distribution of context-response pairs: 330 pairs concern mental health issues, 283 concern violence-related issues, and the remaining pairs concern posts requiring (temporal) support.
Proposed Method
We model the task of empathy-seeker detection as a supervised classification problem in which each post is either classified as empathy-seeking or non-empathy seeking. We use six sets of lexical features and three sets of visual features to build our model. In the following subsections, we detail the features used and the classifiers that have been tried and compared.
Verbal Features
The verbal/textual features are used for two purposes: to classify a post as an empathy-seeker and to judge whether a response is empathetic or not.
Baseline Features
n-grams are known to be strong task-independent features for text classification [4]. Therefore, we choose n-grams as the baseline feature. We retrieve word n-grams (bag of words, bi-grams and tri-grams) as well as skip-grams (over bi-grams), which after a tf-idf transformation form our feature space. We filter out all n-grams whose frequency is less than five, in order to use only those n-grams that are essential to our model. This set of features is henceforth called the baseline. These n-gram features are also used to identify temporal expressions such as "today" or "weeks", which in turn help identify posts falling into the temporal-issue category among the three categories mentioned above. Such temporal features are known to be good linguistic markers of self-disclosure posts [25].
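A minimal sketch of the baseline n-gram features with scikit-learn is shown below; the vectorizer settings mirror the description above (uni- to tri-grams, frequency cutoff of five), while the tiny example corpus and the omission of skip-grams (not built into scikit-learn) are simplifications on our part.

```python
# Sketch: baseline bag-of-n-grams features with tf-idf weighting.
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "I am feeling low today",           # illustrative examples only
    "We visited the food festival",
]

vectorizer = TfidfVectorizer(
    ngram_range=(1, 3),  # uni-, bi- and tri-grams
    min_df=1,            # would be min_df=5 on the full corpus, relaxed here
    lowercase=True,
)
X = vectorizer.fit_transform(posts)   # sparse tf-idf feature matrix
print(X.shape, vectorizer.get_feature_names_out()[:5])
```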
Lexical Features
To model sentiment, we used emotional information from SenticNet [5], a concept-level knowledge base for sentiment analysis that provides both semantic and affective information associated with words and multiword expressions by means of commonsense computing [6,7] and sentic computing [8]. SenticNet has been shown to model emotions and related phenomena such as satire [10], deception [11] and mood [12] appropriately in previous research, and hence we believe that it provides an appropriate representation.
Figure 1: Sample images from Humans of New York and Buzzfeed that form a part of the database. Example responses: "I feel you. Especially the thinking ahead... Anxiety and depression are no fun and I hope you find a way to overcome them and be happy!"; "Such a beautiful thing.. Gives me a little more faith in the world."
Sentiment Amplification
As a general trend, it can be observed that almost all empathetic responses on social media make use of smileys or specific punctuation. The use of quotation marks (" ") has been noted as an indication of inverted sentiment [9]. Sentiment amplifiers are elements that draw attention to the sentiment conveyed in a statement by either intensifying the sentiment value or negating it. They have been used successfully to model satirical texts [10], another form of emotion expression. Phrases such as "Oh please!" and acronyms were added to the feature list. The presence or absence of an amplifier is used to form the feature vector.
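The binary amplifier feature can be sketched as a simple pattern lookup; the inventory of emoticons, quoted spans, interjections and acronyms below is illustrative rather than the authors' exact list.

```python
# Sketch: binary "sentiment amplifier" feature.
import re

AMPLIFIER_PATTERNS = [
    r"[:;]-?[)(dp]",        # common emoticons such as :) ;-( :d (lower-cased)
    r"\"[^\"]+\"",          # quoted spans, e.g. she "apologised"
    r"\boh please\b",       # dismissive interjection
    r"\b(omg|smh|lol)\b",   # acronyms
    r"!{2,}|\?{2,}",        # repeated punctuation
]

def has_amplifier(text: str) -> int:
    """Return 1 if the text contains any sentiment amplifier, else 0."""
    lowered = text.lower()
    return int(any(re.search(p, lowered) for p in AMPLIFIER_PATTERNS))

print(has_amplifier('Oh please!! She "apologised" :)'))  # -> 1
```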
Speech act Features
A speech act has a performative function in the context of language and communication, i.e. it performs the function of apology, appreciation, gratitude etc. [13]. In our study, we use 7 kind of speech act features, as stated: apology, appreciation, response acknowledgment, opinioned response, non-opinioned response, gratitude, other.
We build a speech-act classifier from SPAAC [14] using the above-mentioned labels. The classifier achieves an accuracy of 72%, and we then use it to compute the speech-act distribution over our corpus. Because the classifier processes only one sentence at a time, to represent a whole context or response we normalize the probability of occurrence of each speech act over the number of sentences present. This is especially useful for posts with long captions or responses longer than a few lines.
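The normalization over sentences can be written as a simple averaging step; `predict_speech_act_probs` below is a placeholder for the SPAAC-trained classifier and is assumed to return one probability per speech-act label for a single sentence.

```python
# Sketch: normalising per-sentence speech-act predictions over a whole post.
import numpy as np

SPEECH_ACTS = ["apology", "appreciation", "acknowledgment",
               "opinioned", "non-opinioned", "gratitude", "other"]

def speech_act_features(sentences, predict_speech_act_probs):
    """Average the per-sentence speech-act distributions over the post."""
    probs = np.array([predict_speech_act_probs(s) for s in sentences])
    return probs.mean(axis=0)  # one value per speech act

# Illustrative stand-in classifier: a uniform distribution over the labels.
dummy = lambda s: np.full(len(SPEECH_ACTS), 1.0 / len(SPEECH_ACTS))
print(speech_act_features(["Thank you so much.", "I am so sorry."], dummy))
```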
Literary Device features
Because many of the expressions are metaphorical/indirect in nature or convey a sense of urgency or a feeling of helplessness, we use the following literary features as well.
Hyperbole: Hyperbole refers to statements that exaggerate the actual sentiment. It is usually detected through the occurrence of multiple consecutive positive or negative words [15].
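One possible implementation of this cue is a scan for runs of same-polarity words; the tiny polarity lexicon below is a stand-in for a resource such as SenticNet.

```python
# Sketch: hyperbole as a run of consecutive same-polarity words.
POLARITY = {"horrible": -1, "terrible": -1, "awful": -1,
            "amazing": 1, "wonderful": 1, "fantastic": 1}  # illustrative lexicon

def has_hyperbole(tokens, min_run=2):
    """Return True if at least `min_run` consecutive tokens share a polarity."""
    run, last = 0, 0
    for tok in tokens:
        pol = POLARITY.get(tok.lower(), 0)
        if pol != 0 and pol == last:
            run += 1
            if run >= min_run:
                return True
        else:
            run = 1 if pol != 0 else 0
        last = pol
    return False

print(has_hyperbole("this is a horrible terrible awful day".split()))  # True
```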
Imagery:
These are words that create a visual picture in the mind of the reader; for example, "He took me to a close dark cabin".
Psycholinguistic Features
To extract psycholinguistic features, we utilize Linguistic Inquiry and Word Count (LIWC) [16], a knowledge-based system that has been developed and refined over the past decades. The utility of such features has been studied in areas such as personality, age, deception and health. The types of LIWC features we use are:
• General: word count, average words, word length
• Psycholinguistic: affect, cognition
• Personal concerns: work, achievement, home
Visual Features
Most of the websites allow images to be attached to posts. These images are often personal photographs, though they can also be images containing text or commonly used stock images. We use visual features to model the personal images and to learn which of them warrant an empathetic response.
Facial presence
The first feature we use is based upon whether a face is present in the image. The model is run both on the data from pages other than Humans of New York and on the combined data. This distinction is maintained because the images on that page are professionally captured and therefore do not reflect other variables (such as filters, angles, etc.). The feature vector encodes whether a face is present and, if so, how many faces there are. We believe that self-focus extends to photographs too [17], as a proxy for isolation. We use an elementary face detection script based on an open source demonstration.
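The face-presence feature can be reproduced with any off-the-shelf detector; the OpenCV Haar-cascade sketch below is one common choice, not necessarily the script the authors used.

```python
# Sketch: face-presence feature with OpenCV's Haar cascade detector.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_features(image_path: str):
    """Return (has_face, num_faces) for one post image."""
    image = cv2.imread(image_path)
    if image is None:                       # unreadable or missing file
        return 0, 0
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return int(len(faces) > 0), len(faces)

# Example usage (the path is hypothetical): face_features("post_image.jpg")
```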
Gaze and facial sentiment
The second set of features takes gaze into account: if a face is present, whether the subject is looking directly into the camera or away, and the facial angle from the vertical. We also use OpenFace to measure facial action units and classify the sentiment projected by the face in the image (or the average over the faces in the image). The sentiment probability is measured across eight categories, namely anger, contempt, disgust, fear, happiness, neutral, sadness and surprise. These criteria are used as correlates of introversion, social anxiety and isolation, and hence form part of our feature vector.
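If OpenFace is run over the post images, its per-face CSV output can be reduced to a few aggregate features as sketched below; the column names follow OpenFace 2.x output and should be verified against the installed version.

```python
# Sketch: aggregating gaze and action-unit (AU) features from an OpenFace CSV.
import pandas as pd

def gaze_au_features(openface_csv: str):
    df = pd.read_csv(openface_csv)
    df.columns = df.columns.str.strip()      # OpenFace pads column names
    gaze_x = df["gaze_angle_x"].mean()       # horizontal gaze angle
    gaze_y = df["gaze_angle_y"].mean()       # vertical gaze angle
    au_cols = [c for c in df.columns if c.startswith("AU") and c.endswith("_r")]
    au_means = df[au_cols].mean().to_dict()  # average AU intensities
    return gaze_x, gaze_y, au_means
```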
Hue and color
We take into account the image properties Hue, Saturation and Value (HSV), three color properties commonly used in image analysis. Hue describes the position on the color spectrum ranging from red (low values) to blue (high values). Saturation refers to the vividness of the image, whereas Value refers to image brightness: the lower the score, the darker the image.
It has been observed that happy individuals prefer vivid colors, while those feeling low or in need of support prefer darker colors [18,19]. We calculate pixel-level averages to obtain HSV values for our feature set, which have previously been noted as satisfactory markers for mental health issues [20].
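The pixel-level HSV averages are straightforward to compute with OpenCV, as sketched below.

```python
# Sketch: pixel-level Hue/Saturation/Value averages for one image.
import cv2

def hsv_features(image_path: str):
    """Return the mean hue, saturation and value (brightness) of an image."""
    image = cv2.imread(image_path)
    if image is None:
        return None
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    return tuple(hsv[..., ch].mean() for ch in range(3))  # (H, S, V)
```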
Experiments
We used three different classification methods to test the accuracy of our features, namely Logistic Regression (LR), Random Forest (RF) and an ensemble of the two. The ensemble of LR and RF is based on a weighted voting scheme. We choose these two classifiers because they are minimally related: one models linear decision boundaries while the other models non-linear ones.
We model the ensemble classifier as follows:
Ensemble_Classifier = w1 * LR + w2 * RF,
where w1 + w2 = 1.0 and w1, w2 ∈ {0.1, 0.2, ..., 0.9}.
We iterated over all valid combinations of w1 and w2, selected the pair with the minimum cross-entropy, and settled upon 0.7 and 0.3 respectively. Individually, logistic regression produced the best result with an accuracy of over 76% (at 99% confidence), while the random forest averaged around 70%, probably due to overfitting on the image features. A simple ensemble, however, raises the prediction accuracy of our model significantly, up to 80.2% for the overall classification and up to 84% in some cases.
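A sketch of the weighted LR/RF ensemble with scikit-learn follows; the 0.7/0.3 weights reproduce the values reported above, while the specific hyper-parameters (e.g., the number of trees) are our own illustrative choices.

```python
# Sketch: weighted soft-voting ensemble of logistic regression and random forest.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def ensemble_predict(X_train, y_train, X_test, w1=0.7):
    lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    rf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    probs = w1 * lr.predict_proba(X_test) + (1 - w1) * rf.predict_proba(X_test)
    return lr.classes_[probs.argmax(axis=1)]

# The weight w1 would be selected on a validation split, e.g.
#   for w1 in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9): ...
```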
Conclusion
We have proposed a method for identifying posts that require an empathetic response. We have also classified responses as empathetic or non-empathetic using a suite of classifiers. We believe this is one of the preliminary works that attempts to incorporate both text and images, without the aid of speech, which has so far been the primary modality for support detection in various affect-based conversational models.
Our model performs well on classifying empathetic and non-empathetic responses; the F-scores average 79%, which, though it cannot be compared to a benchmark due to the lack of prior work in this exact setting, exceeds the 70% reported for empathy classification in a call-center context [24]. We believe this could be an important aspect of marking spam or hurtful responses, or those that violate "Be Nice, Be Respectful" policies on social media forums. Identifying types of social support in reddit commentary has involved characterizing the content of comments into thematic clusters, which we aim to explore with LDA in future work.
We observe that ensemble classifiers perform best, and that our use of gaze and HSV values from images combines with the verbal features to give superior performance. We believe this performance would be further enhanced if we also accounted for photos that do not contain faces but rather text, or that are stock images.
In future work, we aim to deploy neural networks to learn features beyond our hand-crafted ones, as they have been found to reduce the size of the feature vector considerably [23,24]. The descriptors derived in this paper could also be used to develop user-adaptive response systems, especially in conversational agents, by recognizing whether a context warrants support and replying accordingly. We aim to experiment with a dual-encoder LSTM for dialogue generation for this purpose.
(a) Mental Health: issues related to stress, depression, feeling low, restlessness. Example: "I am feeling low. I want to commit suicide."
(b) Violence: general acts of abusive behavior such as domestic violence, rape. Example: "Today, I was raped."
(c) Needing support: posts about losing a family member; tragedies which are temporal and not clinical. Example: "I lost Pluto today. He was the sweetest dog I had ever known."
(d) Physical Health: cases of physical discomfort such as sweating, pain, heart attack. Example: "Please help me. I think I am having a heart attack."
Table 1: F-scores of our model using different classification techniques with different feature sets on the partitioned datasets.
Features | LR | RF | LR+RF
Empathy Seekers Classification (Verbal)
BF | 65% | 60% | 70.2%
BF+LF+SA | 66.23% | 62.11% | 73.03%
BF+LF+SA+LD | 65.34% | 63.72% | 73.59%
BF+LF+SA+SF+LD+PF [a] | 69.87% | 65.93% | 76.24%
Empathy Seekers Classification (Visual)
FP | 58% | 50.1% | 70%
FP+HSV | 64% | 61.2% | 73%
FP+GFS | 66% | 61% | 73.2%
FP+GFS+HSV [b] | 68% | 63.2% | 74.33%
Empathy Seekers Classification (Verbal + Visual, [a]+[b])
Mental Health (MH) | 73.3% | 69.40% | 80%
Temporal Support (TS) | 76.77% | 69.78% | 84%
Violence and Abuse (VA) | 70.23% | 65.18% | 76.6%
MH + TS + VA | 73.43% | 68.12% | 80.2%
Empathetic Response Classification (only verbal features)
References
[1] S. Brave, C. Nass, and K. Hutchinson, "Computers that care: investigating the effects of orientation of emotion exhibited by an embodied computer agent", Int. J. Human-Computer Studies, 62(2):161-178, 2005.
[2] J. Paiva, D. Dias, R. Sobral, P. Aylett et al., "Caring for agents and agents that care: Building empathic relations with synthetic agents", in Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS '04), pp. 194-201, Washington, DC, USA, IEEE Computer Society, 2004.
[3] A. P. Goldstein and G. Y. Michaels, "Empathy: Development, Training, and Consequences", New American Library, 1985.
[4] J. Furnkranz, "A study using n-gram features for text categorization", Austrian Research Institute for Artificial Intelligence, vol. 3, pp. 1-10, 1998.
[5] E. Cambria, S. Poria, R. Bajpai, and B. Schuller, "SenticNet 4: A semantic resource for sentiment analysis based on conceptual primitives", in Proceedings of COLING, Osaka, 2016.
[6] E. Cambria, A. Hussain, C. Havasi, and C. Eckl, "Common sense computing: From the society of mind to digital intuition and beyond", in Biometric ID Management and Multimodal Communication, vol. 5707 of Lecture Notes in Computer Science, pp. 252-259, Berlin Heidelberg: Springer, 2009.
[7] E. Cambria, T. Mazzocco, A. Hussain, and C. Eckl, "Sentic medoids: Organizing affective common sense knowledge in a multi-dimensional vector space", in Advances in Neural Networks, vol. 6677 of Lecture Notes in Computer Science, pp. 601-610, Berlin: Springer-Verlag, 2011.
[8] E. Cambria, A. Hussain, C. Havasi, and C. Eckl, "Sentic computing: Exploitation of common sense for the development of emotion-sensitive systems", in Development of Multimodal Interfaces: Active Listening and Synchrony, Lecture Notes in Computer Science, pp. 148-156, Berlin: Springer, 2010.
[9] K. Buschmeier, P. Cimiano, and R. Klinger, "An impact analysis of features in a classification approach to irony detection in product reviews", in Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pp. 42-49, Baltimore, Maryland, Association for Computational Linguistics, 2014.
[10] A. N. Reganti, T. Maheshwari, U. Kumar, A. Das, and R. Bajpai, "Modelling satire in English text for automatic detection", in Proceedings of the Workshop on SENTIRE, ICDM, Barcelona, 2016.
[11] M. Jaiswal, S. Tabibu, and R. Bajpai, "The truth and nothing but the truth: Multimodal analysis for deception detection", in Proceedings of the Workshop on SENTIRE, ICDM, Barcelona, 2016.
[12] F. Alam, F. Celli, E. A. Stepanov, et al., "The social mood of news: Self-reported annotations to design automatic mood detection systems", in Proceedings of COLING, Osaka, 2016.
[13] R. E. Sanders, review of D. Sperber and D. Wilson, "Relevance: Communication and Cognition" (Oxford: Basil Blackwell, 1986), Language in Society, 17(4):604-609, 1988.
[14] G. Leech and M. Weisser, "Generic speech act annotation for task-oriented dialogues", in Proceedings of the 2003 Corpus Linguistics Conference, pp. 441-446, Centre for Computer Corpus Research on Language Technical Papers, Lancaster University, 2003.
[15] R. Gibbs and H. Colston, "Irony in Language and Thought: A Cognitive Science Reader", Lawrence Erlbaum Associates, 2007.
[16] J. W. Pennebaker, M. E. Francis, and R. J. Booth, "Linguistic Inquiry and Word Count: LIWC 2001", Mahwah: Lawrence Erlbaum Associates, 2001.
[17] S. Rude, E. M. Gortner, and J. W. Pennebaker, "Language use of depressed and depression-vulnerable college students", Cognition and Emotion, 18(8):1121-1133, 2004.
[18] C. J. Boyatzis and R. Varghese, "Children's emotional associations with colors", Journal of Genetic Psychology, 155(1):77-85.
[19] H. R. Carruthers, J. Morris, N. Tarrier, and P. J. Whorwell, "The Manchester Color Wheel: development of a novel way of identifying color choice and its validation in healthy, anxious and depressed individuals", BMC Medical Research Methodology, 10:12, 2010.
[20] A. G. Reece and C. Danforth, "Instagram photos reveal predictive markers of depression", preprint arXiv:1608.03282v2.
[21] A. Severyn and A. Moschitti, "Twitter sentiment analysis with deep convolutional neural networks", in Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, 2015.
[22] S. Poria, E. Cambria, and A. Gelbukh, "Deep convolutional neural network textual features and multiple kernel learning for utterance-level multimodal sentiment analysis", in Proceedings of EMNLP, 2015.
[23] S. Poria, E. Cambria, and A. Gelbukh, "Aspect extraction for opinion mining with a deep convolutional neural network", Knowledge-Based Systems, vol. 108, 2016.
[24] F. Alam, M. Danieli, and G. Riccardi, "Can we detect speakers' empathy? A real-life case study", in Proceedings of the IEEE International Conference on Cognitive InfoCommunications, 2016.
[25] M. D. Choudhary and S. De, "Mental health discourse on reddit: Self-disclosure, social support, and anonymity", in Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media, 2014.
[26] L. K. George, D. G. Blazer, D. C. Hughes, et al., "Social support and the outcome of major depression", The British Journal of Psychiatry, 154(4):478-485, 1989.
[27] M. Paul and M. Dredze, "You are what you tweet: Analyzing Twitter for public health", in Proceedings of ICWSM, 2011.
[28] J. Xu, X. Zhu, and A. Bellmore, "Fast learning for sentiment analysis on bullying", in Proceedings of the First International Workshop on Issues of Sentiment Discovery and Opinion Mining, 2012.
[29] J. Pestian, P. Matykiewicz, M. Linn-Gust, et al., "Sentiment analysis of suicide notes: A shared task", Biomedical Informatics Insights, 2012.
[30] T. Althoff, K. Clark, and J. Leskovec, "Natural language processing for mental health: Large scale discourse analysis of counseling conversations", preprint arXiv:1605.04462v1.
[31] J. Gibson, B. Xiao, Z. Imel, et al., "A deep learning approach to modeling empathy in addiction counseling", in Proceedings of Interspeech, 2016.
| [] |
[
"Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization",
"Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization"
] | [
"Meng Cao meng.cao@mail ",
"Yue Dong yue.dong2@mail ",
"Jackie Chi ",
"Kit Cheung jcheung@cs.mcgill.ca ",
"\nSchool of Computer Science\nMcGill University\nMontrealQCCanada\n",
"\nMILA\nMontrealQCCanada\n"
] | [
"School of Computer Science\nMcGill University\nMontrealQCCanada",
"MILA\nMontrealQCCanada"
] | [
"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics"
] | State-of-the-art abstractive summarization systems often generate hallucinations; i.e., content that is not directly inferable from the source text. Despite being assumed incorrect, we find that much hallucinated content is factual, namely consistent with world knowledge. These factual hallucinations can be beneficial in a summary by providing useful background information. In this work, we propose a novel detection approach that separates factual from non-factual hallucinations of entities. Our method utilizes an entity's prior and posterior probabilities according to pre-trained and finetuned masked language models, respectively. Empirical results suggest that our approach outperforms five baselines and strongly correlates with human judgments. Furthermore, we show that our detector, when used as a reward signal in an off-line reinforcement learning (RL) algorithm, significantly improves the factuality of summaries while maintaining the level of abstractiveness. 1 | 10.18653/v1/2022.acl-long.236 | [
"https://www.aclanthology.org/2022.acl-long.236.pdf"
] | 244,909,449 | 2109.09784 | e4706b609495f5bf6d197e0ddeaf85745cd8b304 |
Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization
Meng Cao meng.cao@mail
Yue Dong yue.dong2@mail
Jackie Chi Kit Cheung jcheung@cs.mcgill.ca
School of Computer Science
McGill University
MontrealQCCanada
MILA
MontrealQCCanada
Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
Volume 1, May 22-27, 2022. © 2022 Association for Computational Linguistics
State-of-the-art abstractive summarization systems often generate hallucinations; i.e., content that is not directly inferable from the source text. Despite being assumed incorrect, we find that much hallucinated content is factual, namely consistent with world knowledge. These factual hallucinations can be beneficial in a summary by providing useful background information. In this work, we propose a novel detection approach that separates factual from non-factual hallucinations of entities. Our method utilizes an entity's prior and posterior probabilities according to pre-trained and finetuned masked language models, respectively. Empirical results suggest that our approach outperforms five baselines and strongly correlates with human judgments. Furthermore, we show that our detector, when used as a reward signal in an off-line reinforcement learning (RL) algorithm, significantly improves the factuality of summaries while maintaining the level of abstractiveness. 1
Introduction
State-of-the-art abstractive summarization systems can generate fluent summaries with high automatic evaluation scores in terms of ROUGE (Lin, 2004). However, recent studies have shown that these systems are prone to hallucinate content that is not supported by the source document (Maynez et al., 2020;Kang and Hashimoto, 2020;Durmus et al., 2020;Filippova, 2020;Kryscinski et al., 2020). For instance, Maynez et al. (2020) discovered that 64.1% of the summaries generated by a BERT-based abstractive summarization model on XSUM (Narayan et al., 2018a) contain hallucinations.
Previous studies commonly assume that hallucination is an undesirable behavior in abstractive summarization systems. They investigate the cause of model hallucination (Kang and Hashimoto, 2020; Wang and Sennrich, 2020) and propose methods that reduce the frequency of all hallucinations (Filippova, 2020; Nan et al., 2021; Narayan et al., 2021).
1 https://github.com/mcao516/EntFA
Source: Under the proposals, 120,000 additional asylum seekers will be distributed among EU nations, with binding quotas. (...) Mr Juncker told the European Parliament it was "not a time to take fright". (...) He said tackling the crisis was "a matter of humanity and human dignity". "It is true that Europe cannot house all the misery in the world. But we have to put it into perspective." (...)
Generation: European Commission President Jean-Claude Juncker has set out his proposals for dealing with the migrant crisis in a speech to MEPs, saying Europe cannot house all the misery in the world.
Table 1: Example of factual hallucinations in a BART-generated summary on XSUM. Neither the title "European Commission President" nor the first name "Jean-Claude" is mentioned in the document, but both are factual.
Our stance in this paper is that hallucinations are not always undesirable: many factual hallucinations provide additional world knowledge that is important for summary comprehension. Table 1 presents one such example from XSUM: the hallucinated content European Commission President provides additional background information on the role of Mr. Juncker. Factual hallucinations refer to content that is verifiable by world knowledge but not inferable from source text.
We thus argue that not all hallucinations should be treated equally; in particular, factual hallucinations may be less deleterious, or even potentially beneficial to include in a summary, as opposed to non-factual ones. We propose a method to classify entities according to whether they are hallucinated and whether they are factual (if hallucinated). We focus on entities (e.g., persons, locations, dates, cardinal numbers) because they are necessary to express the most salient pieces of information in a summary. Moreover, entity hallucinations are common in generated summaries. As we will show later in our work, about 30% of entities generated by BART (Lewis et al., 2020) on the XSUM test set are hallucinated.
Our approach is inspired by the observation that many hallucinated entities are generated with low probabilities. This observation suggests that the summarization model's confidence correlates with the factuality statuses of generated entities. In other words, the uncertainty is indicative of the likelihood of whether generated entities are hallucinated and non-factual.
We refer to the probability of an entity being in a summary without considering the source document as its prior probability, and its probability given the document as its posterior probability. Our assumption is that if an entity in a generated summary results in a factual error, giving the source should not provide more evidence for it, resulting in a small change in probability between the prior and the posterior. Based on this assumption, we propose to use the prior and posterior probabilities as the key features in a simple classifier that predicts an entity's hallucination status and factuality.
Due to the lack of fine-grained hallucination annotation, we create an entity-level hallucination and factuality annotation on the XSUM dataset. We evaluate our detection method on this annotated dataset as well as annotations from Maynez et al. (2020). On both datasets, our approach outperforms five baseline models at identifying nonfactual hallucinations. In addition, our approach has a strong correlation with the factuality scores given by human judges. Besides, we show that our detector, when used as a reward signal in training neural-based summarizers with the off-line RL algorithm, significantly improves the factuality of generated summaries even when the underlying dataset is noisy.
Our contributions are the following: (i) We demonstrate that an entity's prior and posterior probabilities can be used to infer whether it is hallucinated and factual. Based on this hypothesis, we propose a novel approach for entity-level hallucination detection and factuality checking. Our approach outperforms five baselines from previous work on two human-annotated datasets, in addition to having a strong correlation with summarylevel factuality scores given by human judges. (ii) We empirically demonstrate that our classifier can provide reliable reward signals for RL algorithms, leading to improved factuality while maintaining the level of abstractiveness in generated summaries. (iii) We create a set of entity-level hallucination annotations.
Related Work
The correctness of summarization systems' outputs has been evaluated as one aspect of content selection in the past, for example using the Pyramid method (Nenkova and Passonneau, 2004). As neural abstractive summarizers have become popular, their issues with correctness have sparked much recent work that focuses specifically on model hallucinations and summary factuality (Kryscinski et al., 2020).
Model Hallucination
Maynez et al. (2020) conducted a large-scale human evaluation of several neural abstractive summarization systems, and found that hallucinations are common among the outputs of different summarization models.
Recently, many methods have been proposed to reduce model hallucination. Kang and Hashimoto (2020) propose a "loss truncation" training algorithm that filters out noisy training samples which may lead a model to hallucinate. Another line of work uses a verification system to recognize non-factual quantities in summaries and adopts a re-ranking step to reduce the number of hallucinated quantities in the final output summary. Narayan et al. (2021) use entity chains to mitigate the hallucination problem in the generation of abstractive summaries. Nan et al. (2021) show that data filtering, together with a summary-worthy entity classification task used as an auxiliary training objective, can help improve a model's entity-level factuality.
Filippova (2020) proposed a method for controlling hallucination in the data-to-text generation task. They suggest that a conditional language model (CLM) will put more probability mass on a non-hallucinated entity than an unconditional language model (LM). Our work differs in that we focus on both hallucination and factuality. Also, our method works at the entity level rather than the sentence level, and is geared towards text summarization.
Summary Factuality
Another line of work focuses on evaluating the factual consistency of abstractive summarization systems. Kryscinski et al. (2020) train models on an artificially corrupted dataset for factual error detection. Cao et al. (2020) induce artificial perturbations in text to train a summary error correction system, but find that there is a large gap between such artificial perturbations and the type of hallucinations that are generated by abstractive summarizers. Goyal and Durrett (2020) measure factual consistency by checking whether the semantic relationship manifested by individual dependency arcs in the generated summary is supported by the source document. Dong et al. (2020) and Durmus et al. (2020) measure and improve the factual consistency of summaries by asking and answering questions based on generated summaries and input documents.
Method
In this section, we propose a novel detection approach that separates factual from non-factual hallucinations of entities (Section 3.2), and present a factuality-aware training framework for summarization models trained on noisy datasets (Section 3.3).
Problem Statement
Let $(S, R)$ be a pair of a source document and a reference summary, where $S = (s_1, \ldots, s_M)$ is the source document with $M$ tokens, and $R = (r_1, \ldots, r_L)$ is the reference summary with $L$ tokens. Let $G = (g_1, \ldots, g_N)$ be the model-generated summary with $N$ tokens. For each named entity $e_k$, which we assume to be a span of tokens $g_{i_k}, \ldots, g_{i_k+|e_k|-1}$ ($|e_k| \geq 1$) starting at position $i_k$ in $G$, the task is to determine whether $e_k$ is hallucinated, and whether it is factual. We define an entity as hallucinated if it is not directly inferable in its generated context given the input document $S$. If an entity is hallucinated, we further classify it into two subtypes: factual hallucinations and non-factual hallucinations. Factual hallucinations cannot be directly entailed in their generated context from the source document but can be based on world knowledge (see Table 1). Non-factual hallucinations are entities that are neither inferable from the source nor based on world knowledge.
The Prior & Posterior Probability of an Entity
We now define the prior and posterior probabilities of an entity, which we will use to predict its hallucination and factuality statuses.
For entity $e_k$, we define its prior probability $p_{prior}(e_k)$ as the probability of its generation by a language model that does not have access to the source text. If $e_k$ spans multiple tokens, we compute its probability auto-regressively. Let $c_k$ be the context of entity $e_k$ in $G$, excluding the tokens in $e_k$. Then:

$$p_{prior}(e_k) = P_{MLM}(e_k \mid c_k) \qquad (1)$$
$$= \prod_{t=1}^{|e_k|} P_{MLM}(e_k^t \mid e_k^{1\ldots t-1}, c_k), \qquad (2)$$

which we compute using a masked language model $P_{MLM}$. The posterior probability $p_{pos}(e_k)$ of entity $e_k$ is the conditional probability of the entity given the context and the source text:

$$p_{pos}(e_k) = P_{CMLM}(e_k \mid c_k, S) \qquad (3)$$
$$= \prod_{t=1}^{|e_k|} P_{CMLM}(e_k^t \mid e_k^{1\ldots t-1}, c_k, S), \qquad (4)$$

where CMLM is a conditional masked language model. A CMLM is an encoder-decoder model trained with a masked language model objective on a parallel dataset. Specifically, a CMLM predicts a target sequence $T$ given a source text $S$ and a partially masked target $T_{masked}$, where $T_{masked}$ is the target sequence with a random entity masked out. In order to correctly generate the missing part of the sentence, the model needs to condition on both $T_{masked}$ and $S$. Alternatively, we can calculate the entity's posterior probability using a conditional language model (CLM) instead of a CMLM. In this case, the entity's posterior probability is defined as $P_{CLM}(e_k \mid c_{e_k}, S)$, where $c_{e_k} = g_1, \ldots, g_{i_k-1}$. Note that the CLM is only conditioned on the left context.
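To make the scoring concrete, the following is a minimal sketch (not the authors' released code) of how an entity's prior and posterior log-probabilities could be computed with BART-style (conditional) masked LMs using Hugging Face Transformers. The checkpoint names, the `entity_logprob` helper, and the plain whitespace concatenation of document and masked summary (in place of the paper's [S]/[\S] separators) are illustrative assumptions; in particular, an off-the-shelf `bart-large` checkpoint would still need CMLM fine-tuning before serving as the posterior model.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-large")
mlm = BartForConditionalGeneration.from_pretrained("facebook/bart-large")   # prior model (MLM)
cmlm = BartForConditionalGeneration.from_pretrained("facebook/bart-large")  # placeholder; assumed fine-tuned as a CMLM

@torch.no_grad()
def entity_logprob(model, encoder_text, summary, entity):
    """Sum of log P(entity tokens | context) under a seq2seq (C)MLM."""
    enc = tok(encoder_text, return_tensors="pt", truncation=True)
    dec = tok(summary, return_tensors="pt", truncation=True)
    out = model(**enc, labels=dec.input_ids)            # logits[:, i] predicts labels[:, i]
    logprobs = torch.log_softmax(out.logits[0], dim=-1)
    ent_ids = tok(entity, add_special_tokens=False).input_ids
    target = dec.input_ids[0].tolist()
    start = next(i for i in range(len(target))
                 if target[i:i + len(ent_ids)] == ent_ids)   # locate entity span in the target
    return sum(logprobs[i, target[i]].item()
               for i in range(start, start + len(ent_ids)))

summary = "Edinburgh Zoo's giant panda, Tian Tian, could give birth at the end of the month."
entity = " Tian Tian"                    # leading space matters for BPE tokenization
masked = summary.replace(entity.strip(), tok.mask_token)
document = "..."                         # full source article

log_p_prior = entity_logprob(mlm, masked, summary, entity)                   # Eq. (1)-(2)
log_p_pos = entity_logprob(cmlm, document + " " + masked, summary, entity)   # Eq. (3)-(4)
```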
Training a Discriminator
To classify the hallucination and factuality statuses of a given entity, we need a discriminator model. We use the K-Nearest Neighbors (KNN) algorithm: as a non-parametric method, it requires no explicit training and makes minimal assumptions about the form of the decision boundary, while also offering adequate interpretability. The KNN classifier is fit on our labeled dataset using the prior and posterior probabilities as features. Since the classifier is used for entity hallucination and factuality assessment, we refer to it as ENTFA. Besides the prior/posterior probabilities, we also add a binary overlap feature that indicates whether the entity appears in the source document. We train two classifiers, one for the hallucination detection task and one for the factuality checking task.
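As a rough illustration of the discriminator, the sketch below fits a scikit-learn KNN classifier on (prior, posterior, overlap) features. The feature rows and labels are toy placeholders standing in for scores produced by the (C)MLMs; the paper reports k = 30 for the XENT experiments (20 for MENT), but the toy example uses k = 3 so it runs on three samples.

```python
from sklearn.neighbors import KNeighborsClassifier

# one row per entity: [p_prior, p_pos, appears_in_source]
X_train = [
    [0.02, 0.91, 1],   # non-hallucinated, factual
    [0.01, 0.03, 0],   # non-factual hallucination
    [0.35, 0.82, 0],   # factual hallucination
]
y_factual = [1, 0, 1]  # 1 = factual, 0 = non-factual

# k = 30 in the paper; k = 3 here only so this toy example is runnable
factuality_clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_factual)
print(factuality_clf.predict_proba([[0.05, 0.75, 0]]))
```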
Improving the Factuality of Abstractive Summarization Systems
We now propose a factuality-aware training approach for summarization systems that combines our factuality assessment model with the latest offline RL technique.
RL for Text Generation Sequence generation of the tokens in the summary can be viewed as a finite Markov Decision Process (MDP). At each time-step $t$, the state $s_t$ consists of the source text $x$ and the previously generated tokens $y_{<t}$, i.e., $s_t = (y_{<t}, x)$. The agent, which is the summarization model, takes an action by generating a new token $a_t$. Depending on the action taken, the agent receives a reward $r_t = R(s_t, a_t)$ and deterministically transitions to the next state $s_{t+1} = (y_{<t+1}, x)$. The probability of each action (i.e., token) is specified by the policy $\pi_\theta(a_t \mid s_t)$. The goal of the agent is to maximize the discounted cumulative reward over the trajectory: $J(\theta) = \mathbb{E}_{\tau \sim \pi}\left[\sum_{t=0}^{T} \gamma^t r_t\right]$. When training the summarization model with human-written reference summaries, we can frame the training process as an off-line RL problem with expert demonstrations (i.e., the reference summaries). In this setting, since we sample trajectories from a behavior policy, we need an importance sampling term $w_t$ to correct the gradient estimate. Following Pang and He (2021), we approximate $w_t$ with $\pi_\theta(a_t \mid s_t)$, which gives the following gradient of the objective function:

$$\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_b}\left[\sum_{t=0}^{T} \pi_\theta(a_t \mid s_t)\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{Q}(a_t, s_t)\right], \qquad (5)$$

where $\hat{Q}(a_t, s_t) = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}$ is the estimated return from state $s_t$ and $\pi_b$ is the behavior policy from which we draw trajectories $\tau$. In our case, $\pi_b$ is the (noisy) summarization dataset.
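A compact sketch of how the gradient in Eq. (5) could be realized as a PyTorch loss: the trajectory is a reference summary, the importance weight is approximated by the (detached) policy probability of the reference token, and the reward-to-go plays the role of $\hat{Q}$. Tensor shapes and the call site are assumptions, not the authors' implementation.

```python
import torch

def offline_pg_loss(logits, target_ids, rewards, gamma=1.0):
    """logits: (T, V) decoder outputs; target_ids: (T,); rewards: (T,)."""
    logp = torch.log_softmax(logits, dim=-1)
    logp_a = logp.gather(1, target_ids.unsqueeze(1)).squeeze(1)  # log pi_theta(a_t | s_t)
    w = logp_a.exp().detach()                                    # importance weight ~ pi_theta(a_t | s_t)
    q = torch.zeros_like(rewards)                                # discounted reward-to-go Q_hat
    running = torch.tensor(0.0)
    for t in reversed(range(rewards.shape[0])):
        running = rewards[t] + gamma * running
        q[t] = running
    # minimizing this loss follows the gradient estimator in Eq. (5)
    return -(w * logp_a * q).sum()
```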
Training with a Factuality-based Reward One problem in the off-line RL setting is that expert demonstrations, which in our case are the reference summaries, are often noisy and contain content that cannot be inferred from the source document. The commonly used teacher forcing training encourages the model to blindly imitate the training data, which leads to model hallucination at inference time (Kang and Hashimoto, 2020).
To discourage the model from overfitting to the noise in the training set, we use the predictions from our classifier as factuality reward signals to guide the training of the summarization model. In the off-policy learning stage, we use our factuality classifier to label all the entities in the training set. If an entity is classified by our classifier as "non-factual", we consider it noise and give it a negative reward $-r_{nfe}$. For factual entities and all other tokens, we use the posterior probability from an MLE-trained model as the token-level reward, as in Pang and He (2021). Formally, we have:
$$R(s_t, a_t) = \begin{cases} -r_{nfe}, & \text{if } a_t \text{ is non-factual} \\ p_{MLE}(a_t \mid s_t), & \text{otherwise.} \end{cases}$$
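A small sketch of this reward assignment; `nonfactual_positions` is assumed to be the set of token indices covered by entities that the ENTFA classifier labels non-factual, and `p_mle` the per-token probabilities from an MLE-trained summarizer.

```python
def token_rewards(p_mle, nonfactual_positions, r_nfe=2.0):
    """p_mle: list of p_MLE(a_t | s_t); nonfactual_positions: set of token indices."""
    return [(-r_nfe if t in nonfactual_positions else p)
            for t, p in enumerate(p_mle)]

# e.g. token_rewards([0.9, 0.4, 0.7], {1}, r_nfe=2.0) -> [0.9, -2.0, 0.7]
```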
4 Dataset

4.1 XENT Dataset

To study entity hallucination and factuality in abstractive summarization, we need annotations of entity- or token-level hallucination. To the best of our knowledge, there is no such dataset available. Therefore, we create a dataset ourselves, which we call the XENT dataset.
We 2 annotate 800 summaries generated by BART, one of the state-of-the-art abstractive summarization models. The input documents are randomly selected from the XSUM test set. We choose XSUM because it is more abstractive than other summarization datasets. We extract 2,838 entities from the 800 generated summaries and randomly select 30% of the samples as our test set.
We manually labeled each entity with one of the following three tags: non-hallucinated, factual hallucination, and non-factual hallucination. First, we extract entities from the given summary using automatic NER tools (Honnibal and Montani, 2017). Then, we check whether each property associated with the identified entity can be directly entailed using the information from the source document. If so, the property is non-hallucinated. For instance, consider the entity "European Commission President Jean-Claude Juncker" in Table 1. The last name "Juncker" can be directly entailed from the source document; therefore, it is not a hallucination. However, the first name "Jean-Claude" and the position information "European Commission President" are not mentioned in the source. In the next step, we need to decide whether this information is factual using world knowledge, which often requires external resources such as Wikipedia or Google Search. In this case, "European Commission President" and "Jean-Claude" are both factual. If no information can be found online to prove or disprove a hallucinated entity, it is labeled as non-factual. There is a special case where an entity misrepresents information from the document: for instance, the summary might include a number from the document that is actually related to a different event. In this case, the entity is considered an intrinsic hallucination (Maynez et al., 2020). In this work, we focus on extrinsic hallucinations, so we discarded all intrinsic hallucinations in our experiments. Table 3 shows the distribution of entities by hallucination and factuality status in our labeled dataset. We show an example for each hallucination type in Table 2.

Table 2: Examples of four types of hallucinations. In the second example, the nationality of the groom and the country where the wedding took place are not directly stated in the source; according to information online, both entities are factual. In the third example, the terrorist attack described in the news took place at a place called "Lindt Cafe" according to Wikipedia; therefore, "the Waverley cafe" in the generated summary is non-factual.

Non-hallucinated
Source: (...) Tian Tian has had cubs in the past in China, before she came on loan to Edinburgh. (...) The panda enclosure at Edinburgh Zoo is due to close to visitors from Saturday ahead of a possible birth.
Summary: Edinburgh Zoo's giant panda, Tian Tian, could give birth at the end of the month.

Factual Hallucination
Source: The couple, who have been dating since 2011, wed in front of about 10 people in Mazan, Provence - close to where the bride's family has a holiday home. (...) Knightley, 28, announced her engagement to Righton, 29, last year. "Keira was a charming bride, very modest and simple in her attitude, as was James," (...)
Summary: Oscar-winning actress Keira Knightley and British musician James Righton have married in a small ceremony in France.

Non-factual Hallucination
Source: The city was brought to a standstill on 15 December last year when a gunman held 18 hostages for 17 hours. Family members of victims Tori Johnson and Katrina Dawson were in attendance. (...) Prime Minister Malcolm Turnbull gave an address saying a "whole nation resolved to answer hatred with love". (...)
Summary: Sydney has marked the first anniversary of the siege at the Waverley cafe in which two women were killed by a gunman in the Australian city.

Intrinsic Hallucination
Source: Christopher Huxtable, 34, from Swansea, had been missing since the collapse in February. His body was found on Wednesday and workers who carried out the search formed a guard of honour as it was driven from the site in the early hours of the morning. (...)
Summary: The body of a man whose body was found at the site of the Swansea Bay Power Station collapse has been removed from the site.
Inter-Annotator Agreement We report Fleiss's Kappa (κ) to assess the reliability of agreement between annotators. We compute agreement on a subset of 800 entities and obtain almost perfect agreement (0.80 ≤ κ ≤ 1.00) with κ = 0.809. Following Pagnoni et al. (2021), we also report the percentage µ of annotators that agree with the majority class. We obtain µ = 0.931 of annotators agreeing with the majority class on the four-category annotation, which shows substantial agreement.
MENT Dataset
Recently, Maynez et al. (2020) released a set of factuality and hallucination annotations for XSUM. For each generated summary, they labeled the hallucinated spans as well as the overall factuality of the summary. Compared with our labeling approach, their annotation has a lower granularity and does not distinguish between factual and non-factual hallucination. Therefore, we have to convert their dataset first before using it for evaluation.
To perform entity-level factuality checking on their dataset, we do the following. First, we extract entities from the annotated summaries. Entities extracted from factual summaries are labeled as factual. For each entity from a non-factual summary, if it is inside an extrinsic hallucinated span, we assume the entity is non-factual; otherwise the entity is labeled as factual. This process gives us a new dataset that has the same format as ours for entity-level factuality evaluation. We refer to this new dataset as the MENT dataset.
However, it is worth pointing out that the converted dataset is noisy. For instance, in Maynez et al. (2020)'s annotation, the entire generated summary is often labeled as a hallucinated span if it does not capture the meaning of the document well. In this case, the hallucinated span could still contain faithful entities with respect to the source document. This could result in false-positive non-factual entities after the conversion. Therefore, we filter out entities in the extrinsic hallucination span that also appear in the source document.
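The conversion and filtering described above can be summarized by the following sketch; the field names and span representation are our own simplification of Maynez et al. (2020)'s annotation format, not the released data schema.

```python
def convert_to_entity_labels(entities, hallucinated_spans, summary_is_factual, source_text):
    """entities: list of (text, start, end); hallucinated_spans: list of (start, end)."""
    labels = []
    for text, start, end in entities:
        in_halluc_span = any(s <= start and end <= e for s, e in hallucinated_spans)
        if (not summary_is_factual) and in_halluc_span:
            if text in source_text:
                continue                 # filter likely false positives that appear in the source
            labels.append((text, "non-factual"))
        else:
            labels.append((text, "factual"))
    return labels
```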
Evaluation Tasks
Entity-level Hallucination & Factuality Classification
We evaluate our method on entity-level hallucination and factuality classification tasks on XENT and MENT. For each entity in the summary, the model predicts a hallucination label and a factuality label. We conduct factuality and hallucination assessments separately for comparison with the baselines. We compare our method with five baseline models, which are discussed in detail in Section 6.1.
Correlation with Human Judgments of Factuality
In addition to entity-level classification performance, we also evaluate our method by correlating it against human judgments of factuality. Previous work has collected summary-level judgments of factuality from human annotators, which are then correlated with automatic evaluation measures applied to those summaries. To apply our entity-level method, we use the lowest classifier confidence for the factual class among a summary's entities as the factuality score for the entire summary. We evaluate correlation on two datasets, from Pagnoni et al. (2021) and Wang et al. (2020).
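Concretely, the summary-level score can be derived from the entity-level classifier as in the sketch below; treating entity-free summaries as fully factual (score 1.0) and assuming column 1 of `predict_proba` corresponds to the "factual" class are our simplifications.

```python
def summary_factuality_score(entity_features, clf):
    """entity_features: one (prior, posterior, overlap) row per entity in the summary."""
    if not entity_features:
        return 1.0
    probs = clf.predict_proba(entity_features)
    return float(min(p[1] for p in probs))   # lowest "factual" confidence among entities
```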
Evaluating the Factuality of Summarization Systems
To evaluate our factuality-aware training approach proposed in Section 3.3, we train a summarization model with factuality rewards and evaluate the model's predictions on the XSUM test set. To evaluate the faithfulness of generated summaries, we use the automatic faithfulness evaluation tools FEQA (Durmus et al., 2020) and DAE (Goyal and Durrett, 2020) 3 . We also calculate ROUGE scores, the percentage of novel n-grams, and the percentage of entities in the generated summaries that are not found in the source document (ENFS). The percentage of novel n-grams reflects the extractiveness of the summarization model.
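The two surface statistics can be computed roughly as follows; exact string matching against the source and spaCy's default NER model are simplifying assumptions on our part.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def novel_ngram_rate(source, summary, n=2):
    # fraction of summary n-grams that never appear in the source
    src = set(zip(*[source.split()[i:] for i in range(n)]))
    toks = summary.split()
    gen = [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return sum(g not in src for g in gen) / max(len(gen), 1)

def enfs_rate(source, summary):
    # fraction of generated entities not found in the source document (ENFS)
    ents = [e.text for e in nlp(summary).ents]
    return sum(e not in source for e in ents) / max(len(ents), 1)
```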
Classification Experiments
Baselines We compare with five baseline methods:
(1) The overlap-based method checks the word overlap between the summary and the source document. In our case, we check whether a given entity in the generated summary also exists in the source document. If it does not, the entity is classified as both hallucinated and non-factual.
(2) The synonym-based baseline extends the overlap-based baseline by checking the overlap of summary synonyms and source synonyms. See Zhou et al. (2020) for more details.
(3) The alignment-based baseline is based on the unsupervised word alignment method SimAlign (Jalili Sabet et al., 2020). SimAlign extracts word alignments from similarity matrices induced from pretrained embeddings. In our task, we treat all unaligned entities in summaries as hallucinated and non-factual.
(4) The LM-based method is proposed by Filippova (2020). It uses an LM and a CLM to compute a token's prior and posterior probabilities. In Filippova (2020)'s work, the values of p_prior and p_pos are compared: if the generated token does not match the reference and p_prior is greater than p_pos, the token is classified as hallucinated. Since we are evaluating the generated summary rather than the reference, we modify their method as follows: if the entity is not found in the source and p_prior > p_pos, the entity is classified as non-factual and hallucinated.
(5) Zhou et al. (2020) frame hallucination detection as a sequence labeling task and train a hallucination labeling model on synthetic data. We adapt their model to our task by fine-tuning it on XENT.
Evaluation Results on XENT Table 4 shows the evaluation results of our classifiers and the baselines in terms of both entity factuality and hallucination status classification. The results show that our approach outperforms all five baselines on the factuality classification task. To show that our model is statistically better than the baselines, we run a 10-fold cross-validated paired t-test comparing our model with the five baselines; our model is better than the baseline models with p-values less than 3.27e-5. On the hallucination detection task, the overlap-based and synonym-based baselines achieve relatively high accuracy. However, these methods cannot distinguish between factual and non-factual hallucinations, which is the reason for their performance degradation on the factuality classification task. For hallucination classification, the reason that computing word overlap with the source does not completely solve the detection problem is that hallucination is defined based on the semantic relationship between the source and the summary: there can exist words that are not in the source document but which can nevertheless be inferred from it.
Evaluation Results on MENT Dataset Table 5 shows the evaluation results on MENT. ENTFA is learned on our annotated training set with k set to 20. The performance of all models is lower on this dataset, which may be due to the fact that the converted dataset is noisier than the XENT dataset (see Section 4.2). For the factuality classification task, our model outperforms all five baseline models, which demonstrates the generalizability of our approach.
Correlation Experiments
Factuality Evaluation Results of Summarization Systems
Baselines We compare our approach with four baselines: a teacher-forcing trained summarizer (MLE), an RL-based summarizer (RL) (Pang and He, 2021), and a summarizer trained with the loss truncation technique from Kang and Hashimoto (2020). We also replace our factuality assessment model ENTFA with Filippova (2020)'s approach (LM-based) for entity factuality labeling as another baseline (see Section 3.3). Table 7 shows the evaluation results on XSUM. The results show that our approach outperforms all baselines, with fewer non-factual entities and higher faithfulness scores. Note that our approach has the lowest ENFS rate while having the highest percentage of factual hallucinations. Compared with the loss truncation baseline, our method also produces more novel n-grams. These results show that our method does not improve factuality by simply making the model more extractive. Figure 1 shows the factuality and abstractiveness trade-off curves of our model compared to the loss truncation baseline. At the same level of ROUGE performance, our method obtains a higher factuality score, which further demonstrates that our model generates both factual and high-quality summaries compared with the loss truncation baseline.
Analysis
Ablation Studies
To explore the effect of each feature, we conduct an ablation study by training the KNN classifier with fewer features. The results are shown in Table 8 and indicate that all the proposed features are useful. For factuality classification, the performance without the posterior feature drops significantly, from 81.82 to 70.30, suggesting that the posterior probability is crucial for factuality classification. For hallucination classification, the overlap-based feature has the most significant impact on model performance.

Prior/Posterior Probabilities

Figure 2 plots entities in the XENT dataset according to their prior and posterior probabilities and shows the KNN classification boundaries of ENTFA w/o overlap. In Figure 2a, we find that the non-factual hallucinated entities are clustered around the origin. This is in line with our expectations, since non-factual hallucinations have lower prior and posterior probabilities. Both factual hallucinated and non-hallucinated entities are gathered in the top area with high posterior probabilities. In Figure 2b, the KNN classifier separates the factual and non-factual entities with clear boundaries. A large portion of the factual hallucinated entities are correctly identified by CMLM_XSUM with relatively high posterior probabilities, which explains our model's superior performance on factuality checking. The top and right histograms in Figure 2b show the entity distributions over prior and posterior probability values, respectively; on average, factual entities have a significantly higher posterior probability than non-factual entities. Figure 3 shows histograms of the prior and posterior probabilities of entities from the MLM and CMLM_XSUM, separated by their class (i.e., whether they are hallucinated and/or factual). Non-hallucinated entities have a higher posterior probability than factual and non-factual hallucinations on average.
Conclusion
In this paper, we investigate the hallucination and factuality problems in abstractive summarization. We show that about 30% of the entities generated by a state-of-the-art summarization model are hallucinated and, more interestingly, that more than half of the hallucinated entities are factual with respect to the source document and world knowledge. We propose a novel detection method based on an entity's prior and posterior probabilities according to masked language models. Our approach outperforms five baseline models on both factuality classification and hallucination detection tasks on human-annotated datasets. In addition, using our classifier as a reward signal vastly improves the factuality of summarization systems. Our approach is limited to entity-level hallucination and factuality classification; in the future, we are interested in extending our work to arbitrary text spans.
A Appendix
A.1 Dataset Annotation Guidelines and Process
Before annotating the dataset at full scale, we conducted a pilot study with the annotators on a small evaluation set containing 10 document-summary pairs. We then discussed the task with the annotators and had them explain the labels they assigned, to ensure they fully understood the task and followed the guidelines. The guidelines can be summarized as follows:
(1) Read the source document and the generated summary. If the article is incomprehensible (e.g. too short or in a language other than English), mark it as corrupted.
(2) For each entity in the summary (identified by the NER tool), check whether the entity can be directly entailed in the summary context using only the information within the source document. If the answer is yes, label the entity as non-hallucinated. If the entity has multiple properties, annotate each property separately.
(3) If the source does not contain sufficient information to entail the entity, use Wikipedia or Google Search to determine the factuality of the entity. If no information can be found to prove or disprove the factuality of the entity, label it as a non-factual hallucination.
(4) If the entity is mentioned in the source document but is used in the wrong context and misrepresents information from the document, label the entity as an intrinsic hallucination.
We also ask the annotators to mark and annotate entities missed by the automatic NER tools. We then update the identified entities to ensure that the samples are consistent across all annotators. Annotators are paid $20 an hour for their work, which is above the minimum wage in their country of residence.

A.2 Patterns of Annotated Entities

Table 9 shows the patterns of hallucinated entities. For factual hallucinations, Person, GPE, and ORG are the three most common types. Among non-factual hallucinations, Date is the most common type (31.65%). Cardinal numbers are also easily hallucinated by the summarization model. Note that the proportion of Date and GPE entities among non-factual hallucinations is much higher than their proportion among all entities.

RL Training
In the off-line RL experiment, we initialize the model using the BART-large model fine-tuned on the XSUM dataset 4 . The discount factor γ is set to 1 and the learning rate is set to 1e-5.
We update the model for 30,000 steps in total with 1000 warm-up steps. We use polynomial decay to update the learning rate after each training step. No reward-shaping is used.
To make the training more stable, we use a separate policy network π̄_θ to compute the importance weight w. π̄_θ is kept as a slow copy of π_θ with the same model architecture: we use Polyak updates to slowly move the weights of π̄_θ in the direction of π_θ at every step, with an update rate of 0.01.
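A minimal sketch of this Polyak update; iterating over `parameters()` of two identically shaped PyTorch models is standard practice, and the update rate of 0.01 follows the value stated above.

```python
import torch

@torch.no_grad()
def polyak_update(slow_policy, policy, tau=0.01):
    # move the slow copy a small step toward the current policy
    for p_slow, p in zip(slow_policy.parameters(), policy.parameters()):
        p_slow.mul_(1.0 - tau).add_(tau * p)
```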
A.4 Classification Results on XENT Dataset

Table 10 shows the three-class classification results of our model on the XENT dataset. Since we are, to the best of our knowledge, the first work that distinguishes between factual and non-factual hallucinations, we do not have a baseline model for this three-way setting; we instead compare with other models separately in terms of factuality and hallucination classification in Section 6.1.
A.5 Evaluating Entity Factuality on Noisy Training Data

Figure 4 shows the evaluation results of summarization models trained on these datasets. We can see that the model generates fewer factual entities as the training set gets noisier. The figure also shows that ROUGE is not a favorable metric in terms of factuality evaluation: with the training set size fixed, the model achieves a higher ROUGE score at the expense of entity factuality. This indicates that if a system is optimized only for ROUGE, it may inadvertently harm factual consistency. We also observe that the word overlap method predicts a much lower entity factuality rate than ENTFA. This is because the word overlap method cannot identify factual hallucinations and therefore introduces many false-negative samples. To verify this, we extracted all entities from summaries generated by the model trained on 50k noisy samples (x-axis = 1.0). Among these entities, 7,358 do not appear in the source but are predicted as factual by our model; 50.5% of them can be found in the reference summary. In contrast, only 12.7% of the entities predicted as non-factual by our model appear in the reference.

Figure 5 shows the evaluation results of the PEGASUS model (Zhang et al., 2020) under the same evaluation setup. Both figures show a similar trend: the models achieve higher ROUGE scores when trained on noisier data, at the cost of generating more non-factual entities. Compared with the BART model, PEGASUS generates more hallucinated entities and has a higher ROUGE score overall. For instance, when both are trained on 50k clean samples, PEGASUS has a ROUGE-1 score of 0.450 compared with BART's 0.406, while the predicted factual entity rates for PEGASUS and BART are 84.79% and 91.81%, respectively. This may be because PEGASUS is pre-trained on a much larger corpus than BART. We leave the study of this phenomenon to future work.
A.6 Where Does the Model Learn to Hallucinate?

Table 3 shows that 30% of the entities in the summaries generated by BART are hallucinated, including 15% factual hallucinated entities. To generate factual hallucinated entities, the summarization model needs to integrate background knowledge into the summary. An interesting question is where the model learns that knowledge. Since BART is pre-trained on a large text corpus and fine-tuned on XSUM, the knowledge behind hallucinated entities could come either from the pre-training corpus or from the XSUM training set. To investigate this, we trained a separate CMLM on the CNN/DM dataset. Figure 6 shows the entity distributions from the two CMLM models. For non-hallucinated entities, the distributions are similar; for factual hallucinations, a large portion of them have very low posterior probabilities under CMLM_CNN/DM but high posterior probabilities under CMLM_XSUM. This pattern suggests that the knowledge behind many factual hallucinations comes from the XSUM training set.
We define $\sigma(e_k) = \log \frac{P_{CMLM_{XSUM}}(e_k)}{P_{CMLM_{CNN/DM}}(e_k)}$. If $\sigma(e_k) \geq 0$, it suggests that CMLM_XSUM is more confident that $e_k$ is factual than CMLM_CNN/DM. For a factual hallucination $e_k$, we can infer that the knowledge of $e_k$ is in XSUM if $\sigma(e_k)$ is large. To further verify this, we retrieve the 10 most similar documents from XSUM and CNN/DM for each factual hallucinated entity using TF-IDF. Then, we count the number of times each entity appears in those similar training samples. For entities with $\sigma(e_k) \geq 5$, the average number of appearances is 2.19 on XSUM and 0.77 on CNN/DM. For entities with $\sigma(e_k) \leq 0$, the average number of appearances becomes 2.85 and 2.46 on XSUM and CNN/DM, respectively. This further confirms that the knowledge of factual hallucinations with large $\sigma(e_k)$ comes from XSUM.
A.7 Comparison with Filippova (2020)'s Work

Filippova (2020)'s work on data-to-text generation shows that a low posterior probability from a CLM during decoding indicates hallucination. Taking the summarization model as an example, if an entity is generated with a very low posterior probability, the generated entity is likely to be hallucinated and non-factual. However, compared with a CMLM, a CLM has more uncertainty during decoding, since the right context of the entity is not yet determined. The uncertainty of the CLM comes from both content selection (text content and structure) and lexical choice (Xu et al., 2020); for the CMLM, the uncertainty is mostly reduced to the latter. Figure 7 shows the entity posterior probabilities from the CLM and CMLM models. As shown in the figure, most factual entities (blue points) are above the x = y line, meaning the CMLM assigns more probability to the same factual entity than the CLM. The ROC curve in Figure 8 further illustrates this: as the curves get closer to the origin, the threshold becomes larger, and the CMLM has a higher TPR than the CLM, i.e., the CMLM classifies more entities as factual. The higher AUC value of the CMLM further demonstrates that the CMLM is a better choice for factuality checking than the CLM.
Figure 1: The factuality and ROUGE score trade-off curve on XSUM. We use different reward values r_nfe for our approach and different drop rates c for the loss truncation baseline.

Figure 2: The distribution of entities over prior/posterior probability. Each point in the figure represents an entity (p_prior(e_k), p_pos(e_k)) and shading indicates the confidence of the classifier. (a) The distribution of entities; (b) the entity factuality classification results with a KNN (k = 20) classifier, where both factual hallucinated and non-hallucinated entities are colored blue; (c) the KNN (k = 20) classification boundaries of hallucinated and non-hallucinated entities.

Figure 3: Normalized histogram of model prediction probability for the three classes of entities. The first row shows the entities' posterior probability calculated using the CMLM; the second row shows the prior probability from the MLM.

Figure 4: Evaluation of an abstractive summarization model (BART) trained on datasets with different levels of noise. The y-axis on the left represents the percentage of entities classified as factual by ENTFA or the word overlap baseline; the y-axis on the right indicates ROUGE-1 scores. X-axis = 0 and x-axis = 1.0 mean that the model is trained on 50k clean samples and 50k noisy samples, respectively; x-axis = 0.5 represents a model trained on a mix of 25k clean and 25k noisy samples; x-axis = 2.0 represents a model trained on 100k noisy samples. All models are tested on XSUM's official test set. We observe a similar trend with the PEGASUS model (Figure 5).

Figure 5: Evaluation of PEGASUS_LARGE trained on datasets with different levels of noise.

Figure 6: Entity distribution over posterior probabilities from CMLM_XSUM and CMLM_CNN/DM. The shading shows the classification boundaries of the classifier.

Figure 7: Posterior probabilities calculated from the CLM and the CMLM. Both models are trained on the XSUM dataset.

Figure 8: ROC curve of entity posterior probability and factuality.
Table 3: Statistics of the labeled dataset. See Appendix A.2 for more details.

                     Hallucination        Factuality
                     Acc.     F1          Acc.     F1
Overlap-based        92.93    91.73       81.25    74.19
Synonym-based        90.76    89.42       81.30    74.79
Alignment            78.35    71.10       81.65    66.03
LM-based             74.18    54.99       84.54    57.80
Zhou et al. (2020)   86.66    81.71       85.76    75.07
ENTFA (ours)         92.93    91.73       90.95    81.82

Table 4: Entity factuality and hallucination status evaluation results on XENT. We report the accuracy and (macro) F1 score on the test set. The number of neighbors k is set to 30 for both tasks.

6 Experiments

Training CMLM & MLM For training the CMLM, we use both the XSUM (Narayan et al., 2018b) and CNN/DailyMail (Hermann et al., 2015) datasets. To build a training corpus for the CMLM, we randomly select one entity in each reference summary and mask it with a special [MASK] token. We append a [S] token at the beginning of each summary. The document and summary are concatenated together (separated by a [\S] token) as the CMLM's input. The training target is the reference summary without any masking. Unless otherwise specified, we use the CMLM trained on XSUM. For the MLM, we use the large BART model, which is pre-trained on five different reconstruction tasks including token masking and text infilling. For more details on the experimental setup and hyperparameter settings, see Appendix A.3.
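A sketch of constructing one CMLM training pair as just described: a random entity in the reference summary is replaced by [MASK], a [S] token marks the summary, and the document and masked summary are joined with a [\S] separator. spaCy NER, `random.choice`, and the exact placement of the special tokens are our assumptions about details the description leaves implicit.

```python
import random
import spacy

nlp = spacy.load("en_core_web_sm")

def make_cmlm_example(document, reference_summary):
    ents = list(nlp(reference_summary).ents)
    if not ents:
        return None                                # skip summaries without entities
    ent = random.choice(ents)
    masked = (reference_summary[:ent.start_char] + "[MASK]"
              + reference_summary[ent.end_char:])
    source = document + " [\\S] [S] " + masked      # encoder input: document + masked summary
    target = "[S] " + reference_summary             # decoder target: unmasked summary
    return source, target
```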
Table 5: Entity-level factuality evaluation results on the converted MENT dataset (Maynez et al., 2020).

Metric          FRANK (Partial Pearson's ρ)   Wang et al. (PCC)
BLEU            0.139                          0.118
ROUGE-1         0.155                          0.132
BERTScore       -0.0359                        0.025
QAGS            -0.0225                        0.175
FEQA            0.0242                         -
DAE             0.0444                         -
ENTFA (ours)    0.183                          0.268

Table 6: Summary-level Pearson correlation coefficients between various automatic metrics and human judgments of factuality for the XSUM dataset. In the middle column, we use the FRANK benchmark for factuality evaluation metrics from Pagnoni et al. (2021); in the right column, we use the human judgments collected by Wang et al. (2020). All baseline coefficient values are cited from their papers.

Table 6 presents the correlation evaluation results. On Pagnoni et al. (2021)'s benchmark dataset, our approach has the highest partial Pearson correlation coefficient, ρ = 0.183 (p < 1e-8). On Wang et al. (2020)'s dataset (right column), our approach outperforms all other automatic metrics significantly. These results indicate that our model can be used for automatic factuality evaluation of summaries at both the entity and sentence levels.
System               R1↑    RL↑    % novel unigrams↑   % novel bigrams↑   % ENFS↓   FEQA↑   DAE↑   % Factual Ent↑   % Factual Hal↑
MLE                  45.1   37.3   27.86               74.47              42.0      25.9    34.6   82.8             21.4
RL                   45.8   37.6   28.14               74.73              43.2      25.6    33.3   82.8             21.6
LM-based             43.2   34.6   29.75               75.86              38.2      24.2    31.3   87.4             21.7
Loss trunc (c=0.3)   44.1   36.0   26.82               73.39              41.3      26.3    36.4   83.9             21.3
Loss trunc (c=0.7)   42.7   34.8   26.61               73.19              40.6      26.7    38.8   84.1             20.7
Ours (r_nfe = 2.0)   44.6   36.2   27.71               74.90              37.2      26.5    37.3   90.1             24.0
Ours (r_nfe = 4.0)   43.0   34.9   26.87               74.11              32.8      27.3    40.8   92.5             22.4

Table 7: Comparison of different summarization models. Results are evaluated on XSUM's official test set. "% Factual Ent" and "% Factual Hal" are the percentages of factual entities and factual hallucinations classified by the ENTFA model, respectively. "% ENFS" is the percentage of entities in the generated summary that are not found in the source document. For the loss truncation baseline, c is the percentage of data to be dropped.
Table 8: Ablation studies of different feature combinations. We report the F1 score on the XENT test set.
Esin Durmus, He He, and Mona Diab. 2020. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5055-5070, Online. Association for Computational Linguistics.

Katja Filippova. 2020. Controlled hallucinations: Learning to generate faithfully from noisy data. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 864-870, Online. Association for Computational Linguistics.

Tanya Goyal and Greg Durrett. 2020. Evaluating factuality in generation with dependency-level entailment. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3592-3603, Online. Association for Computational Linguistics.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, volume 28, pages 1693-1701. Curran Associates, Inc.

Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear.

Masoud Jalili Sabet, Philipp Dufter, François Yvon, and Hinrich Schütze. 2020. SimAlign: High quality word alignments without parallel training data using static and contextualized embeddings. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1627-1643, Online. Association for Computational Linguistics.

Daniel Kang and Tatsunori Hashimoto. 2020. Improved natural language generation via loss truncation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 718-731, Online. Association for Computational Linguistics.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.

Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332-9346, Online. Association for Computational Linguistics.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.

Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906-1919, Online. Association for Computational Linguistics.

Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira dos Santos, Henghui Zhu, Dejiao Zhang, Kathleen McKeown, and Bing Xiang. 2021. Entity-level factual consistency of abstractive text summarization. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2727-2733, Online. Association for Computational Linguistics.

Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018a. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807, Brussels, Belgium. Association for Computational Linguistics.

Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018b. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium.

Shashi Narayan, Yao Zhao, Joshua Maynez, Gonçalo Simoes, and Ryan McDonald. 2021. Planning with entity chains for abstractive summarization. arXiv preprint arXiv:2104.07606.

Ani Nenkova and Rebecca J. Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 145-152.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.
Table 9: Percentage of each type of entity in the XENT dataset. GPE stands for geopolitical entity, i.e. countries, cities, states. ORG includes companies, agencies, institutions.
A.3 Experimental Setup
Dataset We use both the XSUM (Narayan et al., 2018b) and CNN/DailyMail (Hermann et al., 2015) datasets in this work. CNN/DailyMail is a widely used summarization benchmark with 287,227 training samples, 13,368 validation samples, and 11,490 test samples. The XSUM dataset contains 226,711 British Broadcasting Corporation (BBC) articles, each paired with a single-sentence summary written by BBC journalists. The dataset is split into three subsets: training (204,045, 90%), validation (11,332, 5%), and test (11,334, 5%) sets.

Language Model Hyperparameters All language models used in this paper are based on the Transformer encoder-decoder architecture from the Fairseq library (Ott et al., 2019), which is written in PyTorch (Paszke et al., 2017). For the CMLM training, we initialize the model with the checkpoint of the large BART model. The maximum sequence length is set to 1024 for both the encoder and decoder modules. We fine-tune the model for 15,000 steps with the number of warm-up steps set to 500. We use the standard cross-entropy loss as our objective function with 0.1 label smoothing (Szegedy et al., 2016). The Adam optimizer (Kingma and Ba, 2015) with ϵ = 1e-8 and an initial learning rate of 3e-5 is used for training. The dropout rate in each layer is set to 0.1. These hyperparameter values follow the recommended values from the fairseq (Ott et al., 2019) library. All experiments are conducted on 4 Tesla V100 GPUs with 32GB of memory.
Table 10: Evaluation results on XENT. We report the leave-one-out error of our ENTFA model with prior probability, posterior probability, and word overlap as features.
Recent work (Narayan et al., 2021; Nan et al., 2021) has shown that filtering out noisy training samples in the XSUM dataset can mitigate the hallucination issue. Therefore, we divide the XSUM training set into clean samples and potentially noisy samples. Potentially noisy samples are those where the reference summary contains entities that do not appear in the source. This gives us around 150k potentially noisy training samples and 50k clean training samples. We then mix the clean samples with noisy samples at different proportions to create training sets with different levels of noise.
Two coauthors and three graduate students. The data collection process was approved by the institution's ethics committee.

In this work, we define the faithfulness of a summary as whether it is faithful with respect to the source, and factuality as whether it is factual with respect to world knowledge.
https://github.com/pytorch/fairseq/tree/master/examples/bart
Acknowledgments

This research was supported by the Canada CIFAR AI Chair program and Samsung Electronics. We would also like to thank Compute Canada for providing us computing resources.
Meng Cao, Yue Dong, Jiapeng Wu, and Jackie Chi Kit Cheung. 2020. Factual error correction for abstractive summarization models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6251-6258, Online. Association for Computational Linguistics.

Yue Dong, Shuohang Wang, Zhe Gan, Yu Cheng, Jackie Chi Kit Cheung, and Jingjing Liu. 2020. Multi-fact correction in abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9320-9331, Online. Association for Computational Linguistics.

Artidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Mexico City.

Richard Yuanzhe Pang and He He. 2021. Text generation by learning from demonstrations. In International Conference on Learning Representations.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.

Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the factual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5008-5020, Online. Association for Computational Linguistics.

Chaojun Wang and Rico Sennrich. 2020. On exposure bias, hallucination and domain shift in neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3544-3552, Online. Association for Computational Linguistics.

Jiacheng Xu, Shrey Desai, and Greg Durrett. 2020. Understanding neural abstractive summarization models via uncertainty. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6275-6281, Online. Association for Computational Linguistics.

Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 11328-11339. PMLR.

Zheng Zhao, Shay B. Cohen, and Bonnie Webber. 2020. Reducing quantity hallucinations in abstractive summarization. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2237-2249, Online. Association for Computational Linguistics.

Chunting Zhou, Jiatao Gu, Mona Diab, Paco Guzman, Luke Zettlemoyer, and Marjan Ghazvininejad. 2020. Detecting hallucinated content in conditional neural sequence generation. arXiv preprint arXiv:2011.02593.
| [
"https://github.com/mcao516/EntFA",
"https://github.com/pytorch/fairseq/"
] |
[
"Entailment Relation Aware Paraphrase Generation",
"Entailment Relation Aware Paraphrase Generation"
] | [
"Abhilasha Sancheti \nUniversity of Maryland\nCollege Park\n",
"♦ ♣ Balaji \nUniversity of Maryland\nCollege Park\n",
"Vasan Srinivasan \nUniversity of Maryland\nCollege Park\n",
"Rachel Rudinger rudinger@umd.edu \nUniversity of Maryland\nCollege Park\n"
] | [
"University of Maryland\nCollege Park",
"University of Maryland\nCollege Park",
"University of Maryland\nCollege Park",
"University of Maryland\nCollege Park"
] | [] | We introduce a new task of entailment relation aware paraphrase generation which aims at generating a paraphrase conforming to a given entailment relation (e.g. equivalent, forward entailing, or reverse entailing) with respect to a given input. We propose a reinforcement learning-based weaklysupervised paraphrasing system, ERAP, that can be trained using existing paraphrase and natural language inference (NLI) corpora without an explicit task-specific corpus. A combination of automated and human evaluations show that ERAP generates paraphrases conforming to the specified entailment relation and are of good quality as compared to the baselines and uncontrolled paraphrasing systems. Using ERAP for augmenting training data for downstream textual entailment task improves performance over an uncontrolled paraphrasing system, and introduces fewer training artifacts, indicating the benefit of explicit control during paraphrasing. | 10.1609/aaai.v36i10.21376 | [
"https://arxiv.org/pdf/2203.10483v1.pdf"
] | 247,594,090 | 2203.10483 | c44a192fc5bd431f0b30691b92f6c72207c8b8eb |
Entailment Relation Aware Paraphrase Generation
Abhilasha Sancheti
University of Maryland
College Park
♦ ♣ Balaji
University of Maryland
College Park
Vasan Srinivasan
University of Maryland
College Park
Rachel Rudinger rudinger@umd.edu
University of Maryland
College Park
Entailment Relation Aware Paraphrase Generation
We introduce a new task of entailment relation aware paraphrase generation which aims at generating a paraphrase conforming to a given entailment relation (e.g. equivalent, forward entailing, or reverse entailing) with respect to a given input. We propose a reinforcement learning-based weaklysupervised paraphrasing system, ERAP, that can be trained using existing paraphrase and natural language inference (NLI) corpora without an explicit task-specific corpus. A combination of automated and human evaluations show that ERAP generates paraphrases conforming to the specified entailment relation and are of good quality as compared to the baselines and uncontrolled paraphrasing systems. Using ERAP for augmenting training data for downstream textual entailment task improves performance over an uncontrolled paraphrasing system, and introduces fewer training artifacts, indicating the benefit of explicit control during paraphrasing.
Introduction
Paraphrase is "an alternative surface form in the same language expressing the same semantic content as the original form" (Madnani and Dorr 2010). Although the logical definition of paraphrase requires strict semantic equivalence (or bi-directional entailment (Androutsopoulos and Malakasiotis 2010)) between a sequence and its paraphrase, data-driven paraphrasing accepts a broader definition of approximate semantic equivalence (Bhagat and Hovy 2013). Moreover, existing automatically curated paraphrase resources do not align with this logical definition. For instance, pivot-based paraphrasing rules extracted by Ganitkevitch, Van Durme, and Callison-Burch (2013) contain hypernym or hyponym pairs, e.g. due to variation in the discourse structure of translations, and unrelated pairs, e.g. due to misalignments or polysemy in the foreign language.
While this flexibility of approximate semantic equivalence allows for greater diversity in expressing a sequence, it comes at the cost of the ability to precisely control the semantic entailment relationship (henceforth "entailment relation") between a sequence and its paraphrase. This trade-off severely limits the applicability of paraphrasing systems or resources to a variety of downstream natural language understanding (NLU) tasks, e.g. machine translation, question answering, information retrieval, and natural language inferencing (Pavlick et al. 2015) (Figure 1). For instance, semantic divergences in machine translation have been shown to degrade translation performance (Carpuat, Vyas, and Niu 2017; Pham et al. 2018).

Figure 1: An entailment-unaware system might output approximately equivalent paraphrases. Label-preserving augmentations generated using such a system for the textual entailment task can result in incorrect labels (red). Explicit entailment relation control in an entailment-aware system helps reduce such incorrectly labeled augmentations (green).
Existing works identify directionality (forward, reverse, bi-directional, or no implication) of paraphrase and inference rules (Bhagat, Pantel, and Hovy 2007), and add semantics (natural logic entailment relationships such as equivalence, forward or reverse entailment, etc.) to data-driven paraphrasing resources (Pavlick et al. 2015) leading to improvements in lexical expansion and proof-based RTE systems, respectively. However, entailment relation control in paraphrase generation is, to our knowledge, a relatively unexplored topic, despite its potential benefit to downstream applications (Madnani and Dorr 2010) such as Multi-Document Summarization (MDS) (or Information Retrieval (IR)) wherein having such a control could allow the MDS (or IR) system to choose either the more specific (reverse entailing) or general (forward entailing) sentence (or query) depending on the purpose of the summary (or user needs).
To address the lack of entailment relation control in paraphrasing systems, we introduce a new task of entailment relation aware paraphrase generation: given a sequence and an entailment relation, generate a paraphrase which conforms to the given entailment relation. We consider three entailment relations (controls) in the spirit of monotonicity calculus (Valencia 1991): (1) Equivalence (≡) refers to semantically equivalent paraphrases (e.g. synonyms), where the input sequence entails its paraphrase and vice-versa; (2) Forward Entailment (⊏) refers to paraphrases that lose information from the input or generalize it (e.g. hypernyms), i.e. the input sequence entails its paraphrase; (3) Reverse Entailment (⊐) refers to paraphrases that add information to the input or make it more specific (e.g. hyponyms), i.e. the input sequence is entailed by its paraphrase. The unavailability of paraphrase pairs annotated with such a relation makes it infeasible to directly train a sequence-to-sequence model for this task. Collecting such annotations for existing large paraphrase corpora such as ParaBank (Hu et al. 2019b) or ParaNMT is expensive due to scale. We address this challenge in 3 ways: (1) by building a novel entailment relation oracle based on the natural language inference (NLI) task (Bowman et al. 2015a; Williams, Nangia, and Bowman 2018) to obtain weak supervision for entailment relations for existing paraphrase corpora; (2) by recasting an existing NLI dataset, SICK (Marelli et al. 2014), into a small supervised dataset for this task; and (3) by proposing the Entailment Relation Aware Paraphraser (ERAP), a reinforcement-learning-based (RL-based) weakly-supervised system that can be trained only using existing paraphrase and NLI corpora, with or without weak supervision for the entailment relation.
Intrinsic and extrinsic evaluations show the advantage of entailment relation aware (henceforth "entailment-aware") paraphrasing systems over entailment-unaware (standard uncontrolled paraphrase generation) counterparts. Intrinsic evaluation of ERAP (via a combination of automatic and human measures) on the recasted SICK (§3) dataset shows that the generated paraphrases conform to the given entailment relation with high accuracy while maintaining good or improved paraphrase quality when compared against entailment-unaware baselines. Extrinsic data-augmentation experiments (§5) on the textual entailment task show that augmenting training sets using an entailment-aware paraphrasing system leads to improved performance over an entailment-unaware paraphrasing system and makes the downstream model less susceptible to incorrect predictions on adversarial examples.
Entailment Relation Aware Paraphraser
Task Definition Given a sequence of tokens X = [x_1, ..., x_n] and an entailment relation R ∈ {Equivalence (≡), Forward Entailment (⊏), Reverse Entailment (⊐)}, we generate a paraphrase Y = [y_1, ..., y_m] such that the entailment relationship between X and Y is R. Ŷ denotes the generated paraphrase and Y the reference paraphrase.
Neural paraphrasing systems (Prakash et al. 2016; Li et al. 2018) employ a supervised sequence-to-sequence model to generate paraphrases. However, building a supervised model for this task requires paraphrase pairs with entailment relation annotations. To address this, we propose an RL-based paraphrasing system, ERAP, which can be trained with existing paraphrase and NLI corpora without any additional annotations. ERAP (Figure 2) consists of a paraphrase generator (§2.1) and an evaluator (§2.2) comprising various scorers that assess the quality of generated paraphrases along different aspects. Scores from the evaluator are combined (§2.3) to provide feedback to the generator in the form of rewards. Employing RL allows us to explicitly optimize the generator over measures of paraphrase quality, including non-differentiable ones.

Figure 2: ERAP: the generator takes in a sequence X and an entailment relation R, and outputs a paraphrase Ŷ. Ŷ is scored by the various scorers in the evaluator, and a combined score (the reward) is sent back to train the generator. The hypothesis-only adversary is adversarially trained on Ŷ and the predictions from the entailment relation consistency scorer.
Paraphrase Generator
The generator is a transformer-based (Vaswani et al. 2017) sequence-to-sequence model which takes (X, R) and generates Ŷ. We denote the generator as G(Ŷ | X, R; θ_g), where θ_g refers to the parameters of the generator. We incorporate the entailment relation as a special token prepended to the input sequence. This way, the entailment relation receives special treatment (Kobus, Crego, and Senellart 2017) and the generator learns to generate paraphrases for a given X and R.
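To make the conditioning concrete, a minimal sketch of prepending the relation control token is shown below. The token strings (<EQ>, <FWD>, <REV>) and the pre-trained checkpoint are illustrative assumptions rather than the exact ERAP configuration, which trains a transformer encoder-decoder from scratch.

```python
# Minimal sketch: prepend an entailment-relation control token to the input
# before feeding it to a seq2seq paraphraser. Token strings and model choice
# are illustrative assumptions, not the exact ERAP configuration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

RELATION_TOKENS = {"equivalence": "<EQ>", "forward": "<FWD>", "reverse": "<REV>"}

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
# Register the control tokens so they are not split into sub-words.
tokenizer.add_special_tokens({"additional_special_tokens": list(RELATION_TOKENS.values())})
model.resize_token_embeddings(len(tokenizer))

def paraphrase(text: str, relation: str) -> str:
    # The relation token is prepended to the source sequence, in the spirit of
    # domain-control approaches (Kobus, Crego, and Senellart 2017).
    source = f"{RELATION_TOKENS[relation]} {text}"
    inputs = tokenizer(source, return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=40, num_beams=5)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(paraphrase("a man is playing a guitar", "forward"))
```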
Paraphrase Evaluator
The evaluator comprises several scorers that assess the quality of the generated paraphrase along three aspects: semantic similarity with the input, expression diversity from the input, and entailment relation consistency. It provides rewards for the paraphrases generated by the generator as feedback, which is used to update the parameters of the generator. We describe the various scorers below.
Semantic Similarity Scorer provides a reward that encourages the generated paraphrase Ŷ to have a similar meaning to the input sequence X. We use MoverScore (Zhao et al. 2019) to measure the semantic similarity between the generated paraphrase and the input, denoted as r_s(X, Ŷ). MoverScore combines contextualized representations with word mover's distance (Kusner et al. 2015) and has shown high correlation with human judgments of text quality.
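A sketch of how the similarity reward r_s could be computed with the publicly released moverscore package is shown below; the function names and arguments are assumed from the package's moverscore_v2 module and may differ from the paper's exact wrapper.

```python
# Sketch of the semantic-similarity reward r_s(X, Y_hat) via MoverScore.
# Assumes the word_mover_score interface from the released moverscore_v2
# module; the paper's exact wrapper may differ.
from moverscore_v2 import get_idf_dict, word_mover_score

def similarity_reward(inputs, paraphrases):
    idf_ref = get_idf_dict(inputs)        # IDF weights for the input side
    idf_hyp = get_idf_dict(paraphrases)   # IDF weights for the generated side
    scores = word_mover_score(
        inputs, paraphrases, idf_ref, idf_hyp,
        stop_words=[], n_gram=1, remove_subwords=True,
    )
    return scores  # one r_s value per (input, paraphrase) pair

print(similarity_reward(["a man plays a guitar"], ["a person is playing the guitar"]))
```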
Expression Diversity Scorer rewards the generated paraphrase to ensure that it uses different tokens or surface forms to express the input. We measure this aspect by computing n-gram dissimilarity (inverse BLEU (Papineni et al. 2002)),
r_d(X, Ŷ) = 1 − BLEU(Ŷ, X).    (1)
Following Hu et al. (2019b), we use a modified BLEU without the length penalty to avoid generating short paraphrases, which can otherwise result in high inverse BLEU scores.
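Below is a self-contained sketch of the diversity reward in Eq. 1, using clipped n-gram precisions without a brevity penalty; the exact BLEU variant and smoothing used in the paper may differ.

```python
# Sketch of the expression-diversity reward r_d(X, Y_hat) = 1 - BLEU(Y_hat, X),
# using modified n-gram precisions (up to 4-grams) and no brevity penalty.
# The exact BLEU variant used in the paper may differ.
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_bleu(hypothesis, reference, max_n=4, eps=1e-9):
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped overlap
        total = max(sum(hyp_ngrams.values()), 1)
        log_precisions.append(log(max(overlap / total, eps)))
    return exp(sum(log_precisions) / max_n)  # geometric mean, no length penalty

def diversity_reward(x, y_hat):
    return 1.0 - modified_bleu(y_hat, x)

print(diversity_reward("a man is playing a guitar", "a person plays the guitar"))
```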
Entailment Relation Consistency Scorer is a novel scorer designed to reward the generated paraphrase for adhering to the given entailment relation R. To compute the reward, we build an oracle O(X, Y) (details in §3) based on the natural language inference (NLI) task and use the likelihood of the given entailment relation under the Oracle as the score: r_l(X, Ŷ, R) = O(l = R | X, Ŷ). As discussed further in §4.3, we found that the entailment relation consistency scorer can lead the generator to learn simple heuristics (e.g. adding the same adjective such as 'desert', or trailing tokens like 'and says' or 'with mexico' for ⊐, or short outputs for ⊏), yielding degenerate paraphrases with a high consistency score. Inspired by the idea of hypothesis-only baselines (Poliak et al. 2018) for the NLI task, we build a novel RoBERTa-based Hypothesis-only Adversary, A(l | Ŷ), to penalize generated paraphrases that resort to such heuristics. The adversary is a 3-class classifier trained on the paraphrases generated during the training phase, with the oracle prediction for the (X, Ŷ) pair as the ground truth. The adversary loss L(A) is
L(A) = − Σ_{c=1}^{|C|} O(l = c | X, Ŷ) log A(l = c | Ŷ),    (2)
where |C| = 3 is the number of entailment relations. Training the adversary in this way helps it adapt to the heuristics adopted by the generator over the course of training. The generator and the adversary are trained alternately, similar to a GAN (Goodfellow et al. 2014) setup. The penalty is computed as the likelihood of the entailment relation being R under the adversary, p_l(Ŷ, R) = A(l = R | Ŷ). We only penalize those generated paraphrases for which the predicted relation is the same as the input relation, because an incorrect prediction indicates that no heuristic has been exploited by the generator.
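A minimal sketch of how the consistency reward and the hypothesis-only penalty could be combined is given below; oracle_probs and adversary_probs are hypothetical callables standing in for the trained NLI oracle and the RoBERTa adversary, not the paper's actual models.

```python
# Sketch of the entailment-relation consistency reward r_l and the
# hypothesis-only adversary penalty p_l. `oracle_probs` and `adversary_probs`
# are hypothetical stand-ins for the trained oracle O(. | X, Y_hat) and
# adversary A(. | Y_hat); each returns a dict over the three relations.
RELATIONS = ("equivalence", "forward", "reverse")

def consistency_score(x, y_hat, relation, oracle_probs, adversary_probs):
    oracle = oracle_probs(x, y_hat)          # O(l | X, Y_hat)
    r_l = oracle[relation]                   # reward: likelihood of the target relation
    adv = adversary_probs(y_hat)             # A(l | Y_hat), sees only the paraphrase
    predicted = max(adv, key=adv.get)
    # Penalize only when the adversary recovers the target relation from the
    # paraphrase alone, i.e. the generator may be exploiting a heuristic.
    p_l = adv[relation] if predicted == relation else 0.0
    return r_l - p_l

# Toy usage with uniform stand-in models:
uniform = lambda *args: {r: 1.0 / len(RELATIONS) for r in RELATIONS}
print(consistency_score("a man runs", "a person runs", "equivalence", uniform, uniform))
```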
Reinforcement Learning Setup
The output paraphrases from the generator are sent to the scorers for evaluation. The scores from the individual scorers are combined to give feedback (in the form of a reward) to the generator, which updates its parameters to improve the quality of the generated paraphrases conforming to the given relation. We emphasize that although the scores from our scorers are not differentiable with respect to θ_g, we can still use them by employing RL (the REINFORCE algorithm) to update the parameters of the generator (Williams 1992).
In the RL paradigm, the state at time t is defined as s_t = (X, R, Ŷ_{1:t−1}), where Ŷ_{1:t−1} refers to the first t − 1 tokens already generated in the paraphrase. The action at time t is the t-th token to be generated. Let V be the vocabulary and T the maximum output length. The total expected reward of the current generator is then given by

J(G) = Σ_{t=1}^{T} E_{Ŷ_{1:t−1} ∼ G} [ Σ_{y_t ∈ V} P(y_t | s_t) Q(s_t, y_t) ],

where P(y_t | s_t) is the likelihood of token y_t given the current state s_t, and Q(s_t, y_t) is the cumulative discounted reward for a paraphrase extended from Ŷ_{1:t−1}. The total reward Q is defined as the discounted sum of the token-level rewards:
Q(s_t, y_t) = Σ_{τ=t}^{T} γ^{τ−t} r(s_τ, y_τ),    (3)
where r(s_τ, y_τ) is the reward of token y_τ at state s_τ, and γ ∈ (0, 1) is a discounting factor so that future rewards have decreasing weights, since their estimates are less accurate. If we consider that Ŷ_{1:t−1} has been given, then for every y_t the total expected reward becomes
J(G) = Σ_{t=1}^{T} Σ_{y_t ∈ V} P(y_t | s_t) Q(s_t, y_t).    (4)
Sequence Sampling To obtain r(s_t, y_t) at each time step t, we need scores for each token. However, by design, the scorers only evaluate complete sequences rather than single tokens or partial sequences. We therefore use the technique of rolling out (Yu et al. 2017), where the generator "rolls out" a given sub-sequence Ŷ_{1:t} to generate a complete sequence by sampling the remaining part Ŷ_{t+1:T}. Following Gong et al. (2019), we use a combination of beam search and multinomial sampling to balance reward-estimation accuracy at each time step and diversity of the generated sequence. We first generate a reference paraphrase Ŷ^{ref}_{1:T} using beam search and draw n samples of complete sequences Ŷ_{1:T} by rolling out the sub-sequence Ŷ^{ref}_{1:t} with multinomial sampling to estimate the reward at each time step t.

Reward Estimation We send the n samples of complete sequences drawn from the sub-sequence Ŷ^{ref}_{1:t} to the scorers. The combined score f(s_t, y_t) for an action y_t at state s_t is computed by averaging the scores of the complete sequences rolled out from Ŷ^{ref}_{1:t}, defined as
f(s_t, y_t) = (1/n) Σ_{i=1}^{n} [ α · (r_l(X, Ŷ^i, R) − p_l(Ŷ^i, R)) + β · r_s(X, Ŷ^i) + δ · r_d(X, Ŷ^i) ],    (5)
where α, β, δ, and n are hyperparameters empirically set to 0.4, 0.4, 0.2, and 2, respectively. These parameters control the trade-off between the different aspects of this multi-objective task. Following Siddique, Oymak, and Hristidis (2020), we threshold¹ the scorers' scores so that the final reward maintains a good balance across the various scores. For example, generating diverse tokens at the expense of losing too much semantic similarity is not desirable. Similarly, copying the input sequence as-is into the generation is clearly not a paraphrase (i.e., r_s(X, Ŷ) = 1). We define the reward r(s_t, y_t) for action y_t at state s_t as
r(s_t, y_t) = f(s_t, y_t) − f(s_{t−1}, y_{t−1}) for t > 1, and r(s_1, y_1) = f(s_1, y_1) for t = 1.    (6)
The discounted cumulative reward Q(s_t, y_t) is then computed from the rewards r(s_τ, y_τ) at each time step using Eq. 3, and the total expected reward is derived using Eq. 4.
The generator loss L(G) is defined as −J(G).
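To make Eqs. 3-6 concrete, here is a small self-contained sketch that turns per-step combined scores into shaped rewards and discounted returns and forms a REINFORCE-style loss; the weights and toy numbers are illustrative assumptions, not values from the paper's runs.

```python
# Sketch of the reward computation (Eqs. 3-6): combined score f per step,
# shaped reward r as the difference of consecutive f values, discounted
# return Q, and a REINFORCE-style loss -sum(log P(y_t|s_t) * Q_t).
# The toy scorer outputs below are illustrative, not values from the paper.

def combined_score(r_l, p_l, r_s, r_d, alpha=0.4, beta=0.4, delta=0.2):
    # Eq. 5 for a single rolled-out sample (averaging over n samples omitted).
    return alpha * (r_l - p_l) + beta * r_s + delta * r_d

def shaped_rewards(f_values):
    # Eq. 6: r_1 = f_1, and r_t = f_t - f_{t-1} for t > 1.
    return [f_values[0]] + [f_values[t] - f_values[t - 1] for t in range(1, len(f_values))]

def discounted_returns(rewards, gamma=0.99):
    # Eq. 3: Q_t = sum_{tau >= t} gamma^(tau - t) * r_tau.
    returns, running = [0.0] * len(rewards), 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def reinforce_loss(log_probs, returns):
    # Negative of the expected reward J(G); gradients flow through log P(y_t|s_t).
    return -sum(lp * q for lp, q in zip(log_probs, returns))

# Toy example with three decoding steps.
f_values = [combined_score(0.70, 0.10, 0.80, 0.40),
            combined_score(0.75, 0.10, 0.78, 0.45),
            combined_score(0.80, 0.05, 0.80, 0.50)]
rewards = shaped_rewards(f_values)
returns = discounted_returns(rewards)
print(reinforce_loss([-1.2, -0.8, -0.5], returns))
```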
Training Details
Pre-training Pre-training has been shown to be critical for RL to work in unsupervised settings (Siddique, Oymak, and Hristidis 2020; Gong et al. 2019); therefore, we pre-train the generator on existing large paraphrase corpora, e.g. ParaBank (Hu et al. 2019b) or ParaNMT (Wieting and Gimpel 2018), in two ways: (1) Entailment-aware pre-training uses the Oracle (§3) to obtain entailment relations for paraphrase pairs in the train set of the paraphrase corpora, filters the semantically-divergent (§3) pairs, upsamples or downsamples to balance the data across relations, and trains the generator with weak supervision for the entailment relation and gold paraphrases; (2) Entailment-unaware pre-training trains the generator on paraphrase pairs as-is, without any entailment relation. Pre-training is done in a supervised manner with the cross-entropy loss and offers immediate benefits: the generator learns paraphrasing transformations and gets a warm start, leading to faster model training.
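A minimal sketch of the entailment-aware data preparation described above is shown below; oracle_relation is a hypothetical stand-in for the Oracle of §3, the control-token strings are illustrative, and downsampling to the smallest class is one of the two resampling options mentioned in the text.

```python
# Sketch of entailment-aware pre-training data preparation: label paraphrase
# pairs with the Oracle, drop semantically-divergent pairs, balance relations
# by resampling, and prepend the relation control token to the source side.
# `oracle_relation` is a hypothetical stand-in for the Oracle of Section 3.
import random
from collections import defaultdict

KEEP = {"equivalence": "<EQ>", "forward": "<FWD>", "reverse": "<REV>"}

def build_pretraining_data(pairs, oracle_relation, seed=0):
    random.seed(seed)
    by_relation = defaultdict(list)
    for src, tgt in pairs:
        rel = oracle_relation(src, tgt)
        if rel in KEEP:                      # discard contradiction/neutral/invalid
            by_relation[rel].append((src, tgt))
    # Balance across relations by downsampling to the smallest class
    # (upsampling the minority class is the other option mentioned above).
    k = min(len(v) for v in by_relation.values())
    examples = []
    for rel, rel_pairs in by_relation.items():
        for src, tgt in random.sample(rel_pairs, k):
            examples.append((f"{KEEP[rel]} {src}", tgt))
    random.shuffle(examples)
    return examples

toy_oracle = lambda s, t: "equivalence"  # placeholder oracle
print(build_pretraining_data([("a man runs", "a person runs")], toy_oracle))
```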
RL-based Fine-tuning We fine-tune the generator using feedback from the evaluator on the recasted SICK dataset (details in §3). For practical purposes, our RL fine-tuning approach only requires input sequences, without any annotations for the entailment relation or ground-truth paraphrases. However, for a fair comparison against supervised or weakly-supervised baselines (§4.1), we use the gold entailment relation for recasted SICK during RL fine-tuning.
Collecting Labeled Paraphrase Data
Entailment-aware paraphrasing requires paraphrase pairs annotated with an entailment relation. However, collecting entailment relation annotations for large paraphrase corpora such as ParaBank² (Hu et al. 2019b) is too costly. To obtain entailment relations automatically for available paraphrase pairs, we train an NLI classifier and use it to derive the entailment relations as described below.

Entailment Relation Oracle NLI is a standard natural language understanding task of determining whether a hypothesis h is true (entailment³ E), false (contradiction C), or undetermined (neutral N) given a premise p (MacCartney 2009). To build an entailment relation oracle O(X, Y), we first train a RoBERTa-based 3-class classifier o(l | ⟨p, h⟩) to predict the uni-directional (E, N, C) labels for a ⟨p, h⟩ pair. This classifier is then run forwards (⟨X, Y⟩) and backwards (⟨Y, X⟩) on the paraphrase pairs, and the uni-directional predictions are used to derive the entailment relations as follows:
O(X, Y) =
  ≡        if o(l | ⟨X, Y⟩) = E and o(l | ⟨Y, X⟩) = E
  ⊏        if o(l | ⟨X, Y⟩) = E and o(l | ⟨Y, X⟩) = N
  ⊐        if o(l | ⟨X, Y⟩) = N and o(l | ⟨Y, X⟩) = E
  C        if o(l | ⟨X, Y⟩) = C and o(l | ⟨Y, X⟩) = C
  N        if o(l | ⟨X, Y⟩) = N and o(l | ⟨Y, X⟩) = N
  Invalid  otherwise
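A direct sketch of this case analysis is shown below; nli_label stands in for the trained classifier o and is a hypothetical callable returning one of "E", "N", or "C".

```python
# Sketch of the Entailment Relation Oracle O(X, Y): run a uni-directional NLI
# classifier in both directions and map the label pair to a relation.
# `nli_label` is a hypothetical stand-in for the trained classifier o.
def oracle_relation(x, y, nli_label):
    forward = nli_label(x, y)   # o(l | <X, Y>)
    backward = nli_label(y, x)  # o(l | <Y, X>)
    mapping = {
        ("E", "E"): "equivalence",         # ≡
        ("E", "N"): "forward entailment",  # ⊏
        ("N", "E"): "reverse entailment",  # ⊐
        ("C", "C"): "contradiction",       # C
        ("N", "N"): "neutral",             # N
    }
    return mapping.get((forward, backward), "invalid")

toy_nli = lambda p, h: "E" if len(p) >= len(h) else "N"  # placeholder classifier
print(oracle_relation("a small brown dog is running", "a dog is running", toy_nli))
```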
The Oracle serves two purposes: (1) it generates weak supervision for entailment relations for existing paraphrase corpora, and (2) it assesses the generated paraphrases for relation consistency. We only focus on the ≡, ⊏, and ⊐ relations, as contradictory, neutral, or invalid pairs are considered semantically-divergent sentence pairs.

Recasting SICK Dataset SICK (Marelli et al. 2014) is an NLI dataset created from sentences describing the same picture or video, which are near paraphrases. It consists of sentence pairs (p, h) with human-annotated NLI labels for both directions ⟨p, h⟩ and ⟨h, p⟩. We recast this dataset to obtain paraphrase pairs with entailment relation annotations derived from the gold bi-directional labels in the same way as O. We only consider the sentence pairs which were created by combining meaning-preserving transformations (details in Appendix B). We augment this data by adding valid samples obtained by reversing sentence pairs (∀ p ⊏ h, we add h ⊐ p, and ∀ p ≡ h, we add h ≡ p). Data statistics are in Table 1.

Oracle Evaluation We train the NLI classifier o on existing NLI datasets, namely MNLI (Williams, Nangia, and Bowman 2018), SNLI (Bowman et al. 2015a), and SICK (Marelli et al. 2014), as well as diagnostic datasets such as HANS (McCoy, Pavlick, and Linzen 2019) and those introduced in Glockner, Shwartz, and Goldberg (2018) and Min et al. (2020), using the cross-entropy loss. Combining diagnostic datasets during training has been shown to improve the robustness of NLI systems, which can otherwise resort to simple lexical or syntactic heuristics (Glockner, Shwartz, and Goldberg 2018; Poliak et al. 2018) to perform well on the task. The accuracy of o(l | ⟨p, h⟩) on the combined test sets of the datasets used for training is 92.32%, and the accuracy of the Entailment Relation Oracle O(X, Y) on the test set of recasted SICK is 81.55%. Before using the Oracle to obtain weak supervision for entailment relations for training purposes, we validate it by manually annotating 50 random samples from ParaBank; 78% of the annotated relations were the same as the Oracle predictions when the C, N, and Invalid labels were combined.
Intrinsic Evaluation
Here we provide details on the entailment-aware and unaware comparison models, and the evaluation measures.
Comparison Models
Train  1344  684  684  420  1274  2524  641
Dev     196   63   63   43   143   281   71
Test   1386  814  814  494  1404  2790  712

Table 1: E, N, C denote entailment, neutral, and contradiction, respectively; Others refers to a neutral or invalid relation.

To contextualize ERAP's performance, we train several related models, including supervised and weakly-supervised, entailment-aware and unaware models, to obtain lower- and upper-bound performance on recasted SICK, as follows: (1) the generator is trained on recasted SICK in an entailment-aware (Seq2seq-A) and unaware (Seq2seq-U) supervised setting;
(2) the generator is pre-trained on the ParaBank dataset in entailment-aware (Pre-trained-A) and unaware (Pre-trained-U) settings and tested directly on the test set of recasted SICK; (3) the pre-trained generators are fine-tuned on recasted SICK in entailment-aware (Fine-tuned-A) and unaware (Fine-tuned-U) supervised settings; (4) multiple outputs (k ∈ {1, 5, 10, 20}) are sampled using nucleus sampling (Holtzman et al. 2019) from Seq2seq-U (Re-rank-s2s-U) or Fine-tuned-U (Re-rank-FT-U) and re-ranked based on the combined score f(s_t, y_t). The highest-scoring output is taken as the final output for Re-rank-s2s-U and Re-rank-FT-U.
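A small sketch of the re-ranking baselines is given below; generate_candidates and combined_score are hypothetical stand-ins for nucleus sampling from the unaware generator and for the evaluator's combined score f.

```python
# Sketch of the Re-rank-* baselines: sample k candidates with nucleus sampling
# from an entailment-unaware generator, score each with the evaluator's
# combined score for the desired relation, and keep the best one.
# `generate_candidates` and `combined_score` are hypothetical stand-ins.
def rerank(x, relation, generate_candidates, combined_score, k=20):
    candidates = generate_candidates(x, num_samples=k)   # nucleus sampling
    scored = [(combined_score(x, y_hat, relation), y_hat) for y_hat in candidates]
    best_score, best_candidate = max(scored)
    return best_candidate

# Toy usage with stand-ins:
gen = lambda x, num_samples: [x, x.replace("man", "person")]
score = lambda x, y, rel: float(x != y)
print(rerank("a man is running", "equivalence", gen, score, k=2))
```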
Evaluation Measures
Automatic evaluation of paraphrase quality is primarily done using iBLEU (Sun and Zhou 2012), which penalizes copying from the input. We also report BLEU (Papineni et al. 2002) (up to 4-grams, using the sacrebleu library) and Diversity (measured as in Eq. 1) scores to understand the trade-off between these measures. We also compute R-Consistency, defined as the percentage of test examples for which the entailment relation predicted using the oracle is the same as the given entailment relation.

Human evaluation is conducted on 4 aspects: (1) semantic similarity, which measures the closeness in meaning between the paraphrase and the input on a scale of 5 (Li et al. 2018); (2) diversity in expression, which measures whether different tokens or surface forms are used in the paraphrase with respect to the input on a scale of 5 (Siddique, Oymak, and Hristidis 2020); (3) grammaticality, which measures whether the paraphrase is well-formed and comprehensible on a scale of 5 (Li et al. 2018); (4) relation consistency, which measures the % of examples for which the annotated entailment relation is the same as the input relation. Three annotations per sample are collected for similarity, diversity, and grammaticality using Amazon Mechanical Turk (AMT), and the authors (blinded to the identity of the model and following proper guidelines) manually annotate relation consistency, as it is more technical and AMT annotators were unable to answer the qualification questions correctly. More details are in Appendix E.
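For reference, below is a sketch of one common iBLEU formulation (Sun and Zhou 2012); the weight α = 0.9 and the BLEU variant (clipped n-gram precisions, no brevity penalty) are assumptions, since the exact settings are not spelled out here.

```python
# Sketch of one common iBLEU formulation (Sun and Zhou 2012):
#   iBLEU = alpha * BLEU(candidate, reference) - (1 - alpha) * BLEU(candidate, source),
# rewarding similarity to the reference while penalizing copying of the source.
# alpha = 0.9 and this BLEU variant are assumptions; the paper's settings may differ.
from collections import Counter
from math import exp, log

def _bleu(hyp, ref, max_n=4, eps=1e-9):
    hyp, ref = hyp.split(), ref.split()
    logs = []
    for n in range(1, max_n + 1):
        h = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        r = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        logs.append(log(max(sum((h & r).values()) / max(sum(h.values()), 1), eps)))
    return exp(sum(logs) / max_n)

def ibleu(candidate, reference, source, alpha=0.9):
    return alpha * _bleu(candidate, reference) - (1 - alpha) * _bleu(candidate, source)

print(ibleu("a person plays the guitar", "a person is playing the guitar", "a man is playing a guitar"))
```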
Results and Analysis
To use paraphrasing models for downstream tasks, we need to ensure that the generated paraphrases conform to the specified entailment relation and are of good quality.

Automatic evaluation We first evaluate the pre-trained generators on a held-out set from ParaBank containing 500 examples for each relation. Table 2 shows that the entailment-aware generator outperforms its unaware counterpart across all the measures. This boost is observed with weak supervision for the entailment relation, demonstrating the good quality of the weak supervision. Next, we evaluate ERAP variants against the comparison models (§4.1) on the recasted SICK test samples belonging to the ≡, ⊏, ⊐ relations and report the results in Table 3.⁴ Entailment-aware (-A) variants outperform the corresponding unaware (-U) variants on iBLEU score, while outperforming the majority-class (i.e., ≡) copy-input baseline (except for Seq2seq-A). Weakly-supervised pre-training helps boost performance in terms of iBLEU and R-Consistency, as evident from the higher scores of the Fine-tuned-A(U) models over Seq2seq-A(U). The poor performance of the Seq2seq variants is due to the small dataset size and the much harder multi-objective task. Re-ranking outputs from Fine-tuned-U achieves higher iBLEU and consistency scores than Pre-trained-A, which is explicitly trained with weak supervision for the relation. However, this comes at the computational cost of sampling multiple⁵ (k=20) outputs. The improved performance of the Fine-tuned models over Seq2seq indicates the importance of pre-training. Both ERAP variants achieve higher iBLEU and consistency than their lower-bounding (Pre-trained) models, but the outputs show less diversity in expression and make conservative lexical or syntactic changes. These results look encouraging until we notice Copy-input (last row), which achieves high BLEU and iBLEU, indicating that these metrics fail to punish copying of the input (an observation consistent with Niu et al. (2020)).

Table 4: Ablation of scorers in ERAP. Con, Sim, Div refer to the relation consistency, semantic similarity, and expression diversity scorers. Underline denotes more copying of the input for the Diversity score and the presence of heuristics in outputs for the R-Consistency score, relative to gold references.
Ablation analysis of each scorer
We demonstrate the effectiveness of each scorer in ERAP via an ablation study in Table 4. Using only the consistency scorer to reward the generated paraphrases, a significant improvement in consistency score is observed compared to Pre-trained-A and the gold references. However, this high score may come at the cost of semantic similarity (e.g. example 1 in Table 5), wherein the output conforms to the ⊏ relation at the cost of losing much of the content. Adding the similarity scorer helps retain some of the content (higher BLEU and iBLEU) but results in copying (low diversity) from the input. The addition of the diversity scorer helps introduce diversity in expression. However, the model is still prone to heuristics (e.g. losing most of the content from the input (1 in Table 5), or adding irrelevant tokens such as 'with mexico' or 'desert' (2 in Table 5)) to ensure a high consistency score. Introducing the Adversary reduces the heuristics learned by the generator. Together, all the scorers help maintain a good balance for this multi-objective task.
Human evaluation We report the human evaluation for 25 test outputs each from 8 models on 4 measures in Table 6. ERAP-A achieves the highest consistency while maintaining a good balance between similarity, diversity, and grammaticality. Re-rank-s2s-U has the highest diversity, which comes at the cost of semantic similarity and grammaticality (e.g. 3 in Table 5). A strikingly different observation is the high similarity and low diversity of the Pre-trained variants, reinforcing the issues with existing automatic measures.
Extrinsic Evaluation
The intrinsic evaluations show that ERAP produces quality paraphrases while adhering to the specified entailment relation. Next, we examine the utility of entailment-aware paraphrasing models over unaware models for a downstream application, namely paraphrastic data augmentation for the textual entailment task. Given two sentences, a premise p and a hypothesis h, the task of textual entailment is to determine if a human would infer that h is true from p. Prior work has shown that paraphrastic augmentation of textual entailment datasets improves performance (Hu et al. 2019a); however, these approaches make the simplifying assumption that entailment relations are preserved under paraphrase, which is not always the case (see Figure 1; moreover, 30% of ParaBank pairs were found to be semantically divergent using the Oracle). We use the SICK NLI dataset for this task because we have a paraphrasing system trained on a similar data distribution.⁶ We hypothesize that entailment-aware augmentations will result in fewer label violations, and thus overall improved performance on the textual entailment task. Moreover, explicit control over the entailment relation allows a greater variety of augmentations to be generated with entailment-aware models (an exhaustive list of label-preserving augmentations based on the entailment relation between p (or h) and its paraphrase is presented in Table 7).

Paraphrastic Data Augmentation We generate paraphrases for all premises p ∈ P and hypotheses h ∈ H present in the train set of SICK NLI using entailment-aware and unaware models. We obtain augmentation data by combining all the paraphrases (generated using entailment-aware models) with the original data and label them as per Table 7. Augmentation paraphrases generated from entailment-unaware models are (naively) assumed to hold the ≡ relation. RoBERTa-based binary classifiers are trained on the original dataset along with the paraphrastic augmentations to predict whether p entails h.

Susceptibility to Augmentation Artifacts If paraphrastic augmentations introduce noisy training examples with incorrectly projected labels, this could lead to what we call augmentation artifacts in downstream models. We hypothesize that paraphrastically augmented textual entailment (henceforth, PATE) models trained on entailment-aware augmentations will be less susceptible to such artifacts than models trained with entailment-unaware augmentations. To test this, we generate augmentations for the test set of SICK NLI and manually annotate 1253 augmented samples to obtain 218 incorrectly labeled examples. We evaluate PATE models on these examples (referred to as adversarial test examples).

Extrinsic Results We report the accuracy of PATE models on the original SICK development and test sets as well as on the adversarial test examples in Table 8. As per our hypothesis, models trained with augmentations generated using entailment-aware models result in improved accuracy on both original and adversarial test samples over those trained with entailment-unaware augmentations. The textual entailment model trained only on SICK NLI data performs best on the adversarial test set, as expected, showing that although augmentation boosts model performance, it introduces augmentation artifacts during training.

1. Qualitative outputs for ablation study
Input ⊏: a shirtless man is escorting a horse that is pulling a carriage along a road
Reference: a shirtless man is leading a horse that is pulling a carriage
Only Con: a shirtless man escorts it.
Con+Sim: a shirtless man is escorting a horse.
Con+Sim+Div: a shirtless man escorts a horse.
ERAP-A: a shirtless person is escorting a horse who is dragging a carriage.

2. Example of heuristic learned w/o Hypothesis-only Adversary
Input ⊐: a man and a woman are walking through a wooded area
Reference: a man and a woman are walking together through the woods
-Adversary: a desert man and a woman walk through a wooded area
ERAP-A: a man and a woman are walking down a path through a wooded area
Input ⊐: four girls are doing backbends and playing outdoors
Reference: four kids are doing backbends in the park
-Adversary: four girls do backbends and play outside with mexico.
ERAP-A: four girls do backbends and play games outside.

3. Output from comparison models and ERAP
Input ≡: a ribbon is being twirled by a girl in pink
Reference: a girl in pink is twirling a ribbon
Seq2seq-U: a girl is talking to a girl in pink
Seq2seq-A: a girl in blue is surfing on a pink
Fine-tuned-U: a girl in pink is turning a tape
Fine-tuned-A: a girl in pink is circling the ribbon.
Re-rank-U: a girl is talking to a girl in a pink built
Re-rank-A: a girl in a pink jacket is swinging a ribbon
Pre-trained-U: a girl in pink is circling the ribbon.
Pre-trained-A: the ribbon is being twisted by a girl in pink
ERAP-U: the ribbon is being twisted by the girl in the pink
ERAP-A: a girl in pink is twirling a ribbon.

Table 5: Qualitative outputs: 1 showing the effectiveness of various scorers, 2 showing a heuristic learned in the absence of the hypothesis-only adversary, and 3 showing outputs from various models.
Original Augmentations ≡ ⊐ ⊏ Unknown ⟨p, h⟩ ⟨p ′ , h⟩ ⟨p, h ′ ⟩ ⟨p ′ , h ′ ⟩ ⟨pr, h⟩ ⟨pr, h ′ ⟩ ⟨p, hf ⟩ ⟨p ′ , hf ⟩ ⟨pr, hf ⟩ ⟨pf, h ′ ⟩ ⟨pf, hf ⟩ ⟨p, hr⟩ ⟨p ′ , hr⟩ ⟨pr, hr⟩ ⟨pf, r⟩ ⟨pf, h⟩ E/N E E/N E E/N E E/N E E/N E E/N E E/U E/U E/U U /U U /U U /U U /U U /U U /U U /U
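To illustrate how relation-controlled paraphrases can expand the space of label-preserving augmentations, the sketch below encodes a simplified version of the monotonicity-style composition logic; it is an illustration of the idea, not the exact rule set of Table 7, and the relation names are assumptions.

```python
# Simplified sketch of label composition for paraphrastic augmentation of a
# textual-entailment pair (p, h) with label "entail" or "not-entail".
# This illustrates the monotonicity-style reasoning behind Table 7, not the
# exact rule set used in the paper.
def augmented_label(original_label, premise_relation, hypothesis_relation):
    """premise_relation / hypothesis_relation: relation of the paraphrase to the
    original sentence ('equivalence', 'reverse', 'forward', or None if unchanged)."""
    if original_label == "entail":
        # A reverse-entailing (more specific) premise paraphrase still entails h,
        # and a forward-entailing (more general) hypothesis paraphrase is still
        # entailed by p; equivalence always preserves the label.
        premise_ok = premise_relation in (None, "equivalence", "reverse")
        hypothesis_ok = hypothesis_relation in (None, "equivalence", "forward")
        return "entail" if premise_ok and hypothesis_ok else "unknown"
    # For non-entailment, only equivalence substitutions safely preserve the label
    # under this simplified view.
    premise_ok = premise_relation in (None, "equivalence")
    hypothesis_ok = hypothesis_relation in (None, "equivalence")
    return "not-entail" if premise_ok and hypothesis_ok else "unknown"

# A reverse-entailing premise paraphrase keeps an "entail" label:
print(augmented_label("entail", premise_relation="reverse", hypothesis_relation=None))
```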
Related Work
Paraphrase generation is a common NLP task with widespread applications. Earlier approaches are rule-based (Barzilay, McKeown, and Elhadad 1999; Ellsworth and Janin 2007) or data-driven (Madnani and Dorr 2010). Recent supervised deep learning approaches use LSTMs (Prakash et al. 2016), VAEs (Gupta et al. 2018), pointer-generator networks (See, Liu, and Manning 2017), and transformer-based sequence-to-sequence models. Li et al. (2018) use RL for supervised paraphrasing. Unsupervised paraphrasing is a challenging and emerging NLP task with limited efforts so far. Bowman et al. (2015b) train a VAE to sample less controllable paraphrases. Others use Metropolis-Hastings sampling (Miao et al. 2019), simulated annealing, or dynamic blocking (Niu et al. 2020) to add constraints to the decoder at test time. Siddique, Oymak, and Hristidis (2020) use RL to maximize an expected reward based on adequacy, fluency, and diversity. Our RL-based approach draws inspiration from this work while introducing an oracle and a hypothesis-only adversary.
Controllable text generation is a closely related field, where efforts have been made to add lexical (Hu et al. 2019a; Garg et al. 2021) or syntactic control (Iyyer et al. 2018; Chen et al. 2019; Goyal and Durrett 2020) to improve the diversity of paraphrases. However, ours is the first work that introduces a semantic control for paraphrase generation. Style transfer is a related field that aims at transforming an input to adhere to a specified target attribute (e.g. sentiment, formality). RL has been used to explicitly reward the output for adhering to a target attribute (Gong et al. 2019; Sancheti et al. 2020; Luo et al. 2019; Liu, Neubig, and Wieting 2020; Goyal et al. 2021). These target attributes are only a function of the output and are defined at a lexical level. In contrast, we consider a relation control which is a function of both the input and the output, and is defined at a semantic level.
Conclusion
We introduce a new task of entailment-relation-aware paraphrase generation and propose an RL-based weakly-supervised model (ERAP) that can be trained without a task-specific corpus. Additionally, an existing NLI corpus is recast to curate a small annotated dataset for this task, and we provide performance bounds for it. A novel Oracle is proposed to obtain weak supervision for relation control from existing paraphrase corpora. ERAP is shown to generate paraphrases conforming to the specified relation while maintaining paraphrase quality. Intrinsic and extrinsic experiments demonstrate the utility of entailment relation control, indicating a fruitful direction for future research.
A.5 Inference
We use beam search with a beam width of 5, with the minimum and maximum output lengths set at 5 and 40, respectively. All the models were trained and tested on a machine with 8 NVIDIA Tesla V100 SXM2 32GB GPUs.
B Recasting SICK
We use examples from the {S1aS2a, S1aS2b, S1bS2a, S1bS2b, S1aS1b} meaning-preserving transformations. In the case of bi-directionally entailing examples, we double the data by using both the original and the reversed pair and label them as ≡. In the case of entailment only from premise to hypothesis, we add the sample with the ⊏ relation and the reversed sample with the ⊐ relation.
C Additional Results
C.1 Re-ranking Baseline Results
Re-rank baselines are built over Seq2seq-U (Fine-tuned-U), resulting in Re-rank-s2s-U (Re-rank-FT-U). Seq2seq-U (or Fine-tuned-U) is used to generate multiple outputs (k ∈ {1, 5, 10, 20}) for an input using nucleus sampling (Holtzman et al. 2019), and the outputs are scored with the scorers for a desired entailment relation to obtain a combined score f(s_t, y_t) for each output. The highest-scoring output is taken as the final output. Re-ranking results for k ∈ {1, 5, 10} are reported in Table 9.

Table 10: Automatic evaluation of paraphrases from ERAP against entailment-aware (A) and unaware (U) models. R-Consistency is measured only for models conditioned (R-Test) on R at test time. Shaded rows denote upper- and lower-bound models. ⋆ denotes that only pre-training is done in the entailment-unaware setting. Bold-face denotes best in each block and * denotes best overall.
Table 2: Evaluation of the generator pre-trained on ParaBank using entailment-aware and unaware settings.

Model           BLEU↑   Diversity↑   iBLEU↑   R-Consistency↑
Pre-trained-U   14.92   76.73        7.53     −
Pre-trained-A   17.20   74.25        8.75     65.53
Seq2seq-U       30.93   59.88        17.62    −
Seq2seq-A       31.44   63.90        18.77    38.42
Re-rank-s2s-U   30.06   64.51        17.26    51.86
Re-rank-FT-U    41.44   53.67        23.96    66.85
ERAP-U⋆         19.37   69.70        9.43     66.89
ERAP-A          28.20   59.35        14.43    68.61
Fine-tuned-U    41.62   51.42        23.79    −
Fine-tuned-A    45.21   51.60        26.73*   70.24*
Copy-input      51.42   0.00         21.14    45.98

Table 3: Automatic evaluation of paraphrases from ERAP against entailment-aware (A) and unaware (U) models described in §4.1. R-Consistency is measured only for models conditioned (R-Test) on R at test time. Shaded rows denote upper- and lower-bound models. ⋆ denotes that only pre-training is done in the entailment-unaware setting. Bold-face denotes best in each block and * denotes best overall.
Model           Similarity↑   Diversity↑   Grammar↑   R-Consistency↑
Pre-trained-U   4.60          2.62         4.73       −
Pre-trained-A   4.67          2.60         4.67       48.00
Re-rank-s2s-U   2.72          3.15         3.46       24.00
Re-rank-FT-U    3.05          2.89         4.27       28.00
ERAP-U          3.98          2.85         4.10       40.00
ERAP-A          3.95          2.68         4.42       64.00
Fine-tuned-U    3.87          3.10         4.83       −
Fine-tuned-A    3.80          3.04         4.68       48.00

Table 6: Mean scores across 3 annotators are reported for Similarity (α=0.65), Diversity (α=0.55), and Grammaticality (α=0.72), and the % of correctly specified relations for R-Consistency (α=0.70). Moderate to strong inter-rater reliability is observed with Krippendorff's α.
Table 7: Original refers to the original sentence pair and its label (E (NE) denotes entails (does not entail)). The remaining columns denote various augmentation pairs and their corresponding labels according to the entailment composition rules defined in MacCartney (2009). p′ (h′), pr (hr), and pf (hf) denote an equivalent, reverse entailing, and forward entailing paraphrase, respectively.
Data            Original-Dev↑   Original-Test↑   Adversarial-Test↑
SICK NLI        95.56           93.78            83.02
+FT-U(≡)        95.15           93.68            69.72
+FT-A(≡)        95.35           94.62            77.98
+FT-A(≡, ⊐)     95.76           93.95            75.69
+ERAP-A(≡)      95.15           94.58            78.44
+ERAP-A(≡, ⊐)   95.15           93.86            69.72

Table 8: Accuracy results on downstream data-augmentation experiments for the textual entailment task. FT/ERAP refer to the Fine-tuned/proposed model used for generating augmentations. The type of augmentation used (as per Table 7) is in parentheses. U/A denote the entailment-unaware (aware) variant.
Table 9: Additional Re-rank baseline results.

Model                 BLEU↑   Diversity↑   iBLEU↑   R-Consistency↑
Pre-trained-U         17.15   71.05        8.23     −
Pre-trained-A         20.57   67.89        10.05    66.52
Seq2seq-U             30.93   59.88        17.62    −
Seq2seq-A             31.44   63.90        18.77    38.42
Re-rank-FT-U (k=1)    36.65   57.45        21.11    38.65
Re-rank-FT-U (k=5)    39.83   55.60        23.09    58.36
Re-rank-FT-U (k=10)   39.45   55.45        22.77    64.20
Re-rank-FT-U (k=20)   40.49   54.94        23.50    69.81
ERAP-U⋆               22.04   63.71        10.37    63.07
ERAP-A                28.10   54.21        13.32    73.95*
Fine-tuned-U          40.99   52.81        23.70    −
Fine-tuned-A          43.75   53.59        26.08*   65.53
Copy-input            51.42   0.00         21.14    45.98
1. If 0.3 ≤ r_s(X, Ŷ) ≤ 0.98, the score is used as-is; otherwise it is set to 0. Similarly, if r_s(X, Ŷ) > 0 after thresholding, then r_d, r_l, and p_l are computed as defined; otherwise they are set to 0.
2. It consists of 50 million high-quality English paraphrases obtained by training a Czech-English neural machine translation (NMT) system and adding lexical constraints to the NMT decoding procedure.
3. Entailment in NLI is a uni-directional relation, while Equivalence is a bi-directional entailment relation.
4. We report analogous results for ParaNMT in Appendix C.2.
5. We report results for k ∈ {1, 5, 10} in Appendix C.1.
6. Note that we retained the train, test, and development sets of the SICK NLI dataset in the recasted SICK dataset, and therefore the paraphrasing models have only seen the train set.
A Implementation and Dataset Details

A.1 Entailment Relation Oracle
We use the RoBERTa-L architecture (355M parameters) from the transformers library (Wolf et al. 2019) to build the NLI classifier o(l | ⟨p, h⟩). We lower-case all the data and train the classifier for 3 epochs (≈ 20 hrs) using the cross-entropy loss. The model with the best accuracy on the combined development splits of MNLI, SNLI, and SICK is used for the final relation prediction. We use the Adam optimizer with an initial learning rate of 2e−5, warm-up steps set at 0.06 of the total steps, a batch size of 32, and a maximum input length of 128.

A.2 Generator Pre-training
We build a transformer-based (Vaswani et al. 2017) encoder-decoder model with 6 layers, 8 attention heads, a 512-dimensional hidden state, and a 2048-dimensional feed-forward hidden state. The generator is trained for 20 epochs on 4 GPUs, and the model with the best iBLEU score on the development set is used for testing in the entailment-unaware setting. In the entailment-aware setting, the model with the best harmonic mean of iBLEU and R-Consistency on the development set is used for testing and further fine-tuning. We use the Adam optimizer with an initial learning rate of 1e−4, a batch size of 256, label smoothing and dropout rates of 0.1, a maximum input length of 40, and a minimum output length of 5. It took 2 weeks to pre-train the generators. Entailment-unaware pre-training was done on the ParaBank dataset, partitioned into train (93596425) and validation (1500) splits. We also run analogous experiments on ParaNMT, partitioned into train (5367128), validation (1000), and test (2000) sets. For entailment-aware pre-training, we filter paraphrase pairs for which the oracle-predicted entailment relation was anything other than ≡, ⊏, or ⊐ and downsample the majority relation (≡) in ParaBank to get 21797111 examples (7570204 belong to ⊏, 7366240 to ⊐, and the rest to ≡). For ParaNMT, we upsample the minority relations to get 9480295 examples (3330183 belong to ⊏, 3233640 to ⊐, and the remainder to ≡).

A.3 Generator Fine-tuning
For RL fine-tuning, we use the same training setting as the entailment-aware model mentioned above but use a learning rate of 2e−5, a batch size of 16, and fine-tune on 1 GPU. We experiment with α = 0.4, 0.5, 0.3, β = 0.4, 0.5, 0.3, and δ = 0.2, 0.3, and found 0.4, 0.4, and 0.2, respectively, to give the best results. The discounting factor γ was set at 0.99. Sequences were sampled using nucleus sampling (Holtzman et al. 2019) with probability 0.8 and temperature 1.0. We filter examples belonging to entailment relations other than ≡, ⊏, or ⊐ from recasted SICK and upsample the minority relation to fine-tune the pre-trained generator using RL.

A.4 Evaluator
We use Python's moverscore library (https://pypi.org/project/moverscore/) to compute MoverScore for semantic similarity, sacrebleu (https://pypi.org/project/sacrebleu/) to compute BLEU, and a RoBERTa-L classifier for the Adversary with the same configuration as the Entailment Relation Oracle.

C.2 Analogous Results for ParaNMT
We pre-train the generator with the 5M version of the ParaNMT dataset in entailment-aware and unaware settings and fine-tune on the recasted SICK dataset. The results are shown in Table 10. Re-rank-s2s-U results are the same as in the ParaBank table in the main paper.

D Automatic Evaluation
The BLEU score is computed against the one gold paraphrase for recasted SICK and the multiple gold references available for the ParaBank dataset.

E Human Evaluation Details
Each annotator is asked to rate paraphrase pairs from 8 models for an aspect, and 4 additional test-question annotations are obtained for quality checks.
Annotations are considered only if all the test questions are answered correctly; otherwise they are discarded. The Likert scales for the various aspects are defined as follows. Semantic similarity: 1 - completely different meaning, 2 - different meaning, 3 - slightly similar in meaning, 4 - mostly similar in meaning, 5 - exactly the same in meaning. Diversity in expression: 1 - exactly the same, 2 - minor changes (a → the), 3 - slightly different, 4 - different, 5 - very different. Grammaticality: 1 - completely nonsensical, 2 - somewhat nonsensical, 3 - major grammatical errors, 4 - minor grammatical errors, 5 - no grammatical errors. Only the paraphrased sentence is annotated. Relation consistency annotation is done by the authors because it is more technical. Each paraphrase pair is annotated with one of the following relations: equivalence, forward entailment, reverse entailment, contradiction, neutral, or none of these, and the majority relation for an example is compared against the input relation. The % of examples annotated the same as the input relation is reported in the human evaluation table. Out of the 25 randomly sampled examples that were annotated, 11, 8, and 6 samples belonged to the equivalence, forward entailment, and reverse entailment relations, respectively.
A survey of paraphrasing and textual entailment methods. I Androutsopoulos, P Malakasiotis, Journal of Artificial Intelligence Research. 38Androutsopoulos, I.; and Malakasiotis, P. 2010. A survey of paraphrasing and textual entailment methods. Journal of Artificial Intelligence Research, 38: 135-187.
Information fusion in the context of multi-document summarization. R Barzilay, K Mckeown, M Elhadad, Proceedings of the 37th annual meeting of the Association for Computational Linguistics. the 37th annual meeting of the Association for Computational LinguisticsBarzilay, R.; McKeown, K.; and Elhadad, M. 1999. Informa- tion fusion in the context of multi-document summarization. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics, 550-557.
What is a paraphrase?. R Bhagat, E Hovy, Computational Linguistics. 39Bhagat, R.; and Hovy, E. 2013. What is a paraphrase? Com- putational Linguistics, 39(3): 463-472.
LEDIR: An unsupervised algorithm for learning directionality of inference rules. R Bhagat, P Pantel, E Hovy, Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language LearningEMNLP-CoNLLBhagat, R.; Pantel, P.; and Hovy, E. 2007. LEDIR: An un- supervised algorithm for learning directionality of inference rules. In Proceedings of the 2007 Joint Conference on Em- pirical Methods in Natural Language Processing and Com- putational Natural Language Learning (EMNLP-CoNLL), 161-170.
A large annotated corpus for learning natural language inference. S R Bowman, G Angeli, C Potts, C D Manning, Conference on Empirical Methods in Natural Language Processing. 2015Association for Computational Linguistics (ACL)Bowman, S. R.; Angeli, G.; Potts, C.; and Manning, C. D. 2015a. A large annotated corpus for learning natural lan- guage inference. In Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, 632-642. As- sociation for Computational Linguistics (ACL).
Bowman, S. R.; Vilnis, L.; Vinyals, O.; Dai, A. M.; Jozefowicz, R.; and Bengio, S. 2015b. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349.
Carpuat, M.; Vyas, Y.; and Niu, X. 2017. Detecting cross-lingual semantic divergence for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, 69-79.
Controllable Paraphrase Generation with a Syntactic Exemplar. M Chen, Q Tang, S Wiseman, K Gimpel, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsChen, M.; Tang, Q.; Wiseman, S.; and Gimpel, K. 2019. Controllable Paraphrase Generation with a Syntactic Exem- plar. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, 5972-5984.
Ellsworth, M.; and Janin, A. 2007. Mutaphrase: Paraphrasing with framenet. In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing, 143-150.
Ganitkevitch, J.; Van Durme, B.; and Callison-Burch, C. 2013. PPDB: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 758-764.
Unsupervised contextual paraphrase generation using lexical control and reinforcement learning. S Garg, S Prabhu, H Misra, G Srinivasaraghavan, arXiv:2103.12777arXiv preprintGarg, S.; Prabhu, S.; Misra, H.; and Srinivasaraghavan, G. 2021. Unsupervised contextual paraphrase generation using lexical control and reinforcement learning. arXiv preprint arXiv:2103.12777.
Breaking NLI Systems with Sentences that Require Simple Lexical Inferences. M Glockner, V Shwartz, Y Goldberg, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsShort Papers2Glockner, M.; Shwartz, V.; and Goldberg, Y. 2018. Breaking NLI Systems with Sentences that Require Simple Lexical Inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 650-655.
Gong, H.; Bhat, S.; Wu, L.; Xiong, J.; and Hwu, W.-M. 2019. Reinforcement Learning Based Text Style Transfer without Parallel Training Corpus. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 3168-3180.
Generative adversarial nets. I Goodfellow, J Pouget-Abadie, M Mirza, B Xu, D Warde-Farley, S Ozair, A Courville, Y Bengio, Advances in neural information processing systems. 27Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. Advances in neural in- formation processing systems, 27.
Multi-Style Transfer with Discriminative Feedback on Disjoint Corpus. N Goyal, B V Srinivasan, N Anandhavelu, A Sancheti, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesGoyal, N.; Srinivasan, B. V.; Anandhavelu, N.; and Sancheti, A. 2021. Multi-Style Transfer with Discriminative Feed- back on Disjoint Corpus. In Proceedings of the 2021 Con- ference of the North American Chapter of the Association for Computational Linguistics: Human Language Technolo- gies, 3500-3510.
Neural Syntactic Preordering for Controlled Paraphrase Generation. T Goyal, G Durrett, Annual Meeting of the Association for Computational Linguistics. Goyal, T.; and Durrett, G. 2020. Neural Syntactic Preorder- ing for Controlled Paraphrase Generation. In Annual Meet- ing of the Association for Computational Linguistics.
A deep generative framework for paraphrase generation. A Gupta, A Agarwal, P Singh, P Rai, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence32Gupta, A.; Agarwal, A.; Singh, P.; and Rai, P. 2018. A deep generative framework for paraphrase generation. In Pro- ceedings of the AAAI Conference on Artificial Intelligence, volume 32.
A Holtzman, J Buys, L Du, M Forbes, Y Choi, arXiv:1904.09751The curious case of neural text degeneration. arXiv preprintHoltzman, A.; Buys, J.; Du, L.; Forbes, M.; and Choi, Y. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751.
Improved lexically constrained decoding for translation and monolingual rewriting. J E Hu, H Khayrallah, R Culkin, P Xia, T Chen, M Post, B Van Durme, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies1Hu, J. E.; Khayrallah, H.; Culkin, R.; Xia, P.; Chen, T.; Post, M.; and Van Durme, B. 2019a. Improved lexically con- strained decoding for translation and monolingual rewriting. In Proceedings of the 2019 Conference of the North Ameri- can Chapter of the Association for Computational Linguis- tics: Human Language Technologies, Volume 1 (Long and Short Papers), 839-850.
ParaBank: Monolingual bitext generation and sentential paraphrasing via lexically-constrained neural machine translation. J E Hu, R Rudinger, M Post, B Van Durme, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence33Hu, J. E.; Rudinger, R.; Post, M.; and Van Durme, B. 2019b. ParaBank: Monolingual bitext generation and sen- tential paraphrasing via lexically-constrained neural ma- chine translation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 6521-6528.
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks. M Iyyer, J Wieting, K Gimpel, L Zettlemoyer, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies1Long PapersIyyer, M.; Wieting, J.; Gimpel, K.; and Zettlemoyer, L. 2018. Adversarial Example Generation with Syntactically Con- trolled Paraphrase Networks. In Proceedings of the 2018 Conference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long Papers), 1875-1885.
Domain Control for Neural Machine Translation. C Kobus, J M Crego, J Senellart, Proceedings of the International Conference Recent Advances in Natural Language Processing. the International Conference Recent Advances in Natural Language Processing2017Kobus, C.; Crego, J. M.; and Senellart, J. 2017. Domain Control for Neural Machine Translation. In Proceedings of the International Conference Recent Advances in Natu- ral Language Processing, RANLP 2017, 372-378.
From word embeddings to document distances. M Kusner, Y Sun, N Kolkin, K Weinberger, PMLRInternational conference on machine learning. Kusner, M.; Sun, Y.; Kolkin, N.; and Weinberger, K. 2015. From word embeddings to document distances. In Interna- tional conference on machine learning, 957-966. PMLR.
Paraphrase Generation with Deep Reinforcement Learning. Z Li, X Jiang, L Shang, H Li, EMNLP. Li, Z.; Jiang, X.; Shang, L.; and Li, H. 2018. Paraphrase Generation with Deep Reinforcement Learning. In EMNLP.
Z Li, X Jiang, L Shang, Q Liu, arXiv:1906.09741Decomposable neural paraphrase generation. arXiv preprintLi, Z.; Jiang, X.; Shang, L.; and Liu, Q. 2019. De- composable neural paraphrase generation. arXiv preprint arXiv:1906.09741.
Unsupervised Paraphrasing by Simulated Annealing. X Liu, L Mou, F Meng, H Zhou, J Zhou, S Song, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsLiu, X.; Mou, L.; Meng, F.; Zhou, H.; Zhou, J.; and Song, S. 2020. Unsupervised Paraphrasing by Simulated Annealing. In Proceedings of the 58th Annual Meeting of the Associa- tion for Computational Linguistics, 302-312.
Y Liu, G Neubig, J Wieting, arXiv:2010.12771On Learning Text Style Transfer with Direct Rewards. arXiv preprintLiu, Y.; Neubig, G.; and Wieting, J. 2020. On Learning Text Style Transfer with Direct Rewards. arXiv preprint arXiv:2010.12771.
Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy, M Lewis, L Zettlemoyer, V Stoyanov, arXiv:1907.11692Roberta: A robustly optimized bert pretraining approach. arXiv preprintLiu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.
A dual reinforcement learning framework for unsupervised text style transfer. F Luo, P Li, J Zhou, P Yang, B Chang, Z Sui, X Sun, arXiv:1905.10060arXiv preprintLuo, F.; Li, P.; Zhou, J.; Yang, P.; Chang, B.; Sui, Z.; and Sun, X. 2019. A dual reinforcement learning frame- work for unsupervised text style transfer. arXiv preprint arXiv:1905.10060.
Natural language inference. B Maccartney, Stanford UniversityMacCartney, B. 2009. Natural language inference. Stanford University.
Generating phrasal and sentential paraphrases: A survey of data-driven methods. N Madnani, B J Dorr, Computational Linguistics. 363Madnani, N.; and Dorr, B. J. 2010. Generating phrasal and sentential paraphrases: A survey of data-driven meth- ods. Computational Linguistics, 36(3): 341-387.
A SICK cure for the evaluation of compositional distributional semantic models. M Marelli, S Menini, M Baroni, L Bentivogli, R Bernardi, R Zamparelli, Lrec. ReykjavikMarelli, M.; Menini, S.; Baroni, M.; Bentivogli, L.; Bernardi, R.; Zamparelli, R.; et al. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Lrec, 216-223. Reykjavik.
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. T Mccoy, E Pavlick, T Linzen, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsMcCoy, T.; Pavlick, E.; and Linzen, T. 2019. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natu- ral Language Inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3428-3448.
Cgmh: Constrained sentence generation by metropolishastings sampling. N Miao, H Zhou, L Mou, R Yan, L Li, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence33Miao, N.; Zhou, H.; Mou, L.; Yan, R.; and Li, L. 2019. Cgmh: Constrained sentence generation by metropolis- hastings sampling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 6834-6842.
Syntactic Data Augmentation Increases Robustness to Inference Heuristics. J Min, R T Mccoy, D Das, E Pitler, T Linzen, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsMin, J.; McCoy, R. T.; Das, D.; Pitler, E.; and Linzen, T. 2020. Syntactic Data Augmentation Increases Robustness to Inference Heuristics. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2339-2352.
Niu, T.; Yavuz, S.; Zhou, Y.; Wang, H.; Keskar, N. S.; and Xiong, C. 2020. Unsupervised paraphrase generation via dynamic blocking. arXiv preprint arXiv:2010.12885.
Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, 311-318.
Adding semantics to datadriven paraphrasing. E Pavlick, J Bos, M Nissim, C Beller, B Van Durme, C Callison-Burch, Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingLong Papers1Pavlick, E.; Bos, J.; Nissim, M.; Beller, C.; Van Durme, B.; and Callison-Burch, C. 2015. Adding semantics to data- driven paraphrasing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), 1512-1522.
Fixing translation divergences in parallel corpora for neural mt. M Q Pham, J M Crego, J Senellart, F Yvon, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingPham, M. Q.; Crego, J. M.; Senellart, J.; and Yvon, F. 2018. Fixing translation divergences in parallel corpora for neural mt. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2967-2973.
Hypothesis Only Baselines in Natural Language Inference. A Poliak, J Naradowsky, A Haldar, R Rudinger, B Van Durme, Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics. the Seventh Joint Conference on Lexical and Computational SemanticsPoliak, A.; Naradowsky, J.; Haldar, A.; Rudinger, R.; and Van Durme, B. 2018. Hypothesis Only Baselines in Natural Language Inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, 180- 191.
A Prakash, S A Hasan, K Lee, V Datla, A Qadir, J Liu, O Farri, arXiv:1610.03098Neural paraphrase generation with stacked residual LSTM networks. arXiv preprintPrakash, A.; Hasan, S. A.; Lee, K.; Datla, V.; Qadir, A.; Liu, J.; and Farri, O. 2016. Neural paraphrase genera- tion with stacked residual LSTM networks. arXiv preprint arXiv:1610.03098.
Reinforced rewards framework for text style transfer. A Sancheti, K Krishna, B V Srinivasan, A Natarajan, Advances in Information Retrieval. 12035545Sancheti, A.; Krishna, K.; Srinivasan, B. V.; and Natarajan, A. 2020. Reinforced rewards framework for text style trans- fer. Advances in Information Retrieval, 12035: 545.
Get to the point: Summarization with pointer-generator networks. A See, P J Liu, C D Manning, arXiv:1704.04368arXiv preprintSee, A.; Liu, P. J.; and Manning, C. D. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368.
Unsupervised paraphrasing via deep reinforcement learning. A Siddique, S Oymak, V Hristidis, Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data MiningSiddique, A.; Oymak, S.; and Hristidis, V. 2020. Unsuper- vised paraphrasing via deep reinforcement learning. In Pro- ceedings of the 26th ACM SIGKDD International Confer- ence on Knowledge Discovery & Data Mining, 1800-1809.
Joint learning of a dual SMT system for paraphrase generation. H Sun, M Zhou, Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics. the 50th Annual Meeting of the Association for Computational LinguisticsShort Papers2Sun, H.; and Zhou, M. 2012. Joint learning of a dual SMT system for paraphrase generation. In Proceedings of the 50th Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), 38-42.
Studies on natural logic and categorial grammar. V M S Valencia, Universiteit van AmsterdamValencia, V. M. S. 1991. Studies on natural logic and cate- gorial grammar. Universiteit van Amsterdam.
Attention is all you need. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, Ł Kaiser, I Polosukhin, Advances in neural information processing systems. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. At- tention is all you need. In Advances in neural information processing systems, 5998-6008.
ParaNMT-50M: Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations. J Wieting, K Gimpel, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsLong Papers1Wieting, J.; and Gimpel, K. 2018. ParaNMT-50M: Pushing the Limits of Paraphrastic Sentence Embeddings with Mil- lions of Machine Translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), 451-462.
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. A Williams, N Nangia, S Bowman, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesLong PapersWilliams, A.; Nangia, N.; and Bowman, S. 2018. A Broad- Coverage Challenge Corpus for Sentence Understanding through Inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technologies, Vol- ume 1 (Long Papers), 1112-1122.
Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning. R J Williams, 8Williams, R. J. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Ma- chine learning, 8(3): 229-256.
. T Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, T Rault, R Louf, M Funtowicz, Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; et al.
Seqgan: Sequence generative adversarial nets with policy gradient. L Yu, W Zhang, J Wang, Y Yu, arXiv:1910.03771Huggingface's transformers: State-of-the-art natural language processing. 31arXiv preprintProceedings of the AAAI conference on artificial intelligenceHuggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771. Yu, L.; Zhang, W.; Wang, J.; and Yu, Y. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Proceedings of the AAAI conference on artificial intelli- gence, volume 31.
MoverScore: Text Generation Evaluating with Contextualized Embeddings and Earth Mover Distance. W Zhao, M Peyrard, F Liu, Y Gao, C M Meyer, S Eger, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingZhao, W.; Peyrard, M.; Liu, F.; Gao, Y.; Meyer, C. M.; and Eger, S. 2019. MoverScore: Text Generation Evaluat- ing with Contextualized Embeddings and Earth Mover Dis- tance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 563-578.
| [] |
[
"Depression Severity Estimation from Multiple Modalities",
"Depression Severity Estimation from Multiple Modalities"
] | [
"Evgeny Stepanov evgeny.stepanov@unitn.it \nUniversity of Trento\nItaly\n",
"Stephane Lathuiliere stephane.lathuiliere@inria.fr \nINRIA\nGrenobleFrance\n",
"Shammur Absar Chowdhury shammur.chowdhury@unitn.it \nUniversity of Trento\nItaly\n",
"Arindam Ghosh arindam.ghosh@unitn.it \nUniversity of Trento\nItaly\n",
"Radu-Laurenţiu Vieriu radulaurentiu.vieriu@unitn.it \nUniversity of Trento\nItaly\n",
"Nicu Sebe niculae.sebe@unitn.it \nUniversity of Trento\nItaly\n",
"Giuseppe Riccardi giuseppe.riccardi@unitn.it \nUniversity of Trento\nItaly\n"
] | [
"University of Trento\nItaly",
"INRIA\nGrenobleFrance",
"University of Trento\nItaly",
"University of Trento\nItaly",
"University of Trento\nItaly",
"University of Trento\nItaly",
"University of Trento\nItaly"
] | [] | Depression is a major debilitating disorder which can affect people from all ages. With a continuous increase in the number of annual cases of depression, there is a need to develop automatic techniques for the detection of the presence and extent of depression. In this AVEC challenge we explore different modalities (speech, language and visual features extracted from face) to design and develop automatic methods for the detection of depression. In psychology literature, the PHQ-8 questionnaire is well established as a tool for measuring the severity of depression. In this paper we aim to automatically predict the PHQ-8 scores from features extracted from the different modalities. We show that visual features extracted from facial landmarks obtain the best performance in terms of estimating the PHQ-8 results with a mean absolute error (MAE) of 4.66 on the development set. Behavioral characteristics from speech provide an MAE of 4.73. Language features yield a slightly higher MAE of 5.17. When switching to the test set, our Turn Features derived from audio transcriptions achieve the best performance, scoring an MAE of 4.11 (corresponding to an RMSE of 4.94), which makes our system the winner of the AVEC 2017 depression sub-challenge. | 10.1109/healthcom.2018.8531119 | [
"https://arxiv.org/pdf/1711.06095v1.pdf"
] | 781,152 | 1711.06095 | 5e03a582866c97f6875f296d6d28696a5030ae42 |
Depression Severity Estimation from Multiple Modalities
Evgeny Stepanov evgeny.stepanov@unitn.it
University of Trento
Italy
Stephane Lathuiliere stephane.lathuiliere@inria.fr
INRIA
GrenobleFrance
Shammur Absar Chowdhury shammur.chowdhury@unitn.it
University of Trento
Italy
Arindam Ghosh arindam.ghosh@unitn.it
University of Trento
Italy
Radu-Laurenţiu Vieriu radulaurentiu.vieriu@unitn.it
University of Trento
Italy
Nicu Sebe niculae.sebe@unitn.it
University of Trento
Italy
Giuseppe Riccardi giuseppe.riccardi@unitn.it
University of Trento
Italy
Depression Severity Estimation from Multiple Modalities
Affective Computing, Depression Detection, Machine Learning, Speech, Natural Language Processing, Facial Expressions
Depression is a major debilitating disorder which can affect people from all ages. With a continuous increase in the number of annual cases of depression, there is a need to develop automatic techniques for the detection of the presence and extent of depression. In this AVEC challenge we explore different modalities (speech, language and visual features extracted from face) to design and develop automatic methods for the detection of depression. In psychology literature, the PHQ-8 questionnaire is well established as a tool for measuring the severity of depression. In this paper we aim to automatically predict the PHQ-8 scores from features extracted from the different modalities. We show that visual features extracted from facial landmarks obtain the best performance in terms of estimating the PHQ-8 results with a mean absolute error (MAE) of 4.66 on the development set. Behavioral characteristics from speech provide an MAE of 4.73. Language features yield a slightly higher MAE of 5.17. When switching to the test set, our Turn Features derived from audio transcriptions achieve the best performance, scoring an MAE of 4.11 (corresponding to an RMSE of 4.94), which makes our system the winner of the AVEC 2017 depression sub-challenge.
INTRODUCTION
According to the World Health Organization (WHO), depression is a major mental disorder, with about 300 million people of all ages affected worldwide. As per the Global Burden of Disease Study [17], depression is the second leading cause of disability worldwide and is on the rise. Depression affects every aspect of a person's life. People affected by depression often suffer from a certain extent of physical and social impairment. Side effects of depression include sleep disruption or insomnia, drug or alcohol abuse, and an overall loss of quality of life. If left untreated it can lead to complications such as reductions in the volume of the hippocampus [47]. Major clinical depression may even lead to suicide, and the annual burden of death due to depression is on the rise. There is growing evidence that depression can impair immune function by affecting different immunological pathways such as the central nervous system (CNS), the endocrine system, and the cardiovascular system. This can lead to the development or aggravation of co-morbidities and worsen health conditions in other diseases. Nicholson et al. [36], through a meta-analysis of 54 cohort studies with follow-up analyses of coronary heart disease (CHD), showed that patients with major depression had an increased risk of developing fatal CHD.
Diagnosis of depression still remains a challenge. Some symptoms of depression are not readily visible to others. Since depressed people often have decreased social contact, detection of the disease becomes difficult. Current diagnosis of depression is dependent on an evaluation by a psychiatrist supported by standard questionnaires to screen for depression. The Personal Health Questionnaire Depression Scale (PHQ-8) Scoring and the Hamilton Depression Rating Scale are two well established tools for the diagnosis of depression. However, these questionnaires need to be administered and interpreted by a therapist. The stigma around the disease and lack of understanding often prevents patients from seeking early psychiatric help.
The growing burden of this disease suggests that there is a need to develop technologies which can aid in automatic detection and effective care of patients suffering from depression. Affective computing focuses on the sensing, detection, and interpretation of affective states of people from interactions with computers or machines. Research on affective computing uses modalities ranging from overt signals such as speech, language and video to covert signals such as heart rate, skin temperature, galvanic skin response to understand the mental and affective states of humans. While the initial goal of affective computing research was to build better computers which could understand and empathize with humans, the same techniques have been applied to turn computers into tools for automatically identifying psychological states and mental health.
Therefore, the motivation of this study is to explore different sources of information, such as audio, video, language and behavioral cues, to predict the severity of depression. While doing so, we also investigate different feature representation and modeling techniques corresponding to each modality for improving the performance of automatic prediction.
The paper is organized as follows. In Section 2, we present the literature review and the state of the art experiments performed for the detection of depression and affective disorders from speech, language, and facial expressions. This is followed by a brief description of the multi-modal data used for the study in Section 3. An overview of the features and experimental methodology used in this study are given in Section 4 and then we provide a conclusion in Section 5.
STATE OF THE ART - SPEECH, LANGUAGE AND FACIAL EXPRESSIONS
Speech, language and facial expressions are three of the major overt signals which have been widely used for interpreting human psychological states. Automatic analysis of speech has been used for emotion recognition [50,37], stress detection [24,55], and mood state characterisation [52,8]. Natural language and speech processing of diaries and recordings have been used to detect the onset of dementia, Alzheimer's disease, and aphasia [49,19]. Analysis of facial expressions has been shown to be highly effective in tracking the progressive degeneration of cognitive health in patients suffering from schizophrenia and bipolar disorder [6].
Speech and Language
Several psychological conditions clearly manifest themselves through changes in speech patterns and language usage. Computational and automatic screening methods have the power to detect micro-changes in speech and language patterns which would otherwise have gone unnoticed. Properties such as speech rate, pause duration and usage of fillers can be indicative of cognitive decline in individuals. Changes in prosody and fluency can also be useful in detecting mental health changes of depressive patients.
Research on the diagnosis of mental health from speech and language was pioneered by the German psychiatrist Zwirner [57] in early 1930. He designed a device capable of tracking fundamental frequency for the detection of mental health of patients suffering from depression. Newman and Mather [35] in 1938 carried out similar experiments to systematically record patient's speech as they read pre-defined text and interacted with a psychiatrist. This data was analysed to show that there were distinct speech features such as speech tempo, prosodic pauses, absence of glottal rasping associated with patients suffering from affective disorders.
France et al. [18] performed multivariate feature and discriminant analyses on the speech data from 67 male and 48 female subjects to show that formant and power spectral density (PSD) based features demonstrated the highest discriminative power for classification in both genders. Pope et al. [40] investigated the relationship between anxiety, depression and speech patterns, showing that anxiety was positively correlated with speech disturbances and resistivity in speech. They also found that silent pauses were positively correlated with depression [40].
Kaya et al [28] demonstrated that feature selection techniques based on canonical correlation analysis (CCA) can be effective in detecting depression from speech signals.
Wang et al. [53] applied data mining techniques to build models which achieved a precision of 80% for detecting depression based on sentiment analysis of users on a Chinese micro-blogging platform. Rumshisky et al. [42] demonstrated through a study on 4687 patients that NLP techniques such as topic modeling can be used to improve prediction of psychiatric readmission.
Face Analysis
Facial expressions can be an extremely powerful medium used to convey human overt emotional feedback. In recent times, there has been significant progress in developing methods for facial feature tracking for the analysis of facial expressions and the detection of emotions. Studies have shown that it is possible to effectively detect the presence of pain shown on faces.
Machine learning techniques have been shown to be effective for the automatic detection of pain and mental state from facial expressions. Littlewort et al. [31] used a two-stage system to train machine learning algorithms to detect expressions of real and fake pain. Their classifier obtained an accuracy of 88% compared to an accuracy of 49% demonstrated by naive human subjects used in their study. Ambadar et al. [4] demonstrated that analysis of facial expressions can be used to classify smiles into three distinct categories: amused, polite and nervous.
One of the most popular techniques used for capturing the subtlety and fine-grained variations in facial expressions is the Facial Action Coding System (FACS) developed by Ekman and Friesen. FACS is based on the consensus judgment of human experts who observe pre-recorded facial expressions and perform manual annotation of FACS codes for each frame. These annotations, which are called action units (AUs), can belong to one of 44 different classes. FACS has been widely used in the field of psychology for measuring emotions, affect, and behavior [12,5,43]. More recently [20], FACS has been shown to be correlated with depression severity. Specifically, [20] found that severely depressed subjects are more likely to show fewer affiliative facial action units (AU12 and AU15) and more non-affiliative ones (AU14).
Head pose and eye gaze have also been shown to encode information about depression. For instance, [20] observes that an increase in the severity of depression comes with diminished head motion. Other works [3,27,45] have also investigated the link between head pose, eye gaze and depression, all providing evidence that such a link exists and is worth considering.
Combination
A combination of facial expressions, speech and other modalities can be used to enhance the recognition of human mental state. Busso et al. [7] demonstrated that both feature fusion (early fusion) and decision fusion (late fusion) of the different modalities outperformed classification based on individual features.
Dibeklioglu et al [13] combined speech, facial movement and head movement to achieve an accuracy of 88.9% for the detection of depression from clinical interviews. The accuracy of the combined signal streams exceeded the accuracy of single modalities to show that multimodal measures can be powerful for detection of depression. Alghowinem et al [2] also demonstrated similar findings in their research to show that a combination of head pose, eye gaze and paralinguistic features yielded better performance than unimodal schemes.
AVEC AUDIO VIDEO DATABASE
The 2017 Audio/Video Emotion Challenge and Workshop (AVEC 2017) "Real-life depression" provides a corpus comprising audio and video recordings and transcribed speech from the Distress Analysis Interview Corpus (DAIC) [21].
The dataset comprises recordings from 189 sessions of human-agent interaction where each subject was interviewed by a virtual psychologist (see Table 1 for the distribution of labels in the training and development sets). The audio files, transcripts and continuous facial features of the human subject are provided as part of the challenge. The Personal Health Questionnaire Depression Scale (PHQ-8) score of the subjects is also provided in the dataset. The PHQ-8 [30] is a set of 8 short multiple-choice questions which has been established as a diagnostic tool for the measurement of the severity of depressive disorders. Automatic estimation of the PHQ-8 score from different modalities such as speech and video can aid in the early detection of depression and the monitoring of depressive states. In the AVEC challenge, the goal is to look at different streams of data recorded during a session with the subject to predict the PHQ-8 scores, and to classify the subject as depressed or not.
EXPERIMENTS
In this section we describe the feature extraction and regression experiments conducted on the speech, behavioral, language and visual modalities.
Speech and Behavioral Characteristic Features
Acoustic Features
To understand the predictive characteristics of low-level acoustic feature groups to assess the depression severity of the participant, we extracted low-level descriptors (LLDs) from the participant's turns in each conversation. For this, we have extracted different groups of low-level features using openSMILE [16], motivated by their successful utilization in several paralinguistic tasks [46,1,10,9]. These sets of acoustic features were extracted with approximately 100 overlapping frames per second and with 25 milliseconds of window. The low-level features are extracted as three groups including:
• Spectral features (S) such as energy in spectral bands (0-250Hz, 0-650Hz, 250-650Hz, 1-4kHz), roll-off points (25%, 50%, 70%, 90%), centroid, flux, max-position and min-position.
• Prosodic features (P) such as pitch (Fundamental frequency f0, f0-envelope), loudness, voice-probability.
• Voice Quality features (VQ) such as jitter, shimmer, logarithmic harmonics-to-noise ratio (logHNR).
These low-level features are then projected on 24 statistical functionals, which include range, absolute position of max and min, linear and quadratic regression coefficients and their corresponding approximation errors, zero crossing rate, peaks, mean peak distance, mean peak, geometric mean of non-zero values, number of non-zeros, and moments: centroid, variance, standard deviation, skewness, and kurtosis.
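To make the projection concrete, the following is a minimal sketch of how frame-level LLD contours could be mapped onto a few of these statistical functionals; the variable names and the exact subset of functionals shown are illustrative only and do not reproduce the openSMILE configuration used in the experiments.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def project_functionals(lld_frames):
    """Map frame-level LLDs (num_frames x num_llds) onto per-session statistical functionals.

    Only a subset of the 24 functionals mentioned above is shown.
    """
    feats = []
    t = np.arange(lld_frames.shape[0])
    for contour in lld_frames.T:                      # one LLD contour at a time
        lin = np.polyfit(t, contour, 1)               # linear regression coefficients
        quad = np.polyfit(t, contour, 2)              # quadratic regression coefficients
        feats.extend([
            contour.max() - contour.min(),            # range
            np.argmax(contour) / len(contour),        # relative position of the maximum
            np.argmin(contour) / len(contour),        # relative position of the minimum
            *lin, *quad,
            np.count_nonzero(contour),                # number of non-zero values
            contour.mean(), contour.var(),            # moments: centroid, variance
            contour.std(), skew(contour), kurtosis(contour),
        ])
    return np.asarray(feats)

# usage: session_vector = project_functionals(llds)   # llds: frame-level features from openSMILE
```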
Behavioral Characteristics Features
Apart from extracting low-level features from raw speech signals, we also explored the transcription.
We crafted features that can capture information regarding the participant's non-vocal behavior (NB) along with their turn-taking behaviors (TB) and the participants' Previous Diagnosed Information (PDI) features. The non-vocal behavior (|NB| = 3) includes:
• frequency of laughter in participant's turns.
• percentage of disfluencies in the participant's turns, which might indicate hesitations.
• counts of cues that might suggest inconvenience like whistling, mumbling, whispering or taking deep breaths among others.
The features used to describe the turn-taking behaviors (|TB| = 6) are the first and third quartiles and the median duration of the response time (in seconds) of the participants. Similarly, we also extracted the same statistics for the within-speaker silence (pause). The response time represents how long the participants took to respond to the previous turn of the agent.
The PDI feature set (|PDI| = 3) contained numerical representations of the responses of the participants to queries such as having any Post-traumatic Stress Disorder (ptsd), depression (dep), or any military background (m_b). Each individual feature is encoded into three values (-1, 0, 1), where -1 represents that the query is not present in the session, 0 represents a disconfirmation (e.g., ptsd=0 means the participant responded "no" to the previous turn's query) and 1 represents confirmation of the query.
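As an illustration of the turn-taking statistics, the following sketch derives response-time and pause quartiles from time-aligned turns; the (speaker, start, stop) tuple format and speaker labels are assumptions made for this example, not the actual DAIC transcript schema.

```python
import numpy as np

def turn_taking_features(turns):
    """Compute the TB features from a time-ordered list of (speaker, start, stop) turns."""
    response_times, pauses = [], []
    for prev, cur in zip(turns, turns[1:]):
        gap = cur[1] - prev[2]                            # silence between consecutive turns (s)
        if prev[0] == "agent" and cur[0] == "participant":
            response_times.append(gap)                    # participant responding to the agent
        elif prev[0] == "participant" and cur[0] == "participant":
            pauses.append(gap)                            # within-speaker silence (pause)
    quartiles = lambda x: np.percentile(x, [25, 50, 75]) if x else np.zeros(3)
    return np.concatenate([quartiles(response_times), quartiles(pauses)])   # |TB| = 6
```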
Methodology and Results
For the regression task, we studied the performance of acoustic and behavioral characteristic features. For modeling individual acoustic feature groups and their linear combination we used a support vector machine for regression, implemented in Weka [23], using a Radial Basis Function (RBF) kernel with γ = 0.01 and C = 1.0.
As for the linear combination of different acoustic feature groups, we first merged all the feature vectors linearly to obtain vector M, as shown in Equation 1:

M = P ∪ S ∪ VQ = {p_1, ..., p_m, s_1, ..., s_n, v_1, ..., v_l}    (1)

where the feature vectors P, S and VQ stand for the prosody, spectral and voice quality features, as presented in Equations 2-4:

P = {p_1, p_2, ..., p_m}    (2)
S = {s_1, s_2, ..., s_n}    (3)
VQ = {v_1, v_2, ..., v_l}    (4)
From the merged feature vector we selected a relevant feature subset (Fs-M) using the training set only. For the automatic feature selection, we used the Relief feature selection technique [29,41], successfully used in paralinguistic tasks [1,11]. The technique calculates the weight of each feature based on the nearest k instances (k = 20 was used for this study) of the same and different classes in order to rank the features. Then, using a threshold th = 0.02, we selected the top 20 features to use for the regression task. These parameters (th = 0.02, 0, -0.02 and k = 5, 10, 15, 20) were tuned using 3-fold cross validation on the training set. As the predictor for the behavioral characteristic feature group, we used the Reduced Error Pruning Tree ("REPT") implemented in Weka [23], which is a fast regression tree learner that uses variance reduction information and prunes the tree using reduced error pruning.
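The regressors above were trained in Weka; purely as an illustration, a roughly equivalent pipeline for the merged acoustic features could look as follows in scikit-learn (the data-loading variables are assumed to exist and the feature selection step is omitted):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, mean_squared_error

# X_prosody, X_spectral, X_vq and the corresponding *_dev arrays hold per-session
# functional vectors; y_train / y_dev hold PHQ-8 scores (assumed to be loaded).
X_train = np.concatenate([X_prosody, X_spectral, X_vq], axis=1)             # merged vector M (Eq. 1)
X_dev = np.concatenate([X_prosody_dev, X_spectral_dev, X_vq_dev], axis=1)

svr = SVR(kernel="rbf", gamma=0.01, C=1.0)   # analogue of the Weka RBF-kernel SVR setup
svr.fit(X_train, y_train)

pred = svr.predict(X_dev)
mae = mean_absolute_error(y_dev, pred)
rmse = np.sqrt(mean_squared_error(y_dev, pred))
print(f"MAE={mae:.2f}, RMSE={rmse:.2f}")
```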
The results are presented in Table 2 for the individual feature sets and their combinations. The results indicate that spectral features are a better predictor of the PHQ score than all other settings presented in the table. It is also observed that feature selection on the merged vector performed better than all other sets except spectral, and outperforms the baseline (MAE = 5.36 and RMSE = 6.74 on the same development set). The selected features include features from the spectral group (75%), the prosodic group (20%) and the voice quality group (5%).
It is also observed that, using behavioral characteristic features, we obtained a decrease of both MAE and RMSE by a magnitude of 0.63 and 1.20, respectively, compared to all the results reported in the AVEC2017 baseline manuscript. Further analysis using the Relief feature ranking technique indicated that the PDI features, especially dep and ptsd, are the top-ranked features, followed by the median of the response time, the quartiles of the within-speaker silence duration and the laughter frequency.
Language
In addition to the speech-based features, we explore text-based representations to predict depression severity estimates. The widely used representation of a document in NLP is bag-of-words, where a document is represented by its word occurrences, ignoring the order in which they appear. We experiment both with binary (BOOL) and tf-idf (TFIDF) weighted representations. While the binary representation encodes the words that are present in the document regardless of their frequency, the tf-idf weighted representation considers both the frequency of the term (tf) in a document and the inverse document frequency (idf), which lowers the weight of very frequent terms in a collection and increases the weight of rare terms, as defined in Equations 5-6:
tf-idf(t, d) = tf(t, d) · idf(t)    (5)

idf(t) = log( n_d / (df(d, t) + 1) )    (6)
where tf(t, d) is the term frequency, n_d is the total number of documents, and df(d, t) is the number of documents containing the term. Besides the bag-of-words representations, we also experiment with a word embedding representation (WE) [34], where pre-trained per-word embedding vectors are averaged over a document. We make use of the SKIPGRAM embedding vectors pre-trained on GoogleNews with an embedding dimension of 300 and a window of 10.
Since the provided speech transcripts are of human-machine conversations, we first extract human turns and convert them into bag-of-words representation. The transcripts contain annotations for the speech phenomena such as laughter, sigh, etc., which were treated as any other token. Thus, the representation implicitly encodes the presence of these phenomena in the conversation; and also its frequency in the case of tf-idf based representations. For the word embedding representation, however, this is not the case, as there are no pre-trained vectors for these.
Table 4: Root mean square error (RMSE) and mean absolute error (MAE) for depression severity regression using lexical features and Support Vector Regression with a linear kernel on the development set, for the mean baseline (BL: mean), the binary (BOOL) and tf-idf weighted (TFIDF) bag-of-words representations, and averaged word embedding vectors (WE). We also provide the audio and audio-video feature-based baselines (BL: Audio and BL: Audio-Video).

The algorithm of our choice for text-based representations is Support Vector Regression (SVR) with a linear kernel, implemented in scikit-learn [38]. The regression results for each of the document representations are given in Table 4 in terms of RMSE and MAE. We also provide a mean baseline (BL: mean) and the audio and audio-video feature-based baselines. As can be observed, the only representation that outperforms all the baselines is the binary bag-of-words representation, which yields RMSE = 6.31 and MAE = 5.17.
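As a sketch of this setup (only the linear-kernel SVR, scikit-learn, and the binary/tf-idf bag-of-words representations are stated in the text; the document construction and variable names below are illustrative assumptions):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.svm import SVR

# train_docs / dev_docs: one string per session with the concatenated participant turns
# (assumed preprocessing); y_train: the corresponding PHQ-8 scores.
bool_vectorizer = CountVectorizer(binary=True)            # BOOL representation
X_train = bool_vectorizer.fit_transform(train_docs)
X_dev = bool_vectorizer.transform(dev_docs)

# The TFIDF variant would instead use TfidfVectorizer() in place of the binary vectorizer.

svr = SVR(kernel="linear")
svr.fit(X_train, y_train)
phq8_pred = svr.predict(X_dev)
```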
Visual Features
Inspired by [51] and the success reported in [54], we use the 68 3D facial keypoints and compute geometric features as follows: for every facial representation, we first remove the 3D bias (equal to a translation in the Euclidean space, by subtracting the mean value in 3D), then we normalize the resulting representation so that the average distance to the center (origin) is equal to 1. Finally, we compute Euclidean distances between all possible pairs of 3D normalized points and add them to the normalized representation. This results in a feature vector of size 2482. We then reduce this dimensionality by applying PCA and keeping over 99.5% of the variance, resulting in a feature vector of size 33.
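A minimal sketch of this geometric feature computation, assuming the per-frame landmarks are available as a 68x3 array (variable names are illustrative; in practice the PCA projection should be estimated on the training partition only):

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA

def geometric_features(landmarks):
    """Normalized 3D landmarks plus all pairwise distances: 68*3 + C(68,2) = 2482 values."""
    pts = landmarks - landmarks.mean(axis=0)                # remove the 3D bias (translation)
    pts = pts / np.linalg.norm(pts, axis=1).mean()          # average distance to origin = 1
    return np.concatenate([pts.ravel(), pdist(pts)])        # 204 + 2278 = 2482

# frames: array of shape (num_frames, 68, 3) with the provided 3D keypoints (assumed loaded)
feats = np.stack([geometric_features(f) for f in frames])
pca = PCA(n_components=0.995)                               # keep >99.5% of the variance
feats_reduced = pca.fit_transform(feats)                    # ~33-dimensional representation
```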
Since we are dealing with video sequences, we propose to regress depression using models naturally designed for temporal data. Specifically, we propose the use of LSTMs [25] for this task. LSTMs have emerged as an effective and scalable model for several learning problems related to sequential data, such as handwriting recognition [39,14], generation of handwritten characters [22], language modeling and translation [56,32], audio [33] and video [15] signal analysis, acoustic speech modeling [44] and others. They have proved effective at capturing long-term temporal dependencies without suffering from the optimization hurdles that plague simple recurrent neural networks (RNNs).
In order to build our training set, we apply a sliding window approach to the video sequences, using windows of size W, overlapped by O samples. We use the success flag provided by the dataset creators, which models the tracking confidence for each frame. We adopt a 0-tolerance strategy and discard all windows for which at least one failed tracking is present. We do this to exclude the risk of introducing artifacts into the feature space that the model might misleadingly exploit for solving the task. We set the values for W and O empirically to 60 and 30, respectively. We downsample the data to 1 second, which makes our windows 1 minute long, with an overlap of 30 seconds. During testing, we apply the same windowing scheme and average the window-level predictions over the length of the test sequence.
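A sketch of this window construction, assuming the per-frame features and success flags have already been downsampled to 1 Hz (array handling and names are illustrative):

```python
import numpy as np

def make_windows(features, success, W=60, O=30):
    """Cut a (num_seconds, feat_dim) sequence into overlapping windows of W samples.

    Windows containing any failed tracking (success == 0) are discarded (0-tolerance).
    """
    step = W - O
    windows = []
    for start in range(0, len(features) - W + 1, step):
        if success[start:start + W].all():                  # keep fully tracked windows only
            windows.append(features[start:start + W])
    if not windows:
        return np.empty((0, W, features.shape[1]))
    return np.stack(windows)
```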
Next, we train a double-layered LSTM model on regressing depression at window level on the training set. The model is composed of two stacked layers of size 16, followed by a Dense layer with a linear activation function. We use dropout [48] equal to 0.5 to control overfitting and batch normalization [26] to limit internal covariate shift. As loss function, we use the mean squared error. In order to validate our LSTM model, we perform a leave-one-sequence-out cross-validation scheme on the training set. After 100 epochs, our models achieve an MAE of 4.97 and an RMSE of 6.26, which we find encouraging. We further retrain the model on the full training set and monitor the performance on the development partition. Figure 1 shows the learning plots of the loss function during training for both training (black) and validation (red) sets. We observe a monotonic decrease of the loss function on the training set, while on the validation set the behavior is a typical decrease, followed by an increase of the same loss. We use the validation set to early-stop the training, thus resulting in a model (lstm_opt) with the best performance on this set.
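A minimal Keras-style sketch of such a network; the exact placement of the dropout and batch normalization layers, the optimizer, and the input shape are our assumptions, since the text only specifies two stacked LSTM layers of size 16, a linear Dense output, dropout of 0.5, batch normalization, and an MSE loss:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout, BatchNormalization

model = Sequential([
    LSTM(16, return_sequences=True, input_shape=(60, 33)),  # 60 one-second steps, 33-dim features
    BatchNormalization(),
    Dropout(0.5),
    LSTM(16),
    BatchNormalization(),
    Dropout(0.5),
    Dense(1, activation="linear"),                           # window-level PHQ-8 estimate
])
model.compile(optimizer="adam", loss="mean_squared_error")
model.summary()
```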
Figure 1: LSTM learning curves: training set (black) and development set (red). We note the existence of a turning point in the validation loss, typically used as a good compromise between underfitting and overfitting.

Following the baseline manuscript, we report in Table 5 as performance measures the RMSE and MAE of lstm_opt on both the training and development sets. In addition to the requested quantities, we also report the explained variance regression score (EVS), defined as:

evs(y, ŷ) = 1 − Var(y − ŷ) / Var(y)    (7)

where Var represents the statistical variance and ŷ the model predictions. EVS measures the degree to which a model (in our case lstm_opt) accounts for the variance of a given set of labels through the predictions it makes. The upper bound of EVS is 1 and corresponds to perfect modeling.
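For reference, the score can be computed directly, or through scikit-learn's equivalent explained_variance_score; a small sketch with illustrative values:

```python
import numpy as np
from sklearn.metrics import explained_variance_score

def evs(y_true, y_pred):
    # EVS = 1 - Var(y_true - y_pred) / Var(y_true), as in Equation 7
    return 1.0 - np.var(y_true - y_pred) / np.var(y_true)

y_true = np.array([10.0, 4.0, 7.0, 15.0])   # illustrative ground-truth PHQ-8 scores
y_pred = np.array([9.0, 5.0, 8.0, 12.0])    # illustrative predictions
assert np.isclose(evs(y_true, y_pred), explained_variance_score(y_true, y_pred))
```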
As can be observed from Table 5, our LSTM model fits the training set well and manages to score a promising MAE on the development partition, better than all values reported in the AVEC2017 baseline manuscript as well as in last year's winning paper [54].
Results on the test set
We submitted four trials for evaluation on the held-out test set. Results are depicted in Table 6. The behavioral characteristic features extracted from audio transcriptions achieve the lowest errors on the test partition, which is unsurprising considering the promising cross-validation results obtained on the development set (i.e., RMSE of 5.54 and MAE of 4.73). What is slightly surprising, though, is the performance of the visual features. Despite achieving an encouraging MAE on the development set, our LSTM model failed to generalize well enough to unseen data.
CONCLUSIONS
In this paper we address the depression sub-challenge problem formulated in AVEC2017, i.e., regressing PHQ-8 depression scores from multi-modal data. We process the different modalities (audio, language, visual) accompanying the corpus and develop regression systems for each separately. In the audio domain, we find the spectral features to be most suited for this task, achieving an MAE score of 4.96 on the development set (RMSE = 6.32), while lexical features score no lower than 5.17 (MAE) and 6.31 (RMSE). Despite being the worst performing modality in the baseline manuscript, visual features achieve the smallest errors on the development set in our experiments. Using a sliding window approach and temporal modeling, we obtain an MAE of 4.66 (RMSE = 6.09). We also observed that behavioral cues extracted from transcripts achieve smaller errors (MAE = 4.73, RMSE = 5.54) compared to audio and language features and are good predictors of the depression severity scores. When studied further, we found that previously diagnosed information cues and the participants' response time to the agent, among others, are among the most informative features for predicting the PHQ-8 depression scores. This is indeed confirmed by the results obtained on the test set, where behavioral cues scored the smallest MAE among all feature sets.
In this paper, we have studied each modality individually to understand its strength in estimating depression severity. In future work, we plan to investigate how the individual modalities can be combined to improve the overall performance.
Table 1: Distribution of the AVEC data set into training and development sets for depressed (D) and non-depressed (ND) classes, and overall (ALL).

              ND         D          ALL
Training      77 (72%)   30 (28%)   107
Development   23 (66%)   13 (34%)   35
Table 2: Results of individual acoustic feature groups, with linearly merged feature groups and with Relief feature selection, for depression severity estimation on the development set. Results are tuned using 3-fold cross validation on the training set. |F| represents the feature set dimension.

Feature set, F            |F|    RMSE   MAE
Spectral                  864    6.32   4.96
Voice Quality             288    7.05   5.70
Prosody                   288    7.10   5.75
Merged                    1440   6.43   5.40
Merged+Feat.Selection     20     6.70   5.20
Table 3: Results for depression severity estimation using behavioral characteristic features on the development set. |F| represents the feature set dimension.

Feature set, F              |F|   RMSE   MAE
Behavioral characteristic   12    5.54   4.73
Table 5: Performance measures obtained using our LSTM model on the training set as well as on the development set. We also report the explained variance regression score (EVS), which measures the degree to which the model "explains" the variation of the ground truth labels using the predictions (see Equation 7 for a formal definition).

            RMSE   MAE    EVS
train set   3.17   2.32   0.66
dev set     6.09   4.66   0.15
Table 6: Results on the test set.

                             RMSE   MAE
Spectral features (speech)   6.63   5.08
Turn features (speech)       4.94   4.11
Text features                5.83   4.88
Video features               6.72   5.36
Comparative study of speaker personality traits recognition in conversational and broadcast news speech. F Alam, G Riccardi, INTERSPEECH. F. Alam and G. Riccardi. Comparative study of speaker personality traits recognition in conversational and broadcast news speech. In INTERSPEECH, pages 2851-2855, 2013.
Multimodal depression detection: fusion analysis of paralinguistic, head pose and eye gaze behaviors. S Alghowinem, R Goecke, M Wagner, J Epps, M Hyett, G Parker, M Breakspear, IEEE Transactions on Affective Computing. S. Alghowinem, R. Goecke, M. Wagner, J. Epps, M. Hyett, G. Parker, and M. Breakspear. Multimodal depression detection: fusion analysis of paralinguistic, head pose and eye gaze behaviors. IEEE Transactions on Affective Computing, 2016.
Head pose and movement analysis as an indicator of depression. S Alghowinem, R Goecke, M Wagner, G Parkerx, M Breakspear, Affective Computing and Intelligent Interaction (ACII), 2013 Humaine Association Conference on. IEEES. Alghowinem, R. Goecke, M. Wagner, G. Parkerx, and M. Breakspear. Head pose and movement analysis as an indicator of depression. In Affective Computing and Intelligent Interaction (ACII), 2013 Humaine Association Conference on, pages 283-288. IEEE, 2013.
All smiles are not created equal: Morphology and timing of smiles perceived as amused, polite, and embarrassed/nervous. Z Ambadar, J F Cohn, L I Reed, Journal of nonverbal behavior. 331Z. Ambadar, J. F. Cohn, and L. I. Reed. All smiles are not created equal: Morphology and timing of smiles perceived as amused, polite, and embarrassed/nervous. Journal of nonverbal behavior, 33(1):17-34, 2009.
Fully automatic facial action recognition in spontaneous behavior. M S Bartlett, G Littlewort, M Frank, C Lainscsek, I Fasel, J Movellan, Automatic Face and Gesture Recognition, 2006. FGR 2006. 7th International Conference on. IEEEM. S. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, and J. Movellan. Fully automatic facial action recognition in spontaneous behavior. In Automatic Face and Gesture Recognition, 2006. FGR 2006. 7th International Conference on, pages 223-230. IEEE, 2006.
Facial expression in patients with bipolar disorder and schizophrenia in response to emotional stimuli: a partially shared cognitive and social deficit of the two disorders. G Bersani, E Polli, G Valeriani, D Zullo, C Melcore, E Capra, A Quartini, P Marino, A Minichino, L Bernabei, Neuropsychiatric disease and treatment. 91137G. Bersani, E. Polli, G. Valeriani, D. Zullo, C. Melcore, E. Capra, A. Quartini, P. Marino, A. Minichino, L. Bernabei, et al. Facial expression in patients with bipolar disorder and schizophrenia in response to emotional stimuli: a partially shared cognitive and social deficit of the two disorders. Neuropsychiatric disease and treatment, 9:1137, 2013.
Analysis of emotion recognition using facial expressions, speech and multimodal information. C Busso, Z Deng, S Yildirim, M Bulut, C M Lee, A Kazemzadeh, S Lee, U Neumann, S Narayanan, Proceedings of the 6th international conference on Multimodal interfaces. the 6th international conference on Multimodal interfacesACMC. Busso, Z. Deng, S. Yildirim, M. Bulut, C. M. Lee, A. Kazemzadeh, S. Lee, U. Neumann, and S. Narayanan. Analysis of emotion recognition using facial expressions, speech and multimodal information. In Proceedings of the 6th international conference on Multimodal interfaces, pages 205-211. ACM, 2004.
How's my mood and stress?: an efficient speech analysis library for unobtrusive monitoring on mobile phones. K Chang, D Fisher, J Canny, B Hartmann, Proceedings of the 6th International Conference on Body Area Networks. the 6th International Conference on Body Area NetworksK.-h. Chang, D. Fisher, J. Canny, and B. Hartmann. How's my mood and stress?: an efficient speech analysis library for unobtrusive monitoring on mobile phones. In Proceedings of the 6th International Conference on Body Area Networks, pages 71-77.
Annotating and categorizing competition in overlap speech. S A Chowdhury, M Danieli, G Riccardi, Proc. of ICASSP. of ICASSPIEEES. A. Chowdhury, M. Danieli, and G. Riccardi. Annotating and categorizing competition in overlap speech. In Proc. of ICASSP. IEEE, 2015.
A deep learning approach to modeling competitiveness in spoken conversation. S A Chowdhury, G Riccardi, Proc. of International Conference on Acoustics, Speech and Signal Processing. of International Conference on Acoustics, Speech and Signal essingIEEEICASSPS. A. Chowdhury and G. Riccardi. A deep learning approach to modeling competitiveness in spoken conversation. In Proc. of International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017.
Unsupervised recognition and clustering of speech overlaps in spoken conversations. S A Chowdhury, G Riccardi, F Alam, Proc. of Workshop on Speech, Language and Audio in Multimedia. of Workshop on Speech, Language and Audio in MultimediaS. A. Chowdhury, G. Riccardi, and F. Alam. Unsupervised recognition and clustering of speech overlaps in spoken conversations. In Proc. of Workshop on Speech, Language and Audio in Multimedia, 2014.
Emote aloud during learning with autotutor: Applying the facial action coding system to cognitive-affective states during learning. S D Craig, S Mello, A Witherspoon, A Graesser, Cognition and Emotion. 225S. D. Craig, S. D'Mello, A. Witherspoon, and A. Graesser. Emote aloud during learning with autotutor: Applying the facial action coding system to cognitive-affective states during learning. Cognition and Emotion, 22(5):777-788, 2008.
Multimodal detection of depression in clinical interviews. H Dibeklioglu, Z Hammal, Y Yang, J F Cohn, Proceedings of the 2015 ACM on International Conference on Multimodal Interaction. the 2015 ACM on International Conference on Multimodal InteractionACMH. Dibeklioglu, Z. Hammal, Y. Yang, and J. F. Cohn. Multimodal detection of depression in clinical interviews. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pages 307-310. ACM, 2015.
Fast and robust training of recurrent neural networks for offline handwriting recognition. P Doetsch, M Kozielski, H Ney, Frontiers in Handwriting Recognition (ICFHR). IEEE14th International Conference onP. Doetsch, M. Kozielski, and H. Ney. Fast and robust training of recurrent neural networks for offline handwriting recognition. In Frontiers in Handwriting Recognition (ICFHR), 2014 14th International Conference on, pages 279-284. IEEE, 2014.
Long-term recurrent convolutional networks for visual recognition and description. J Donahue, L Hendricks, S Guadarrama, M Rohrbach, S Venugopalan, K Saenko, T Darrell, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionJ. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2625-2634, 2015.
Recent developments in opensmile, the munich open-source multimedia feature extractor. F Eyben, F Weninger, F Gross, B Schuller, Proc. of the 21st ACM international conference on Multimedia. of the 21st ACM international conference on MultimediaACMF. Eyben, F. Weninger, F. Gross, and B. Schuller. Recent developments in opensmile, the munich open-source multimedia feature extractor. In Proc. of the 21st ACM international conference on Multimedia, pages 835-838. ACM, 2013.
. A J Ferrari, F J Charlson, R E Norman, S B , A. J. Ferrari, F. J. Charlson, R. E. Norman, S. B.
Burden of depressive disorders by country, sex, age, and year: findings from the global burden of disease study. G Patten, C J Freedman, T Murray, H A Vos, Whiteford, PLoS medicine. 1011Patten, G. Freedman, C. J. Murray, T. Vos, and H. A. Whiteford. Burden of depressive disorders by country, sex, age, and year: findings from the global burden of disease study 2010. PLoS medicine, 10(11), 2013.
Acoustical properties of speech as indicators of depression and suicidal risk. D J France, R G Shiavi, S Silverman, M Silverman, M Wilkes, IEEE transactions on Biomedical Engineering. 477D. J. France, R. G. Shiavi, S. Silverman, M. Silverman, and M. Wilkes. Acoustical properties of speech as indicators of depression and suicidal risk. IEEE transactions on Biomedical Engineering, 47(7):829-837, 2000.
Automatic speech recognition in the diagnosis of primary progressive aphasia. K Fraser, F Rudzicz, N Graham, E Rochon, Proceedings of the Fourth Workshop on Speech and Language Processing for Assistive Technologies. the Fourth Workshop on Speech and Language Processing for Assistive TechnologiesK. Fraser, F. Rudzicz, N. Graham, and E. Rochon. Automatic speech recognition in the diagnosis of primary progressive aphasia. In Proceedings of the Fourth Workshop on Speech and Language Processing for Assistive Technologies, pages 47-54, 2013.
Nonverbal social withdrawal in depression: Evidence from manual and automatic analyses. J M Girard, J F Cohn, M H Mahoor, S M Mavadati, Z Hammal, D P Rosenwald, Image and vision computing. 3210J. M. Girard, J. F. Cohn, M. H. Mahoor, S. M. Mavadati, Z. Hammal, and D. P. Rosenwald. Nonverbal social withdrawal in depression: Evidence from manual and automatic analyses. Image and vision computing, 32(10):641-647, 2014.
The distress analysis interview corpus of human and computer interviews. J Gratch, R Artstein, G M Lucas, G Stratou, S Scherer, A Nazarian, R Wood, J Boberg, D Devault, S Marsella, LREC. J. Gratch, R. Artstein, G. M. Lucas, G. Stratou, S. Scherer, A. Nazarian, R. Wood, J. Boberg, D. DeVault, S. Marsella, et al. The distress analysis interview corpus of human and computer interviews. In LREC, pages 3123-3128, 2014.
Speech recognition with deep recurrent neural networks. A Graves, A Mohamed, G Hinton, Acoustics, speech and signal processing (icassp), 2013 ieee international conference on. IEEEA. Graves, A.-r. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, speech and signal processing (icassp), 2013 ieee international conference on, pages 6645-6649. IEEE, 2013.
The weka data mining software: an update. M Hall, E Frank, G Holmes, B Pfahringer, P Reutemann, I H Witten, ACM SIGKDD Explorations Newsletter. 111M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten. The weka data mining software: an update. ACM SIGKDD Explorations Newsletter, 11(1):10-18, 2009.
Analysis and compensation of speech under stress and noise for environmental robustness in speech recognition. J H Hansen, Speech communication. 201-2J. H. Hansen. Analysis and compensation of speech under stress and noise for environmental robustness in speech recognition. Speech communication, 20(1-2):151-173, 1996.
Long short-term memory. S Hochreiter, J Schmidhuber, Neural computation. 98S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.
Batch normalization: Accelerating deep network training by reducing internal covariate shift. S Ioffe, C Szegedy, International Conference on Machine Learning. S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448-456, 2015.
Can body expressions contribute to automatic depression analysis?. J Joshi, R Goecke, G Parker, M Breakspear, 10th IEEE International Conference and Workshops on. IEEEAutomatic Face and Gesture Recognition (FG)J. Joshi, R. Goecke, G. Parker, and M. Breakspear. Can body expressions contribute to automatic depression analysis? In Automatic Face and Gesture Recognition (FG), 2013 10th IEEE International Conference and Workshops on, pages 1-7. IEEE, 2013.
Cca based feature selection with application to continuous depression recognition from acoustic speech features. H Kaya, F Eyben, A A Salah, B Schuller, Acoustics, Speech and Signal Processing. IEEE2014 IEEE International Conference onH. Kaya, F. Eyben, A. A. Salah, and B. Schuller. Cca based feature selection with application to continuous depression recognition from acoustic speech features. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pages 3729-3733. IEEE, 2014.
Estimating attributes: analysis and extensions of relief. I Kononenko, European conference on machine learning. SpringerI. Kononenko. Estimating attributes: analysis and extensions of relief. In European conference on machine learning, pages 171-182. Springer, 1994.
The phq-8 as a measure of current depression in the general population. K Kroenke, T W Strine, R L Spitzer, J B Williams, J T Berry, A H Mokdad, Journal of affective disorders. 1141K. Kroenke, T. W. Strine, R. L. Spitzer, J. B. Williams, J. T. Berry, and A. H. Mokdad. The phq-8 as a measure of current depression in the general population. Journal of affective disorders, 114(1):163-173, 2009.
Automatic coding of facial expressions displayed during posed and genuine pain. G C Littlewort, M S Bartlett, K Lee, Image and Vision Computing. 2712G. C. Littlewort, M. S. Bartlett, and K. Lee. Automatic coding of facial expressions displayed during posed and genuine pain. Image and Vision Computing, 27(12):1797-1803, 2009.
Addressing the rare word problem in neural machine translation. M.-T Luong, I Sutskever, Q V Le, O Vinyals, W Zaremba, arXiv:1410.8206arXiv preprintM.-T. Luong, I. Sutskever, Q. V. Le, O. Vinyals, and W. Zaremba. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206, 2014.
Multi-resolution linear prediction based features for audio onset detection with bidirectional lstm neural networks. E Marchi, G Ferroni, F Eyben, L Gabrielli, S Squartini, B Schuller, 2014 IEEE International Conference on. IEEEE. Marchi, G. Ferroni, F. Eyben, L. Gabrielli, S. Squartini, and B. Schuller. Multi-resolution linear prediction based features for audio onset detection with bidirectional lstm neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pages 2164-2168. IEEE, 2014.
Efficient estimation of word representations in vector space. T Mikolov, K Chen, G Corrado, J Dean, arXiv:1301.3781arXiv preprintT. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
Analysis of spoken language of patients with affective disorders. S Newman, V G Mather, American journal of psychiatry. 944S. Newman and V. G. Mather. Analysis of spoken language of patients with affective disorders. American journal of psychiatry, 94(4):913-942, 1938.
Depression as an aetiologic and prognostic factor in coronary heart disease: a meta-analysis of 6362 events among 146 538 participants in 54 observational studies. A Nicholson, H Kuper, H Hemingway, European heart journal. 2723A. Nicholson, H. Kuper, and H. Hemingway. Depression as an aetiologic and prognostic factor in coronary heart disease: a meta-analysis of 6362 events among 146 538 participants in 54 observational studies. European heart journal, 27(23):2763-2774, 2006.
Emotion recognition from speech with gaussian mixture models & via boosted gmm. P Patel, A Chaudhari, R Kale, M Pund, International Journal of Research In Science & Engineering. 3P. Patel, A. Chaudhari, R. Kale, and M. Pund. Emotion recognition from speech with gaussian mixture models & via boosted gmm. International Journal of Research In Science & Engineering, 3, 2017.
Scikit-learn: Machine learning in Python. F Pedregosa, G Varoquaux, A Gramfort, V Michel, B Thirion, O Grisel, M Blondel, P Prettenhofer, R Weiss, V Dubourg, J Vanderplas, A Passos, D Cournapeau, M Brucher, M Perrot, E Duchesnay, Journal of Machine Learning Research. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 2011.
| [] |
[
"Modulating Bottom-Up and Top-Down Visual Processing via Language-Conditional Filters",
"Modulating Bottom-Up and Top-Down Visual Processing via Language-Conditional Filters"
] | [
"Ilker Kesen \nKUIS AI Center\nKoç University\n\n\nComputer Engineering Department\nKoç University\n\n",
"Ozan Arkan Can \nAmazon Search\n\n",
"Erkut Erdem \nKUIS AI Center\nKoç University\n\n\nComputer Engineering Department\nHacettepe University\n\n",
"Aykut Erdem \nKUIS AI Center\nKoç University\n\n\nComputer Engineering Department\nKoç University\n\n",
"Deniz Yuret \nKUIS AI Center\nKoç University\n\n\nComputer Engineering Department\nKoç University\n\n"
] | [
"KUIS AI Center\nKoç University\n",
"Computer Engineering Department\nKoç University\n",
"Amazon Search\n",
"KUIS AI Center\nKoç University\n",
"Computer Engineering Department\nHacettepe University\n",
"KUIS AI Center\nKoç University\n",
"Computer Engineering Department\nKoç University\n",
"KUIS AI Center\nKoç University\n",
"Computer Engineering Department\nKoç University\n"
] | [] | How to best integrate linguistic and perceptual processing in multi-modal tasks that involve language and vision is an important open problem. In this work, we argue that the common practice of using language in a topdown manner, to direct visual attention over high-level visual features, may not be optimal. We hypothesize that the use of language to also condition the bottom-up processing from pixels to high-level features can provide benefits to the overall performance. To support our claim, we propose a U-Net-based model and perform experiments on two language-vision dense-prediction tasks: referring expression segmentation and language-guided image colorization. We compare results where either one or both of the top-down and bottom-up visual branches are conditioned on language. Our experiments reveal that using language to control the filters for bottom-up visual processing in addition to top-down attention leads to better results on both tasks and achieves competitive performance. Our linguistic analysis suggests that bottom-up conditioning improves segmentation of objects especially when input text refers to low-level visual concepts. Code is available at https://github.com/ilkerkesen/bvpr. | 10.1109/cvprw56347.2022.00507 | [
"https://arxiv.org/pdf/2003.12739v3.pdf"
] | 239,016,941 | 2003.12739 | bbcc241f7bf74e822b60c83c35aedeee539f6bb7 |
Modulating Bottom-Up and Top-Down Visual Processing via Language-Conditional Filters
Ilker Kesen
KUIS AI Center
Koç University
Computer Engineering Department
Koç University
Ozan Arkan Can
Amazon Search
Erkut Erdem
KUIS AI Center
Koç University
Computer Engineering Department
Hacettepe University
Aykut Erdem
KUIS AI Center
Koç University
Computer Engineering Department
Koç University
Deniz Yuret
KUIS AI Center
Koç University
Computer Engineering Department
Koç University
Modulating Bottom-Up and Top-Down Visual Processing via Language-Conditional Filters
How to best integrate linguistic and perceptual processing in multi-modal tasks that involve language and vision is an important open problem. In this work, we argue that the common practice of using language in a topdown manner, to direct visual attention over high-level visual features, may not be optimal. We hypothesize that the use of language to also condition the bottom-up processing from pixels to high-level features can provide benefits to the overall performance. To support our claim, we propose a U-Net-based model and perform experiments on two language-vision dense-prediction tasks: referring expression segmentation and language-guided image colorization. We compare results where either one or both of the top-down and bottom-up visual branches are conditioned on language. Our experiments reveal that using language to control the filters for bottom-up visual processing in addition to top-down attention leads to better results on both tasks and achieves competitive performance. Our linguistic analysis suggests that bottom-up conditioning improves segmentation of objects especially when input text refers to low-level visual concepts. Code is available at https://github.com/ilkerkesen/bvpr.
Introduction
As human beings, we can easily perceive our surroundings with our visual system and interact with each other using language. Since the work of Winograd [78], developing a system that understands human language in a situated environment has been one of the long-standing goals of artificial intelligence. Recent successes of deep learning studies in both language and vision domains have increased the interest in tasks that combine language and vision [2,3,34,43,72,80]. However, how to best integrate linguistic and perceptual processing is still an important open problem. Towards this end, we investigate whether language should be used for conditioning bottom-up visual processing as well as top-down attention.
In the human visual system, attention is driven by both top-down cognitive processes (e.g. focusing on a given shape) and bottom-up salient, behaviourally relevant stimuli (e.g. fast moving objects and contrasting colors) [13,14,74]. Studies on embodied language explore the link between linguistic and perceptual representations [22,66,76] and often assume that language has a high-level effect on perception and drives the top-down visual attention [5,18,37]. However, recent studies from cognitive science point out that language comprehension also affects low-level visual processing [6,54,61]. Motivated by this, we propose a model that can modulate either or both of bottom-up and top-down visual processing with language and compare different designs for language modulation. Current deep learning systems for language-vision tasks typically start with low-level image processing, then connect the language representation with high-level visual features to control the visual focus. To integrate both modalities, concatenation [56], element-wise multiplication [41,51], attention from language to vision [1,50,79,86,91] and transformers [17,20,73] are commonly used. These studies typically do not condition low-level visual features on language. Some methods [15,65] do the opposite by conditioning only the bottom-up visual processing on language.
To evaluate language-modulation on the bottom-up and top-down visual branches independently, we develop an architecture that clearly separates these two branches (based on U-Net [68]) and allows us to experiment with modulating one or both branches with language. The bottomup branch starts from low-level visual features and applies a sequence of contracting filters that result in successively higher level feature maps with lower spatial resolution. Following this, a top-down branch takes the final low resolution feature map and applies a sequence of expanding filters that eventually result in a map with the original image resolution. Information flows between branches through skip connections between contracting and expanding filters at the same level. Our proposed architecture is task-agnostic and it can be used for various vision-language tasks involving dense prediction. We evaluate our model with different language-modulation settings on two different tasks: referring expression segmentation and language-guided image colorization.
In the referring expression segmentation (RES) task, given an image and a natural language description, the aim is to obtain the segmentation mask that marks the object(s) described. We can contrast this with pure image based object detection [25,67] and semantic segmentation [9,49] tasks which are limited to predefined semantic classes. The language input may contain various visual attributes (e.g. shape), spatial information (e.g. in front of ), actions (e.g. running) and interactions/relations between different objects (e.g. arm of the chair that the cat is sitting on).
In the language-guided image colorization (LIC) task, given a grayscale image and a description, the aim is to predict pixel color values. The absence of color information in the input images makes this problem interesting to experiment with, because color words do not help in conditioning the bottom-up branch when the input image is grayscale.
We find that conditioning both branches leads to better results, achieving competitive performance on both tasks. Our experiments suggest that conditioning the bottom-up branch on language is important to ground low-level visual information. On RES, we find that modulating only the bottom-up branch performs significantly better than modulating only the top-down branch, especially when color-dependent language is present in the input. Our findings on LIC show that when color information is absent in the input images, the bottom-up baseline naturally fails to predict and manipulate the colors of target objects specified by the input language. That said, conditioning the bottom-up branch still improves the colorization quality by helping our model to accurately segment and colorize the target objects as a whole.
The rest of the paper is structured as follows: We summarize related work and compare it to our approach in Section 2. We describe our model in detail in Section 3. We share the details of our experiments in Section 4. Section 5 summarizes our contributions.
Related Work
Referring Expression Comprehension (REC). In this problem, the goal is to locate a bounding box for the object(s) described in the input language. The proposed solutions can be divided into two categories: two-stage and one-stage methods. Two-stage methods [12,30,48,58,63,77,81,89] rely on a pre-trained object detector [27,67] to generate object proposals in the first stage. In the second stage, they assign scores to the object proposals depending on how much they match the input language. One-stage methods [17,45,53,82,84,85] directly localize the referred objects in one step. Most of these methods condition only the top-down visual processing on language, while some fuse language with multi-level visual representations.

Referring Expression Segmentation (RES). In this task, the aim is to generate a segmentation mask for the object(s) referred to in the input language [31]. To accomplish this, multi-modal LSTMs [47,60], ConvLSTMs [7,47,70,87], word-level attention [7,35,69,87], cross-modal attention [32,33,38,52,53,88], and transformers [20,75] have been used. Each one of these methods modulates only the top-down branch with language. As one exception, EFN [21] conditions the bottom-up branch on language, but does not modulate the top-down branch with language.

Language-guided Image Colorization (LIC). In this task, the aim is to predict colors for all the pixels of a given input grayscale image based on input descriptive text. Specifically, [57] inserts extra Feature-wise Linear Modulation (FiLM) layers [65] into a pre-trained ResNet to predict color values in LAB color space. Multi-modal LSTMs [47,94] and generative adversarial networks [4,8,26] are also used in this context to colorize sketches. Similar to us, Tag2Pix [40] extends U-Net to perform colorization on line art data, but it modulates only the top-down visual processing with symbolic input using concatenation.

Language-conditional Parameters. Here we review methods that use input-text-dependent dynamic parameters to process visual features. To control a visual model with language, MODERN and FiLM [15,65] used conditional batch normalization layers with language-conditioned coefficients rather than customized filters. Numerous methods [11,23,24,44,60,62] generate language-conditional dynamic filters to convolve visual features. Some RES models [11,60] also incorporate language-conditional filters into their top-down visual processing. To map instructions to actions in virtual environments, LingUNet [62] extends U-Net by adding language-conditional filters to the top-down visual processing only. Each one of these methods conditions either the top-down or the bottom-up branch only.

Comparison. To support our main research question, our architecture clearly separates bottom-up and top-down visual processing. This allows us to experiment with modulating either one branch or both branches with language and evaluate their individual contributions. The majority of related work conditions only the top-down visual processing on language. Other U-Net-based methods [40,62] and most transformer models [17,20], which implement cross-modal attention between textual and visual representations in top-down visual processing, fall into this category. A few exceptions [15,21,65] do the opposite by conditioning only the bottom-up branch.
Some methods [33,53,85] fuse language with a multi-level visual representation, which leads to good results, but this kind of fusion does not allow the evaluation of language conditioning on top-down vs bottom-up visual processing. Our architecture allows language to control either or both of top-down and bottom-up branches. We show that (i) the bottom-up conditioning is important to ground language to low-level visual features, and (ii) conditioning both branches on language leads to best results.
The Model
Here, we describe our proposed model in detail. Figure 1 shows an overview of our proposed architecture. First, the model extracts a tensor of low-level features using a pre-trained convolutional neural network and encodes the given natural language expression into a vector representation using an LSTM [29]. Starting with the visual feature tensor, the model generates feature maps through a contracting and expanding path, where the output head takes the final map and performs dense prediction, similar to U-Net [68]. Our proposed architecture modulates both the contracting and expanding paths with language using convolutional filters generated from the given expression. It is important to emphasize that previous works have either a language-guided top-down or a language-conditional bottom-up visual processing branch. As will be discussed in the next section, our experiments show that modulating both of these paths improves the performance dramatically.
Low-level Image Features
Given an input image I, we extract visual features I_0 by using a pre-trained convolutional network. In the RES task, we use DeepLab-v3+ backbones [10], and in the LIC task, we use a ResNet-101 pre-trained on ImageNet [16,28]. On the task of referring expression segmentation, we also concatenate 8-D location features to this feature map following the previous work [47,88].
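The exact composition of the 8-D location features is not spelled out here; prior RES work typically tiles normalized spatial coordinates over the feature map. A minimal sketch of that idea, assuming PyTorch (the specific eight dimensions below are illustrative, not necessarily the ones used by this model):

import torch

def spatial_coordinate_features(batch, height, width):
    # Hypothetical 8-D location map: normalized x/y positions of each cell
    # (center, left/top edges, right/bottom edges) plus relative cell width/height.
    ys = torch.linspace(-1, 1, height).view(height, 1).expand(height, width)
    xs = torch.linspace(-1, 1, width).view(1, width).expand(height, width)
    cell_w = torch.full((height, width), 2.0 / width)
    cell_h = torch.full((height, width), 2.0 / height)
    feats = torch.stack([xs, ys,
                         xs - cell_w / 2, ys - cell_h / 2,
                         xs + cell_w / 2, ys + cell_h / 2,
                         cell_w, cell_h], dim=0)            # (8, H, W)
    return feats.unsqueeze(0).expand(batch, -1, -1, -1)      # concatenated to I_0 along channels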
Language Representation
Consider a language input S = [w_1, w_2, ..., w_n] where each word w_i is represented with a 300-dimensional GloVe embedding [64]. We map the language input S to hidden states using an LSTM as h_i = LSTM(h_{i-1}, w_i). We use the final hidden state of the LSTM as the language representation, r = h_n. Later on, we split this language representation into pieces to generate language-conditional filters.
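As a concrete reference for the paragraph above, here is a minimal PyTorch sketch of the expression encoder; the class name and dimensions are illustrative and not taken from the released code:

import torch
import torch.nn as nn

class ExpressionEncoder(nn.Module):
    """Embeds words with (GloVe-initialized) 300-d vectors, runs an LSTM,
    takes the final hidden state as r = h_n, and splits r into m chunks t_1..t_m."""

    def __init__(self, vocab_size, num_levels=4, embed_dim=300, hidden_dim=1024):
        super().__init__()
        assert hidden_dim % num_levels == 0
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.num_levels = num_levels

    def forward(self, token_ids):
        embeds = self.embedding(token_ids)               # (batch, seq_len, 300)
        _, (h_n, _) = self.lstm(embeds)                  # h_n: (1, batch, hidden_dim)
        r = h_n.squeeze(0)                               # language representation r
        return torch.chunk(r, self.num_levels, dim=-1)   # t_1, ..., t_m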
Multi-modal Encoder
After generating image and language representations, our model generates a multi-modal feature map representing the input image and the given natural language expression. Our multi-modal encoder module extends U-Net by conditioning both contracting and expanding branches on language using language-conditional filters.
In the bottom-up branch, our model applies m convolutional modules to the image representation I_0. Each module, CNN_i, takes the concatenation of the previously generated feature map (F_{i-1}) and its convolved version with language-conditional filters K^F_i, and produces an output feature map (F_i). Each CNN_i has a 2D convolution layer followed by batch normalization [36] and a ReLU activation function [55]. The convolution layers have 5 × 5 filters with stride = 2 and padding = 2, halving the spatial resolution, and they all have the same number of output channels.
Similar to [62], we split the textual representation r into m equal parts (t_i), and then use each t_i to generate a language-conditional filter for the ith bottom-up layer (K^F_i):

K^F_i = AFFINE^F_i(t_i)    (1)

Each AFFINE^F_i is an affine transformation followed by normalizing and reshaping to convert the output to convolutional filters. After obtaining the filters, we convolve them over the feature map obtained from the previous module (F_{i-1}) to relate expressions to image features:

G^F_i = CONVOLVE(K^F_i, F_{i-1})    (2)

Then, the concatenation of these text-modulated features (G^F_i) for the ith bottom-up layer and the previously generated features (F_{i-1}) is fed into module CNN_i for the next step.
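A minimal sketch of one language-conditional filtering step (Equations 1-2), assuming PyTorch; for brevity the generated kernels here are 1×1, while the actual model may use a different kernel size:

import torch
import torch.nn as nn
import torch.nn.functional as F

class LangConditionalConv(nn.Module):
    """AFFINE_i: maps a text chunk t_i to per-example convolution kernels K_i,
    which are then convolved over the incoming feature map."""

    def __init__(self, text_dim, in_channels, out_channels):
        super().__init__()
        self.in_channels, self.out_channels = in_channels, out_channels
        self.affine = nn.Linear(text_dim, out_channels * in_channels)

    def forward(self, feats, t_i):
        # feats: (batch, in_channels, H, W), t_i: (batch, text_dim)
        kernels = F.normalize(self.affine(t_i), dim=-1)
        kernels = kernels.view(-1, self.out_channels, self.in_channels, 1, 1)
        # one kernel set per example in the batch
        out = [F.conv2d(f.unsqueeze(0), k) for f, k in zip(feats, kernels)]
        return torch.cat(out, dim=0)    # G_i = CONVOLVE(K_i, feats)

# A bottom-up step then concatenates F_{i-1} with G_i^F and applies CNN_i
# (5x5 convolution, stride 2, batch norm, ReLU), halving the spatial resolution.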
In the top-down branch, we generate m feature maps starting from the final output of the contracting branch as:

G^H_i = CONVOLVE(K^H_i, F_i)    (3)
H_m = DCNN_m(G^H_m)    (4)
H_i = DCNN_i(G^H_i ⊕ H_{i-1})    (5)

Similar to the bottom-up branch, G^H_i is the feature map modulated with language-conditional filters defined as:

K^H_i = AFFINE^H_i(t_i)    (6)

where AFFINE^H_i is again an affine transformation followed by normalizing and reshaping for the ith layer of the top-down branch. Here, we convolve the filter (K^H_i) over the feature maps from the contracting branch (F_i). Each upsampling module DCNN_i gets the concatenation (⊕) of the text-related features and the feature map (H_i) generated from the previous module. Only the first module operates on just the convolved features. Each DCNN_i consists of a 2D deconvolution layer followed by batch normalization and a ReLU activation function. The final output H_1 becomes our joint feature map J representing the input image / language pair. The deconvolution layers have 5 × 5 filters with stride = 2 and padding = 2, doubling the spatial resolution, and they all have the same number of output channels.
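To make the expanding path concrete, here is a sketch of a single top-down step under the same assumptions; it reuses the LangConditionalConv module sketched above, and the layer names are ours rather than the released code's:

import torch
import torch.nn as nn

class TopDownStep(nn.Module):
    """Modulates the skip connection F_i with K_i^H, concatenates it with the
    previously produced top-down map, and upsamples with a 5x5 deconvolution."""

    def __init__(self, text_dim, channels=512, first=False):
        super().__init__()
        self.modulate = LangConditionalConv(text_dim, channels, channels)
        in_ch = channels if first else 2 * channels   # the first module sees only G_m^H
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(in_ch, channels, kernel_size=5, stride=2,
                               padding=2, output_padding=1),   # doubles H and W
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, skip_feats, t_i, prev=None):
        g = self.modulate(skip_feats, t_i)                     # G_i^H
        x = g if prev is None else torch.cat([g, prev], dim=1)
        return self.deconv(x)                                  # H_i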
Output Heads
As mentioned earlier, we develop our model as a generic solution which can be used to solve language-vision problems involving dense prediction. In this direction, we adapt our model to two different tasks by varying the output head: referring expression segmentation and language-guided image colorization.

Segmentation. In the referring expression segmentation problem, the goal is to generate a segmentation mask for a given input image and language pair. After generating the joint feature map J, we apply a stack of layers (D_1, D_2, ..., D_m) to map J to the exact image size. Similar to the upsampling modules, each D_k is a 2D deconvolution layer followed by batch normalization and the ReLU activation. Each D_k preserves the number of channels except for the last one, which maps the features to a single channel for the mask prediction. We omit the batch normalization and the ReLU activation for the final module; instead, we apply a sigmoid function to turn the final features into probabilities. Given these probabilities and the ground-truth mask, we train our network by using a binary cross entropy loss.

Colorization. In the language-guided image colorization task, the goal is to predict pixel color values for a given input image with the guidance of the language input. A convolutional layer with 3×3 filters generates class scores for each spatial location of J. We apply bilinear upsampling to these predicted scores to match the input image size. Given the predicted scores and ground-truth color classes, we train our model by using a weighted cross entropy loss. To create the compound LAB color classes and their weights, we follow exactly the same process as [57,92].
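The two output heads and their losses can be rendered roughly as follows; this is an illustrative PyTorch sketch consistent with the description above, with layer counts and names assumed rather than taken from the released code:

import torch.nn as nn
import torch.nn.functional as F

class SegmentationHead(nn.Module):
    """Stack D_1..D_m of deconvolutions that upsamples J to the input resolution;
    the last layer has a single output channel and no BN/ReLU."""

    def __init__(self, channels=512, num_layers=3):
        super().__init__()
        layers = []
        for _ in range(num_layers - 1):
            layers += [nn.ConvTranspose2d(channels, channels, 5, stride=2,
                                          padding=2, output_padding=1),
                       nn.BatchNorm2d(channels), nn.ReLU(inplace=True)]
        layers += [nn.ConvTranspose2d(channels, 1, 5, stride=2,
                                      padding=2, output_padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, joint):
        return self.net(joint)   # mask logits; the sigmoid is folded into the loss

def segmentation_loss(mask_logits, gt_mask):
    return F.binary_cross_entropy_with_logits(mask_logits, gt_mask)

class ColorizationHead(nn.Module):
    """3x3 convolution over J producing scores for the quantized ab color classes,
    bilinearly upsampled to the target resolution."""

    def __init__(self, channels=512, num_classes=313, out_size=56):
        super().__init__()
        self.classifier = nn.Conv2d(channels, num_classes, kernel_size=3, padding=1)
        self.out_size = out_size

    def forward(self, joint):
        scores = self.classifier(joint)
        return F.interpolate(scores, size=(self.out_size, self.out_size),
                             mode="bilinear", align_corners=False)

def colorization_loss(scores, gt_classes, class_weights):
    # weighted cross entropy implementing the class rebalancing of [57, 92]
    return F.cross_entropy(scores, gt_classes, weight=class_weights)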
Experimental Analysis
This section contains the details of our experiments on referring expression segmentation (Section 4.1) and language-guided image colorization (Section 4.2) tasks 1 .
Referring Expression Segmentation
Datasets. We evaluate our model on the ReferIt (130.5k expressions, 19.9k images) [39], UNC (142k expressions, 20k images), UNC+ (141.5k expressions, 20k images) [90] and Google-Ref (G-Ref) (104.5k expressions, 26.7k images) [58] datasets. Unlike UNC, location-specific expressions are excluded in UNC+ by enforcing annotators to describe objects by their appearance. The ReferIt, UNC, and UNC+ datasets are collected through a two-player game and have short expressions (avg. 4 words). G-Ref has longer and richer expressions, collected from Amazon Mechanical Turk instead of a two-player game. G-Ref does not contain a test split; Nagaraja et al. [63] extend it with separate splits for validation and test, which are denoted as val (U) and test (U).

Evaluation Metrics. Following previous work, we use intersection-over-union (IoU) and p@X as evaluation metrics. Given the predicted segmentation mask and the ground truth, the IoU metric is the ratio between the intersection and the union of the two. There are two different ways to calculate IoU: the overall IoU calculates the total intersection over total union score throughout the entire dataset, and the mean IoU calculates the mean of the IoU scores of each individual example. For a fair comparison, we use both IoU metrics. The second metric, p@X, calculates the percentage of the examples that have an IoU score higher than the threshold X.
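For reference, the two IoU variants and p@X can be computed as in the small NumPy sketch below; this is an illustration of the metric definitions, not the released evaluation code:

import numpy as np

def segmentation_metrics(pred_masks, gt_masks, thresholds=(0.5, 0.6, 0.7, 0.8, 0.9)):
    """pred_masks, gt_masks: iterables of binary numpy arrays, one pair per example."""
    total_inter, total_union, ious = 0, 0, []
    for pred, gt in zip(pred_masks, gt_masks):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        total_inter += inter
        total_union += union
        ious.append(inter / union if union > 0 else 1.0)
    ious = np.asarray(ious)
    overall_iou = total_inter / total_union            # dataset-level ratio
    mean_iou = ious.mean()                             # average of per-example IoUs
    prec_at_x = {x: float((ious > x).mean()) for x in thresholds}
    return overall_iou, mean_iou, prec_at_x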
Quantitative Results. Table 1 shows the comparison of our model with previous methods. Bold faces highlight the highest achieved scores. We evaluate our model by using both IoU metrics for a fair evaluation. Table 2 presents the comparison of our model with the state-of-the-art in terms of p@X. The difference between our model and the stateof-the-art increases as the threshold increases. This indicates that our model is better at segmenting the referred objects including smaller ones. Qualitative Results. We visualize some of the segmentation predictions of our model to gain better insights about the trained model. Figure 2 shows some of the cases that our model segments correctly. These examples demon- strate that our model can learn a variety of language and visual reasoning patterns. For example, the first two examples of the fourth column show that our model learns to relate superlative adjectives (e.g., taller) with visual comparison. Examples including spatial prepositions (e.g., near to) demonstrate the spatial reasoning ability of the model. Our model can also learn a domain-specific nomenclature (e.g. catcher) that is present in the dataset. Lastly, we can see that the model can also detect certain actions (e.g., sitting). Figure 3 shows failure cases from our model on the UNC test split. Examples in (a) show that our model tends to fail in case of typos. Our model segments the correct objects for these two examples when the typos are fixed (e.g. pink instead of pick). Examples in (b) show that some of the expressions are ambiguous, where there are multiple objects that could be referred to by the expression. In this case, the model seems to segment the most salient object. Some of the annotations contain incorrect or incomplete groundtruth mask (c). Finally, some of the examples (d) are hard to segment completely due to the lack of light or objects that mask the referred objects. Ablation Study. We implemented 3 different models, the top-down baseline, the bottom-up baseline and our model, baseline also fails to detect object categories in some cases, and segments additional unwanted objects with similar category or appearance (e.g. banana vs. orange). Overall, our model which conditions both visual branches on language gives the best results.
Language-oriented Analysis. To analyze the effect of language on model performance, we divided UNC test splits into subsets depending on the different types of words (e.g. colors) and phrases (e.g. noun phrases with multiple adjectives) included in input expressions.
Language-guided Image Colorization
Datasets. Following the prior work [57], we use a modified version of the COCO dataset [46] where the descriptions that do not contain color words are excluded. In this modified version, the training split has 66165 and the validation split has 32900 image / description pairs, and all images have a resolution of 224 × 224 pixels.

Evaluation Metrics. Following the previous work, we use pixel-level top-1 (acc@1) and top-5 (acc@5) accuracies in LAB color space, and additionally PSNR and LPIPS [93] in RGB for evaluation. A lower score is better for LPIPS, and a higher score is better for the rest.

Quantitative Results. We present the quantitative performance of our model in Table 5 and compare it with different design choices and previous work. FiLMed ResNet [57] uses FiLM [65] to perform language-conditional colorization. FiLMed ResNet (ours) denotes the results reproduced with the implementation provided by the authors. To show the effect of language modulation on different branches, we again train 3 different models: the top-down baseline, the bottom-up baseline and our model. We also re-train our model without class rebalancing and denote it as Our Model w/o balancing. Contrary to the segmentation experiments, the top-down baseline performs better than the bottom-up baseline on the colorization task in all measures. Since color information is absent in the input images, the bottom-up branch cannot encode low-level image features by modulating color-dependent language.
When we disable class rebalancing in the training phase, we observe a large improvement in acc@1 and acc@5 due to the imbalanced color distribution, where the model predicts the frequent colors that exist in the backgrounds.

Qualitative Results. We visualize some of the colorization outputs of the trained models in Figure 5 to analyze them in more detail. FiLMed ResNet (ours) can understand all colorization hints, and it can manipulate object colors with some incorrectly predicted areas. The top-down baseline also performs similarly to FiLMed ResNet (ours), where both models condition only the top-down branch on language.
In this task, since the models are blind to color, the bottom-up baseline loses its effectiveness to some degree and starts to predict the most probable colors. This can be seen in the second and the last example, where the bottom-up baseline predicts red for the stop sign and blue for the sky. Although the bottom-up baseline performs worse in this task, modulating the bottom-up branch with language still helps our final model to localize and recognize the objects present in the scene. This can be seen in the last two examples, where the top-down baseline mixes colors up in some object parts (e.g. the red parts in the motorcycle). Our model w/o balancing tends to predict more grayish colors (e.g. dog, sky). Figure 6 highlights some of the failure cases we observed throughout the dataset. In the first two examples, our model is able to localize and recognize the target objects, but it fails to colorize them successfully, colorizing not only the targeted parts but also other parts. Models generally fail to colorize small objects since the data is imbalanced and frequently contains vast backgrounds and big objects. The last two examples show that models fail to colorize reflective or transparent objects like glasses or water; these were also difficult in the language-based segmentation task (see Figure 3 (d)).
Conclusion
In this work, we suggested that conditioning both top-down and bottom-up visual processing on language is beneficial for grounding language to vision. To support this claim, we proposed a generic architecture with explicit bottom-up and top-down visual branches for vision-language problems involving dense prediction. Our experiments on two different tasks demonstrated that conditioning both visual branches on language gives the best results. Our experiments on the referring expression segmentation task revealed that conditioning the bottom-up branch on language plays a vital role in processing color-dependent input language. The language-guided image colorization experiments led to similar conclusions: the bottom-up baseline failed to colorize the target objects since the color information is absent in the input images.
Limitations. We share common failure cases in Figure 3 and Figure 6. The performance of our model on both tasks decreases in the presence of transparent and/or reflective objects. Our model also fails to colorize small objects, mostly due to having an imbalanced color distribution. Finally, our current model is limited to integrated vision and language tasks involving dense prediction, and we did not perform experiments on other vision and language problems.
A. Supplementary Material
This supplementary material contains the implementation details (Section A.1) and the complete ablation studies (Section A.2) of our work.
A.1. Implementation Details
Referring Expression Segmentation. Following previous work [7,47,60,88], we limit the maximum length of expressions to 20. We set the input image size to 512 × 512 for training and 640 × 640 for inference. We use the first four layers of DeepLabv3+ with a ResNet-101 backbone, pre-trained on the COCO dataset by excluding images that appear in the validation and test sets of the UNC, UNC+ and G-Ref datasets, similar to previous work [20,53,89]. Thus, our low-level visual feature map I has the size of 64 × 64 × 1032 in training and 80 × 80 × 1032 in inference, both including 8-D location features. In all convolutional layers, we set the filter size, stride, and number of filters as (5, 5), 2, and 512, respectively. The depth is 4 in the multi-modal encoder part of the network. For the bottom-up-only baseline, we used grouped convolution in the bottom-up branch to prevent linguistic information leakage to the top-down visual branch. We apply dropout regularization [71] to the language representation r with 0.2 probability. We use the Adam optimizer [42] with default parameters. We freeze the DeepLab-v3+ ResNet-101 weights. There are 32 examples in each minibatch. We train our model for 20 epochs on a Tesla V100 GPU with mixed precision, and each epoch takes at most two hours depending on the dataset.
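Collected in one place, the hyperparameters above amount to a configuration roughly like the following; this is an illustrative summary, not a file from the repository:

res_config = {
    "max_expression_length": 20,
    "image_size": {"train": 512, "inference": 640},
    "visual_backbone": "DeepLabv3+ / ResNet-101 (frozen, first four layers)",
    "location_features": 8,
    "encoder_depth": 4,
    "conv": {"kernel": (5, 5), "stride": 2, "filters": 512},
    "language_dropout": 0.2,
    "optimizer": "Adam (default parameters)",
    "batch_size": 32,
    "epochs": 20,
    "mixed_precision": True,
}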
Language-guided Image Colorization. Unless otherwise specified, we follow the same design choices applied for the referring expression segmentation task. We set the number of language-conditional filters to 512, replace the LSTM encoder with a BiLSTM encoder, and use the first two layers of a ResNet-101 trained on ImageNet as the image encoder to have a similar model capacity and make a fair comparison with the previous work [57]. We set the input image width and height to 224 in both training and validation. Thus, the low-level visual feature map has the size of 28 × 28 × 512, and we do not use location features. Additionally, in our experimental analysis, we consider the same design choices as previous work [57,92]. Specifically, we use the LAB color space, and our model predicts ab color values for all the pixels of the input image. We perform the class re-balancing procedure to obtain class weights for the weighted cross entropy objective. We use the 313 ab classes present in the ImageNet dataset, and encode ab color values to classes by assigning them to their nearest neighbors. We use input images with a size of 224 × 224 and output target images with a size of 56 × 56, which is the same as in the previous work.
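A small sketch of the color-class encoding and class-rebalancing steps described above; the rebalancing follows the scheme of [92], but the exact mixing coefficient used here is an assumption:

import numpy as np

def encode_ab(ab_pixels, ab_bin_centers):
    """Assign every ab pixel to the nearest of the 313 quantized ab bins.
    ab_pixels: (N, 2), ab_bin_centers: (313, 2)."""
    d2 = ((ab_pixels[:, None, :] - ab_bin_centers[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def class_rebalancing_weights(class_counts, lam=0.5):
    """Invert a smoothed empirical class distribution, in the spirit of [92]."""
    p = class_counts / class_counts.sum()
    w = 1.0 / ((1.0 - lam) * p + lam / len(class_counts))
    return w / (w * p).sum()        # normalize so the expected weight is 1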
A.2. Ablation Studies
We performed additional ablation experiments on the referring expression segmentation task in order to understand the contributions of the remaining components of our model. We share the results in Table A1. Each row stands for a different architectural setup. Horizontal lines separate the different ablation studies we performed, and the first column denotes the ablation study group. The columns on the left determine these architectural setups: a ✓ in the Top-down column indicates that the corresponding setup modulates the top-down visual branch with language, and similarly a ✓ in the Bottom-up column indicates that the corresponding setup modulates the bottom-up visual branch with language. Depth indicates how many layers the multi-modal encoder has. Layer indicates the type of language-conditional layer used. Visual and Textual indicate which visual encoder and textual encoder are used for the corresponding setup, respectively. The remaining columns stand for results.

Network Depth (2). We performed experiments by varying the depth of the multi-modal encoder. We originally started with a depth of 4. Increasing the depth slightly increased the scores for some metrics, but more importantly, decreasing the depth caused the model to perform worse than the bottom-up baseline. This happens because decreasing the depth shrinks the receptive field of the network, and the model becomes less capable of drawing conclusions for scenes that need to be seen as a whole in order to be fully understood.

FiLM vs. Language-conditional Filters (3). Another method for modulating language is using conditional batch normalization [15] or its successor, FiLM layers. When we replaced the language-conditional filters with FiLM layers in our model, we observed a ≈2.5 IoU decrease. This is natural, since the FiLM layer can be thought of as a grouped convolution with language-conditional filters, where the number of groups is equal to the number of channels / filters.

LSTM vs. BERT as language encoder (4). We also experimented with BERT [19] as the input language encoder in addition to the LSTM network. We update the BERT weights simultaneously with the rest of our model, using a smaller learning rate for BERT (0.00005). We use the CLS output embedding as our language representation r, then split this embedding into pieces to create language-conditional filters. Our model achieved similar quantitative results using BERT as the language encoder. This points out that a language encoder pre-trained solely on textual data might be suboptimal for integrating vision and language.

The impact of the visual backbone (5). We first started training our model with a DeepLabv3+ ResNet-50 backbone pre-trained on the Pascal VOC dataset. Then, we pre-trained a DeepLabv3+ with a ResNet-101 backbone on the COCO dataset by excluding the images that appear in the validation and test splits of all benchmarks, similar to the previous work [20,53,89]. We only used the 20 object categories present in Pascal VOC. Using the more sophisticated visual backbone resulted in a ≈1 IoU improvement.
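The point that FiLM is the fully grouped special case of language-conditional filtering can be seen from a minimal FiLM layer; this is an illustrative PyTorch sketch, not the exact layer used in the ablation:

import torch.nn as nn

class FiLMLayer(nn.Module):
    """Per-channel scale and shift predicted from the text chunk t_i,
    i.e. a language-conditional 1x1 convolution with one group per channel,
    plus an additive shift."""

    def __init__(self, text_dim, channels):
        super().__init__()
        self.to_gamma_beta = nn.Linear(text_dim, 2 * channels)

    def forward(self, feats, t_i):
        gamma, beta = self.to_gamma_beta(t_i).chunk(2, dim=-1)
        return feats * gamma[..., None, None] + beta[..., None, None]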
Figure 1. Overview of the proposed model on the task of referring expression segmentation.
Figure 2. Some predictions of our model on the UNC validation set.
Figure 3. Some incorrect predictions of our model on the UNC validation set. Each group (a-d) shows one pattern we observed within the predictions. The first row shows the ground truth mask, the second one is the prediction of our model.

These three models were implemented to show the effect of modulating language in the expanding and contracting visual branches: the bottom-up baseline modulates language in the bottom-up branch only, the top-down baseline modulates language in the top-down branch only, and our model conditions language on both branches. Table 3 shows the results. The bottom-up baseline outperforms the top-down one with a ≈2.7 IoU improvement. Modulating language in both branches yields the best results, improving over the bottom-up baseline by ≈2.85 IoU. Figure 4 visualizes the predictions of the different models on the same examples. The bottom-up baseline performs better when the description has color information, as we show in the first three examples. The top-down-only

Figure 5. Color manipulation performance of different models on COCO validation examples. Each column focuses on a different color conversion.
Figure 6. Failure cases of our model on the language-guided image colorization task.
Table 1. Comparison with the previous work by using the overall IoU metric. † denotes that the corresponding method uses the mean IoU metric. "-" indicates that the model has not been evaluated on that dataset.

Method | UNC val | UNC testA | UNC testB | UNC+ val | UNC+ testA | UNC+ testB | G-Ref val (G) | G-Ref val (U) | G-Ref test (U) | ReferIt test
CMSA [88] | 58.32 | 60.61 | 55.09 | 43.76 | 47.60 | 37.89 | 39.98 | - | - | 63.80
STEP [7] | 60.04 | 63.46 | 57.97 | 48.19 | 52.33 | 40.41 | 46.40 | - | - | 64.13
BRINet [32] | 61.35 | 63.37 | 59.57 | 48.57 | 52.87 | 42.13 | 48.04 | - | - | 63.46
CMPC [33] | 61.36 | 64.53 | 59.64 | 49.56 | 53.44 | 43.23 | 49.05 | - | - | 65.53
LSCM [35] | 61.47 | 64.99 | 59.55 | 49.34 | 53.12 | 43.50 | 48.05 | - | - | 66.57
EFN [21] | 62.76 | 65.69 | 59.67 | 51.50 | 55.24 | 43.01 | 51.93 | - | - | 66.70
BUSNet [83] | 63.27 | 66.41 | 61.39 | 51.76 | 56.87 | 44.13 | 50.56 | - | - | -
Our model | 64.63 | 67.76 | 61.03 | 51.76 | 56.77 | 43.80 | 50.88 | 52.12 | 52.94 | 66.01
MCN† [53] | 62.44 | 64.20 | 59.71 | 50.62 | 54.99 | 44.69 | - | 49.22 | 49.40 | -
CGAN† [52] | 64.86 | 68.04 | 62.07 | 51.03 | 55.51 | 44.06 | 46.54 | 51.01 | 51.69 | -
LTS† [38] | 65.43 | 67.76 | 63.08 | 54.21 | 58.32 | 48.02 | - | 54.40 | 54.25 | -
VLT† [20] | 65.65 | 68.29 | 62.73 | 55.50 | 59.20 | 49.36 | 49.76 | 52.99 | 56.65 | -
Our model† | 67.01 | 69.63 | 63.45 | 55.34 | 60.72 | 47.11 | 53.51 | 55.09 | 55.31 | 57.09
Table 2. Comparison with the previous works on the val set of the UNC dataset with p@X metrics.

Method | p@0.5 | p@0.6 | p@0.7 | p@0.8 | p@0.9
CMSA | 66.44 | 59.70 | 50.77 | 35.52 | 10.96
STEP | 70.15 | 63.37 | 53.15 | 36.53 | 10.45
BRINet | 71.83 | 65.05 | 55.64 | 39.36 | 11.21
LSCM | 70.84 | 63.82 | 53.67 | 38.69 | 12.06
EFN | 73.95 | 69.58 | 62.59 | 49.61 | 20.63
MCN | 76.60 | 70.33 | 58.39 | 33.68 | 5.26
LTS | 75.16 | 69.51 | 60.74 | 45.17 | 14.41
Our Model | 76.67 | 71.77 | 64.76 | 51.69 | 22.73
Table 3. Ablation study on the validation set of the UNC dataset with the overall IoU metric.
Table 4. IoU performance of different setups depending on the input expression category for the UNC test splits.

Method | +C | -C | +IN | +IN,-C | +JJ* | +JJ*,-C
Top-down Baseline | 52.59 | 59.59 | 54.84 | 55.66 | 57.66 | 61.16
Bottom-up Baseline | 60.40 | 60.02 | 56.05 | 55.49 | 61.18 | 61.86
Our Model | 62.98 | 63.57 | 59.60 | 59.27 | 64.45 | 65.56
Figure 4. Comparison of different architectural setups on the UNC test samples.
Table 4 shows the results of the different models on these subsets. The first column stands for the models, and the rest stand for different input expression categories; we exclude the categories which do not contribute to our analysis. We use DeepLab-v3+ ResNet-50 as the visual backbone in each method. The notation of the categories is similar to Part of Speech (POS) tags [59], where we denote prepositions with IN, examples with adjectives with JJ*, and colors with C. Preceding plus and minus signs stand for inclusion and exclusion. For instance, the +IN,-C column stands for the subset where each expression contains at least one preposition without any color words. Color words (e.g. red, darker) have the most impact on performance in comparison to other types of words and phrases. Our model and the bottom-up baseline perform significantly better than the top-down baseline on the subset that includes colors. In the opposite case, where input expressions with colors are excluded, the top-down baseline performs similarly to the bottom-up baseline, and our final model outperforms both single-branch models. Since colors can be seen as low-level sensory information, low performance in the absence of the bottom-up branch can be expected. This demonstrates the importance of conditioning the bottom-up visual branch on language to capture low-level visual concepts.
Table 5. Colorization results on the modified COCO validation split.

Method | acc@1 | acc@5 | PSNR | LPIPS
FiLMed ResNet | 23.70 | 60.50 | - | -
FiLMed ResNet (ours) | 20.22 | 49.57 | 20.89 | 0.1280
Top-down Baseline | 22.83 | 51.85 | 21.29 | 0.1226
Bottom-up Baseline | 21.85 | 51.34 | 20.98 | 0.1448
Our Model | 23.38 | 54.27 | 21.42 | 0.1262
Our Model w/o balancing | 33.74 | 67.83 | 22.75 | 0.1250
Table A1. The complete ablation studies on the UNC validation set with p@X and overall IoU metrics.

# | Top-down | Bottom-up | Depth | Layer | Visual | Textual | p@0.5 | p@0.6 | p@0.7 | p@0.8 | p@0.9 | IoU
1 | ✓ |   | 4 | Conv | ResNet-50 | LSTM | 66.40 | 58.59 | 49.35 | 36.01 | 13.42 | 58.06
1 |   | ✓ | 4 | Conv | ResNet-50 | LSTM | 71.40 | 65.14 | 57.36 | 45.11 | 19.04 | 60.74
1 | ✓ | ✓ | 4 | Conv | ResNet-50 | LSTM | 75.12 | 70.08 | 63.32 | 50.50 | 22.29 | 63.59
2 | ✓ | ✓ | 3 | Conv | ResNet-50 | LSTM | 69.96 | 63.13 | 55.04 | 41.33 | 15.98 | 60.23
2 | ✓ | ✓ | 4 | Conv | ResNet-50 | LSTM | 75.12 | 70.08 | 63.32 | 50.50 | 22.29 | 63.59
2 | ✓ | ✓ | 5 | Conv | ResNet-50 | LSTM | 75.56 | 70.59 | 63.82 | 51.68 | 22.84 | 63.52
3 | ✓ | ✓ | 4 | Conv | ResNet-50 | LSTM | 75.12 | 70.08 | 63.32 | 50.50 | 22.29 | 63.59
3 | ✓ | ✓ | 4 | FiLM | ResNet-50 | LSTM | 71.18 | 65.14 | 57.32 | 44.66 | 18.75 | 61.12
4 | ✓ | ✓ | 4 | Conv | ResNet-50 | LSTM | 75.12 | 70.08 | 63.32 | 50.50 | 22.29 | 63.59
4 | ✓ | ✓ | 4 | Conv | ResNet-50 | BERT | 75.60 | 70.39 | 63.05 | 49.93 | 21.16 | 63.57
5 | ✓ | ✓ | 4 | Conv | ResNet-50 | LSTM | 75.12 | 70.08 | 63.32 | 50.50 | 22.29 | 63.59
5 | ✓ | ✓ | 4 | Conv | ResNet-101 | LSTM | 76.67 | 71.77 | 64.76 | 51.69 | 22.73 | 64.63
We provide the implementation details and present the complete ablation experiments in the supplementary material.
Acknowledgements. This work was supported in part by an AI Fellowship to I. Kesen provided by the KUIS AI Center, GEBIP 2018 Award of the Turkish Academy of Sciences to E. Erdem, and BAGEP 2021 Award of the Science Academy to A. Erdem.
Bottom-up and top-down attention for image captioning and visual question answering. Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, Lei Zhang, CVPR. Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR, pages 6077-6086, 2018. 1
and Anton van den Hengel. Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments. Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, CVPR. Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments. In CVPR, June 2018. 1
VQA: Visual Question Answering. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, Devi Parikh, ICCV. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual Question Answering. In ICCV, Dec. 2015. 1
Coloring with words: Guiding image colorization through textbased palette generation. Hyojin Bahng, Seungjoo Yoo, Wonwoong Cho, David Keetae Park, Ziming Wu, Xiaojuan Ma, Jaegul Choo, ECCV. Hyojin Bahng, Seungjoo Yoo, Wonwoong Cho, David Kee- tae Park, Ziming Wu, Xiaojuan Ma, and Jaegul Choo. Col- oring with words: Guiding image colorization through text- based palette generation. In ECCV, pages 431-447, 2018. 2
How children learn the meanings of words. Paul Bloom, MIT pressPaul Bloom. How children learn the meanings of words. MIT press, 2002. 1
Words jump-start vision: A label advantage in object recognition. Bastien Boutonnet, Gary Lupyan, Journal of Neuroscience. 3525Bastien Boutonnet and Gary Lupyan. Words jump-start vi- sion: A label advantage in object recognition. Journal of Neuroscience, 35(25):9329-9335, 2015. 1
See-through-text grouping for referring image segmentation. Ding-Jie Chen, Songhao Jia, Yi-Chen Lo, Hwann-Tzong Chen, Tyng-Luh Liu, ICCV. 512Ding-Jie Chen, Songhao Jia, Yi-Chen Lo, Hwann-Tzong Chen, and Tyng-Luh Liu. See-through-text grouping for referring image segmentation. In ICCV, pages 7454-7463, 2019. 2, 5, 12
Language-based image editing with recurrent attentive models. Jianbo Chen, Yelong Shen, Jianfeng Gao, Jingjing Liu, Xiaodong Liu, CVPR. Jianbo Chen, Yelong Shen, Jianfeng Gao, Jingjing Liu, and Xiaodong Liu. Language-based image editing with recurrent attentive models. In CVPR, pages 8721-8729, 2018. 2
Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, Alan L Yuille, TPAMI. 404Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolu- tion, and fully connected crfs. TPAMI, 40(4):834-848, 2017. 2
Encoder-decoder with atrous separable convolution for semantic image segmentation. Yukun Liang-Chieh Chen, George Zhu, Florian Papandreou, Hartwig Schroff, Adam, ECCV. Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, pages 801-818, 2018. 3
Referring Expression Object Segmentation with Caption-Aware Consistency. Y.-W Chen, Y.-H Tsai, T Wang, Y.-Y. Lin, M.-H Yang, BMVC. Y.-W. Chen, Y.-H. Tsai, T. Wang, Y.-Y. Lin, and M.-H. Yang. Referring Expression Object Segmentation with Caption- Aware Consistency. In BMVC, 2019. 2
Using syntax to ground referring expressions in natural images. Volkan Cirik, Taylor Berg-Kirkpatrick, Louis-Philippe Morency, AAAI. Volkan Cirik, Taylor Berg-Kirkpatrick, and Louis-Philippe Morency. Using syntax to ground referring expressions in natural images. In AAAI, 2018. 2
Visual attention: bottom-up versus top-down. E Charles, Howard E Connor, Steven Egeth, Yantis, Current biology. 1419Charles E Connor, Howard E Egeth, and Steven Yantis. Vi- sual attention: bottom-up versus top-down. Current biology, 14(19):R850-R852, 2004. 1
Control of goaldirected and stimulus-driven attention in the brain. Maurizio Corbetta, Gordon L Shulman, Nature reviews neuroscience. 33Maurizio Corbetta and Gordon L Shulman. Control of goal- directed and stimulus-driven attention in the brain. Nature reviews neuroscience, 3(3):201-215, 2002. 1
Modulating early visual processing by language. Florian Harm De Vries, Jérémie Strub, Hugo Mary, Olivier Larochelle, Aaron C Pietquin, Courville, NeurIPS. 112Harm De Vries, Florian Strub, Jérémie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron C Courville. Mod- ulating early visual processing by language. In NeurIPS, pages 6594-6604, 2017. 1, 2, 12
Imagenet: A large-scale hierarchical image database. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, Li Fei-Fei, CVPR. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248-255, 2009. 3
Transvg: End-to-end visual grounding with transformers. Jiajun Deng, Zhengyuan Yang, Tianlang Chen, Wengang Zhou, Houqiang Li, ICCV. 1Jiajun Deng, Zhengyuan Yang, Tianlang Chen, Wengang Zhou, and Houqiang Li. Transvg: End-to-end visual ground- ing with transformers. In ICCV, pages 1769-1779, October 2021. 1, 2
More than meets the eye: The role of language in binding and maintaining feature conjunctions. Banchiamlack Dessalegn, Barbara Landau, Psychological science. 192Banchiamlack Dessalegn and Barbara Landau. More than meets the eye: The role of language in binding and maintain- ing feature conjunctions. Psychological science, 19(2):189- 195, 2008. 1
BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, ACL. 12Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional trans- formers for language understanding. In ACL, pages 4171- 4186, June 2019. 12
Vision-language transformer and query generation for referring segmentation. Henghui Ding, Chang Liu, Suchen Wang, Xudong Jiang, ICCV. 1213Henghui Ding, Chang Liu, Suchen Wang, and Xudong Jiang. Vision-language transformer and query generation for refer- ring segmentation. In ICCV, 2021. 1, 2, 5, 12, 13
Encoder fusion network with co-attention embedding for referring image segmentation. Guang Feng, Zhiwei Hu, Lihe Zhang, Huchuan Lu, CVPR. 25Guang Feng, Zhiwei Hu, Lihe Zhang, and Huchuan Lu. En- coder fusion network with co-attention embedding for refer- ring image segmentation. In CVPR, pages 15506-15515, 2021. 2, 5
The brain's concepts: The role of the sensory-motor system in conceptual knowledge. Vittorio Gallese, George Lakoff, Cognitive neuropsychology. 223-4Vittorio Gallese and George Lakoff. The brain's concepts: The role of the sensory-motor system in conceptual knowl- edge. Cognitive neuropsychology, 22(3-4):455-479, 2005. 1
Question-guided hybrid convolution for visual question answering. Peng Gao, Hongsheng Li, Shuang Li, Pan Lu, Yikang Li, C H Steven, Xiaogang Hoi, Wang, ECCV. Peng Gao, Hongsheng Li, Shuang Li, Pan Lu, Yikang Li, Steven CH Hoi, and Xiaogang Wang. Question-guided hy- brid convolution for visual question answering. In ECCV, pages 469-485, 2018. 2
Actor and action video segmentation from a sentence. Kirill Gavrilyuk, Amir Ghodrati, Zhenyang Li, G M Cees, Snoek, CVPR. Kirill Gavrilyuk, Amir Ghodrati, Zhenyang Li, and Cees GM Snoek. Actor and action video segmentation from a sentence. In CVPR, pages 5958-5966, 2018. 2
. Ross Girshick, Fast R-Cnn, Iccv, Ross Girshick. Fast R-CNN. ICCV, Dec. 2015. 2
| [
"https://github.com/ilkerkesen/bvpr."
] |
[
"A Bayesian Model of Multilingual Unsupervised Semantic Role Induction",
"A Bayesian Model of Multilingual Unsupervised Semantic Role Induction"
] | [
"Nikhil Garg nikgarg@gmail.com \nUniversity of Geneva\nSwitzerland\n",
"James Henderson james.henderson@unige.ch \nUniversity of Geneva\nSwitzerland\n"
] | [
"University of Geneva\nSwitzerland",
"University of Geneva\nSwitzerland"
] | [] | We propose a Bayesian model of unsupervised semantic role induction in multiple languages, and use it to explore the usefulness of parallel corpora for this task. Our joint Bayesian model consists of individual models for each language plus additional latent variables that capture alignments between roles across languages. Because it is a generative Bayesian model, we can do evaluations in a variety of scenarios just by varying the inference procedure, without changing the model, thereby comparing the scenarios directly. We compare using only monolingual data, using a parallel corpus, using a parallel corpus with annotations in the other language, and using small amounts of annotation in the target language. We find that the biggest impact of adding a parallel corpus to training is actually the increase in mono-lingual data, with the alignments to another language resulting in small improvements, even with labeled data for the other language. | null | [
"https://arxiv.org/pdf/1603.01514v1.pdf"
] | 1,253,684 | 1603.01514 | bb5accd41a3ac4505c40f731c20c507ed2a849c0 |
A Bayesian Model of Multilingual Unsupervised Semantic Role Induction
Nikhil Garg nikgarg@gmail.com
University of Geneva
Switzerland
James Henderson james.henderson@unige.ch
University of Geneva
Switzerland
We propose a Bayesian model of unsupervised semantic role induction in multiple languages, and use it to explore the usefulness of parallel corpora for this task. Our joint Bayesian model consists of individual models for each language plus additional latent variables that capture alignments between roles across languages. Because it is a generative Bayesian model, we can do evaluations in a variety of scenarios just by varying the inference procedure, without changing the model, thereby comparing the scenarios directly. We compare using only monolingual data, using a parallel corpus, using a parallel corpus with annotations in the other language, and using small amounts of annotation in the target language. We find that the biggest impact of adding a parallel corpus to training is actually the increase in mono-lingual data, with the alignments to another language resulting in small improvements, even with labeled data for the other language.
1 Introduction
Semantic Role Labeling (SRL) has emerged as an important task in Natural Language Processing (NLP) due to its applicability in information extraction, question answering, and other NLP tasks. SRL is the problem of finding predicate-argument structure in a sentence, as illustrated below: [A0 Mike] has [PRED written] [A1 a book] (S1)
Here, the predicate WRITE has two arguments: 'Mike' as A0 or the writer, and 'a book' as A1 or the thing written. The labels A0 and A1 correspond to the PropBank annotations (Palmer et al., 2005).
As the need for SRL arises in different domains and languages, the existing manually annotated corpora become insufficient to build supervised systems. This has motivated work on unsupervised SRL (Lang and Lapata, 2011b; Titov and Klementiev, 2012a; Garg and Henderson, 2012). Previous work has indicated that unsupervised systems could benefit from the word alignment information in parallel text in two or more languages (Titov and Klementiev, 2012b). For example, consider the German translation of sentence S1:
[A0 Mike] hat [A1 ein Buch] [PRED geschrieben] (S2)
If sentences S1 and S2 have the word alignments: Mike-Mike, written-geschrieben, and book-Buch, the system might be able to predict A1 for Buch, even if there is insufficient information in the monolingual German data to learn this assignment. Thus, in languages where the resources are sparse or not good enough, or the distributions are not informative, SRL systems could be made more accurate by using parallel data with resource rich or more amenable languages.
In this paper, we propose a joint Bayesian model for unsupervised semantic role induction in multiple languages. The model consists of individual Bayesian models for each language (Garg and Henderson, 2012), and crosslingual latent variables to incorporate soft role agreement between aligned constituents. This latent variable approach has been demonstrated to increase the performance of a multilingual unsupervised part-of-speech tagging model based on HMMs (Naseem et al., 2009). We investigate the application of this approach to unsupervised SRL, presenting the performance improvements obtained in different settings involving labeled and unlabeled data, and analyzing the annotation effort required to obtain similar gains using labeled data.
We begin by briefly describing the unsupervised SRL pipeline and the monolingual semantic role induction model we use, and then describe our multilingual model.
2 Unsupervised SRL Pipeline
As established in previous work (Gildea and Jurafsky, 2002;Pradhan et al., 2005), we use a standard unsupervised SRL setup, consisting of the following steps:
1. Syntactic Parsing Off-the-shelf parsers can be used to syntactically parse a given sentence. We use a dependency parse because of its simplicity and easier comparison with the previous work in unsupervised SRL.
2. Predicate Identification We select all the non-auxiliary verbs in a sentence as predicates.
3. Argument Identification For a given predicate, this step classifies each constituent of the parse tree as a semantic argument or a nonargument. Heuristics based on syntactic features such as the dependency relation of a constituent to its head, path from the constituent to the predicate, etc. have been used in unsupervised SRL.
4. Argument Classification Without access to semantic role labels, unsupervised SRL systems cast the problem as a clustering problem. Arguments of a predicate in all the sentences are divided into clusters such that each cluster corresponds to a single semantic role. The better this clustering is, the easier it becomes for a human to give it an actual semantic role label like A0, A1, etc. Our model assigns a role variable to every identified argument. This variable can take any value from 1 to N, where N is the number of semantic roles that we want to induce. The task we model, unsupervised semantic role induction, is step 4 of this pipeline.
3 Monolingual Model
We use the Bayesian model of Garg and Henderson (2012) as our base monolingual model. The semantic roles are predicate-specific. To model the role ordering and repetition preferences, the role inventory for each predicate is divided into Primary and Secondary roles as follows:
Primary Role (PR) Let there be a total of N roles (or clusters) for each predicate. Assign K of them as PRs {P1, P2, ..., PK }. Further, create 3 additional PRs: ST ART denoting the start of the role sequence, EN D denoting its end, and P RED denoting the predicate. These (K + 3) PRs are not allowed to repeat in a frame and their ordering defines the global role ordering.
Secondary Role (SR) The rest of the (N − K) roles are called SRs {S1, S2, ..., SN−K }. Unlike PRs, they are not constrained to occur only once and only their ordering w.r.t. PRs is used in the probability model.
For example, the complete role sequence in a frame could be: (START, P3, S1, S1, PRED, P2, S5, END). The ordering is defined as the sequence of PRs, (START, P3, PRED, P2, END). Each pair of consecutive PRs in an ordering is called an interval. Thus, (P3, PRED) is an interval that contains two SRs, S1 and S1. An interval could also be empty; for instance, (START, P3) contains no SRs. When we evaluate, these roles get mapped to gold roles. For instance, the PR P2 could get mapped to a core role like A0, A1, etc., or to a modifier role like AM-TMP, AM-MOD, etc. Garg and Henderson (2012) reported that, in practice, PRs mostly get mapped to core roles and SRs to modifier roles, which conforms to the linguistic motivations for this distinction. Figure 4 illustrates two copies of the monolingual model, on either side of the crosslingual latent variables. The generative process is as follows:

1. Predicate, Voice The predicate p and its voice vc are treated as top-level visible variables.

2. Ordering (Generate PRs) Select an ordered set of PRs from a multinomial distribution:
o ∼ Multinomial(θ^{order}_{p,vc})

3. Generate SRs For each interval in the ordering o, a sequence of SRs is generated as:
for each interval I ∈ o:
  draw an indicator s ∼ Binomial(θ^{STOP}_{p,I,0})
  while s ≠ STOP:
    choose a SR r ∼ Multinomial(θ^{SR}_{p,I})
    draw an indicator s ∼ Binomial(θ^{STOP}_{p,I,1})

4. Generate Features For each PR and SR, the features for that constituent are generated independently. To keep the model simple and comparable to previous unsupervised work, we only use three features: (i) dependency relation of the argument to its head, (ii) head word of the argument, and (iii) POS tag of the head word:
for each generated role r:
  for each feature type f:
    choose a value v_f ∼ Multinomial(θ^{F}_{p,r,f})

All the multinomial and binomial distributions have symmetric Dirichlet and beta priors respectively. Figure 1a gives the probability equations for the monolingual model:

$$P(\mathbf{r}, \mathbf{f} \mid p, vc) = P(o \mid p, vc) \prod_{I \in o} P(\mathbf{r}(I), \mathbf{f}(I) \mid p, I) \prod_{i:\, r_i \in \mathbf{r} \cap PR} P(f_i \mid r_i, p), \qquad o = \mathrm{ordering}(\mathbf{r}) \quad (1)$$

$$P(\mathbf{r}(I), \mathbf{f}(I) \mid p, I) = \Big( \prod_{i:\, r_i \in \mathbf{r}(I)} P(\neg STOP \mid p, I, adj_i)\, P(r_i \mid p, I)\, P(f_i \mid r_i, p) \Big)\, P(STOP \mid p, I, adj) \quad (2)$$

$$P(f_i \mid r_i, p) = \prod_{t=1}^{T} P(f_{i,t} \mid r_i, p) \quad (3)$$

$$P(\mathbf{f} \mid p, vc) = \sum_{\mathbf{r}} P(\mathbf{r}, \mathbf{f} \mid p, vc) \quad (4)$$

(a) Probability equations for the monolingual model. Bold-faced variables denote a sequence of values. r denotes the complete sequence of roles, and f denotes the complete sequence of features. p and vc denote the predicate and its voice respectively. o denotes the ordering of PRs in the sequence r, and ordering(r) is a function for computing this ordering. r_i and f_i denote the role and features at position i respectively, and r(I) and f(I) respectively denote the SR sequence and feature sequence in interval I. f_{i,t} denotes the value of feature t at position i. adj = 0 for generating the first SR, and 1 for a subsequent one. Equation 1 gives the joint probability of the model and equation 4 gives the marginal probability of the observed features.

$$P(\mathbf{r}^{l_1}, \mathbf{f}^{l_1}, \mathbf{r}^{l_2}, \mathbf{f}^{l_2}, \mathbf{z} \mid p^{l_1}, vc^{l_1}, p^{l_2}, vc^{l_2}) = P(\mathbf{z}) \prod_{l \in \{l_1, l_2\}} P(\mathbf{r}^{l}, \mathbf{f}^{l} \mid \mathbf{z}, p^{l}, vc^{l}) \quad (5)$$

$$\approx P(\mathbf{z}) \prod_{l \in \{l_1, l_2\}} P(\mathbf{r}^{l}, \mathbf{f}^{l} \mid p^{l}, vc^{l}) \prod_{i,k:\, z_k \rightarrow r^{l}_{i}} P(r^{l}_{i} \mid z_k) \quad (6)$$

(a) Probability equations for the multilingual model. The superscript l denotes the variable for language l. z denotes the common crosslingual latent variables for both languages. z_k → r^l_i denotes that the argument at position i in language l is connected to the crosslingual latent variable #k.

This formulation models the global role ordering and repetition preferences using PRs, and limited context for SRs using intervals. Ordering and repetition information was found to be helpful in supervised SRL as well (Punyakanok et al., 2004; Pradhan et al., 2005; Toutanova et al., 2008). More details, including the motivations behind this model, are in (Garg and Henderson, 2012).
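To make the ordering and interval notions used in equations (1)-(2) concrete, the following small sketch (ours, not code from the cited systems) splits a role sequence into its PR ordering and the SRs contained in each interval, using the example sequence from the text:

```python
# Illustrative sketch (not from the original system): compute ordering(r) and
# the SR contents of each interval for the example role sequence above.

def split_roles(role_seq, primary_roles):
    """Return the PR ordering and a dict mapping each interval to its SRs."""
    ordering = [r for r in role_seq if r in primary_roles]
    intervals, current_pr, buffer = {}, None, []
    for r in role_seq:
        if r in primary_roles:
            if current_pr is not None:
                intervals[(current_pr, r)] = buffer
            current_pr, buffer = r, []
        else:
            buffer.append(r)  # a secondary role inside the current interval
    return ordering, intervals

primary = {"START", "P1", "P2", "P3", "PRED", "END"}
seq = ["START", "P3", "S1", "S1", "PRED", "P2", "S5", "END"]
ordering, intervals = split_roles(seq, primary)
print(ordering)                    # ['START', 'P3', 'PRED', 'P2', 'END']
print(intervals[("P3", "PRED")])   # ['S1', 'S1']
print(intervals[("START", "P3")])  # [] -- an empty interval
```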
4 Multilingual Model
The multilingual model uses word alignments between sentences in a parallel corpus to exploit role correspondences across languages. We make copies of the monolingual model for each language and add additional crosslingual latent variables (CLVs) to couple the monolingual models, capturing crosslingual semantic role patterns. Concretely, when training on parallel sentences, whenever the head words of the arguments are aligned, we add a CLV as a parent of the two corresponding role variables. Figure 4 illustrates this model. The generative process, as explained below, remains the same as the monolingual model for the most part, with the exception of aligned roles which are now generated by both the monolingual process as well as the CLV.
Monolingual Data Given a parallel frame with the predicate pair p1, p2, generate two separate monolingual frames as in section 3.
Aligned Arguments
For each aligned argument, first generate a crosslingual latent variable from a Chinese Restaurant Process (CRP). Then generate the two aligned roles:
for aligned arguments i, j:
  draw a crosslingual latent variable: z ∼ CRP(α^{CRP}_{p1,p2})
  draw role for language l1: r_i ∼ Multinomial(θ^{align}_{p1,p2,z,l1})
  draw role for language l2: r_j ∼ Multinomial(θ^{align}_{p1,p2,z,l2})

Every predicate-tuple has its own inventory of CLVs specific to that tuple. Each CLV z is a multi-valued variable where each value defines a distribution over role labels for each language (denoted by θ^{align}_{p1,p2,z,l} above). These distributions over labels are trained to be peaky, so that each value c for a CLV represents a correlation between the labels that c predicts in the two languages. For example, a value c for the CLV z might give high probabilities to S3 and S8 in language 1, and to S1 in language 2. If c is the only value for z that gives high probability to S3 in language 1, and the monolingual model in language 1 decides to assign S3 to the role for z, then z will predict S1 in language 2, with high probability. We generate the CLVs via a Chinese Restaurant Process (Pitman, 2002), a non-parametric Bayesian model, which allows us to induce the number of CLVs for every predicate-tuple from the data. We continue to train on the non-parallel sentences using the respective monolingual models.

Figure 4: Multilingual model. The CLVs and their associated parameters are drawn in bold. PR3 in language 1 is aligned to PR3 in language 2 with the corresponding CLV taking the value c_2, and SRM is aligned to PR2 with the CLV taking the value c_7.
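As an illustration of how the number of CLV values is induced, the CRP draw for a newly aligned argument pair can be sketched as below; the concentration parameter alpha and the bookkeeping are our assumptions for illustration, not the authors' implementation.

```python
import random
from collections import Counter

# Illustrative CRP sketch: sample a CLV value for a newly aligned argument pair
# of one predicate tuple, given the CLV values already assigned for that tuple.
# alpha is an assumed concentration parameter (alpha^CRP_{p1,p2} in the text).
def sample_clv(existing_clvs, alpha=1.0):
    counts = Counter(existing_clvs)
    values = list(counts)
    # reuse value c with probability proportional to its count; open a brand-new
    # value with probability proportional to alpha
    weights = [counts[c] for c in values] + [alpha]
    choice = random.choices(values + ["NEW"], weights=weights, k=1)[0]
    if choice == "NEW":
        return max(existing_clvs, default=0) + 1
    return choice

print(sample_clv([1, 1, 2], alpha=0.5))  # usually 1, sometimes 2 or a new value 3
```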
The multilingual model is deficient, since the aligned roles are being generated twice. Ideally, we would like to add the CLVs as additional conditioning variables in the monolingual models. The new joint probability can be written as equation 5 (Figure 2a), which can be further decomposed following the decomposition of the monolingual model in Figure 1a. However, having this additional conditioning variable breaks the Dirichlet-multinomial conjugacy, which makes it intractable to marginalize out the parameters during inference. Hence, we use an approximation where we treat each of the aligned roles as being generated twice, once by the monolingual model and once by the corresponding CLV (equation 6).
This is the first work to incorporate the coupling of aligned arguments directly in a Bayesian SRL model. This makes it easier to see how to extend this model in a principled way to incorporate additional sources of information. First, the model scales gracefully to more than two languages. If there are a total of n languages, and there is an aligned argument in m of them, the multilingual latent variable is connected to only those m aligned arguments.
Second, having one joint Bayesian model allows us to use the same model in various semi-supervised learning settings, just by fixing the annotated variables during training. Section 6.6 evaluates a setting where we have some labeled data in one language (called source), but no labeled data in the second language (called target). Note that this is different from a classic annotation projection setting (e.g. (Padó and Lapata, 2009)), where the role labels are mapped from source constituents to aligned target constituents.
5 Inference and Training
The inference problem consists of predicting the role labels and CLVs (the hidden variables) given the predicate, its voice, and syntactic features of all the identified arguments (the visible variables). We use a collapsed Gibbs-sampling based approach to generate samples for the hidden variables (model parameters are integrated out). The sample counts and the priors are then used to calculate the MAP estimate of the model parameters.
For the monolingual model, the role at a given position is sampled as:
$$P(r_i \mid \mathbf{r}_{-i}, \mathbf{f}, p, vc, D^{-}) \propto P(r_i, \mathbf{r}_{-i}, \mathbf{f} \mid p, vc, D^{-}) = \int P(r_i, \mathbf{r}_{-i}, \mathbf{f} \mid \theta, p, vc)\, P(\theta \mid D^{-})\, d\theta$$
where the subscript −i refers to all the variables except at position i, D − refers to the variables in all the training instances except the current one, and θ refers to all the model parameters. The above integral has a closed form solution due to Dirichlet-multinomial conjugacy.
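Concretely, conjugacy means each factor in the integral collapses to a ratio of counts plus symmetric pseudo-counts. The sketch below shows such a collapsed update for one argument; it is a simplification (the role-prior factor stands in for the ordering/interval terms of the full model), and the count-table layout is assumed for illustration only.

```python
import random

# Simplified collapsed-Gibbs sketch: resample one argument's role from
# P(role | features, all other variables), with each factor a collapsed
# Dirichlet-multinomial predictive probability (count + symmetric prior).
def resample_role(features, role_counts, feat_counts, num_roles,
                  num_feat_values, prior=0.1):
    weights = []
    for role in range(num_roles):
        w = role_counts.get(role, 0) + prior   # stand-in for the ordering terms
        for t, v in enumerate(features):
            n_rv = feat_counts.get((role, t, v), 0)
            n_r = sum(c for (r, tt, _), c in feat_counts.items()
                      if r == role and tt == t)
            w *= (n_rv + prior) / (n_r + prior * num_feat_values[t])
        weights.append(w)
    return random.choices(range(num_roles), weights=weights, k=1)[0]

# e.g. 3 roles, one feature type (dependency relation) with 4 possible values
print(resample_role(("SBJ",), {0: 5, 1: 2, 2: 1},
                    {(0, 0, "SBJ"): 4, (1, 0, "OBJ"): 2}, 3, [4]))
```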
For sampling roles in the multilingual model, we also need to consider the probabilities of roles being generated by the CLVs:
$$P(r_i \mid \mathbf{r}_{-i}, \mathbf{f}, p, vc, \mathbf{z}, D^{-}) \propto P(r_i, \mathbf{r}_{-i}, \mathbf{f} \mid \mathbf{z}, p, vc, D^{-}) = \int P(r_i, \mathbf{r}_{-i}, \mathbf{f} \mid \theta, \mathbf{z}, p, vc)\, P(\theta \mid D^{-})\, d\theta = \int P(r_i, \mathbf{r}_{-i}, \mathbf{f} \mid \theta, p, vc) \Big( \prod_{j,k:\, z_k \rightarrow r_j} P(r_j \mid \theta, z_k) \Big) P(\theta \mid D^{-})\, d\theta$$
For sampling CLVs, we need to consider three factors: two corresponding to probabilities of generating the aligned roles, and the third one corresponding to selecting the CLV according to CRP.
$$P(z_k \mid r^{l_1}_i, r^{l_2}_j, D^{-,k}) \propto P(r^{l_1}_i \mid z_k, D^{-,k})\, P(r^{l_2}_j \mid z_k, D^{-,k})\, P(z_k \mid D^{-,k})$$
where the aligned roles r^{l_1}_i and r^{l_2}_j are connected to z_k, and D^{-,k} refers to all the variables except z_k, r^{l_1}_i, and r^{l_2}_j. We use the trained parameters to parse the monolingual data using the monolingual model. The crosslingual parameters are ignored even if they were used during training. Thus, the information coming from the CLVs acts as a regularizer for the monolingual models.
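Putting the three factors together, a CLV resampling step might look like the following sketch; the emission-count bookkeeping and parameter names are assumptions made for illustration, not the authors' code.

```python
import random
from collections import Counter

# Illustrative sketch: resample one CLV value by combining its CRP prior with
# the two role-emission factors (one per language), each a collapsed
# Dirichlet-multinomial predictive probability.
def resample_clv(role_l1, role_l2, other_clvs, emit_counts,
                 alpha=1.0, prior=0.1, num_roles=21):
    table_counts = Counter(other_clvs)
    candidates = list(table_counts) + ["NEW"]
    weights = []
    for z in candidates:
        w = alpha if z == "NEW" else table_counts[z]        # CRP factor
        for lang, role in (("l1", role_l1), ("l2", role_l2)):
            n_zr = emit_counts.get((z, lang, role), 0)
            n_z = sum(c for (zz, ll, _), c in emit_counts.items()
                      if zz == z and ll == lang)
            w *= (n_zr + prior) / (n_z + prior * num_roles)  # emission factors
        weights.append(w)
    return random.choices(candidates, weights=weights, k=1)[0]
```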
6 Experiments
6.1 Evaluation
Following the setting of Titov and Klementiev (2012b), we evaluate only on the arguments that were correctly identified, as the incorrectly identified arguments do not have any gold semantic labels. Evaluation is done using the metric proposed by Lang and Lapata (2011a), which has 3 components: (i) Purity (PU) measures how well an induced cluster corresponds to a single gold role, (ii) Collocation (CO) measures how well a gold role corresponds to a single induced cluster, and (iii) F1 is the harmonic mean of PU and CO. For each predicate, let N denote the total number of argument instances, C i the instances in the induced cluster i, and G j the instances having label j in gold annotations.
$$PU = \frac{1}{N} \sum_i \max_j |C_i \cap G_j|, \qquad CO = \frac{1}{N} \sum_j \max_i |C_i \cap G_j|, \qquad F1 = \frac{2 \cdot PU \cdot CO}{PU + CO}.$$
The score for each predicate is weighted by the number of its argument instances, and a weighted average is computed over all the predicates.
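A small reference implementation of these scores for a single predicate is given below (our own illustration); the per-predicate weighting by argument counts described above would then be applied when averaging over predicates.

```python
from collections import Counter

# Purity (PU), collocation (CO) and F1 for one predicate.
# `induced` and `gold` map each argument instance id to its cluster / gold label.
def cluster_scores(induced, gold):
    n = len(gold)
    clusters, gold_classes = {}, {}
    for inst, c in induced.items():
        clusters.setdefault(c, []).append(inst)
    for inst, g in gold.items():
        gold_classes.setdefault(g, []).append(inst)
    pu = sum(max(Counter(gold[i] for i in insts).values())
             for insts in clusters.values()) / n
    co = sum(max(Counter(induced[i] for i in insts).values())
             for insts in gold_classes.values()) / n
    f1 = 2 * pu * co / (pu + co) if pu + co else 0.0
    return pu, co, f1

induced = {1: "c1", 2: "c1", 3: "c2", 4: "c2"}
gold = {1: "A0", 2: "A0", 3: "A1", 4: "A0"}
print(cluster_scores(induced, gold))  # (0.75, 0.75, 0.75)
```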
6.2 Baseline
We use the same baseline as used by Lang and Lapata (2011a) which has been shown to be difficult to outperform. This baseline assigns a semantic role to a constituent based on its syntactic function, i.e. the dependency relation to its head. If there is a total of N clusters, (N − 1) most frequent syntactic functions get a cluster each, and the rest are assigned to the N th cluster.
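In code, this baseline is just a frequency cutoff over dependency relations; a minimal sketch (ours, for illustration):

```python
from collections import Counter

# Syntactic-function baseline sketch: the N-1 most frequent dependency relations
# each get their own cluster; every remaining relation is mapped to the single
# catch-all N-th cluster.
def baseline_clusters(dep_relations, n_clusters=21):
    top = [rel for rel, _ in Counter(dep_relations).most_common(n_clusters - 1)]
    cluster_of = {rel: i for i, rel in enumerate(top)}
    return [cluster_of.get(rel, n_clusters - 1) for rel in dep_relations]

rels = ["SBJ", "OBJ", "OBJ", "ADV", "SBJ", "OPRD", "TMP"]
# SBJ and OBJ get their own clusters; ADV, OPRD and TMP share the catch-all one
print(baseline_clusters(rels, n_clusters=3))
```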
6.3 Closest Previous Work
This work is closely related to the cross-lingual unsupervised SRL work of Titov and Klementiev (2012b). Their model has separate monolingual models for each language and an extra penalty term which tries to maximize P(r^{l_2} | r^{l_1}) and P(r^{l_1} | r^{l_2}), i.e., for all the aligned arguments with role label r^{l_1} in language 1, it tries to find a role label r^{l_2} in language 2 such that the given proportion is maximized, and vice versa. However, there is no efficient way to optimize the objective with this penalty term, and the authors used an inference method similar to annotation projection. Further, the method does not scale naturally to more than two languages. Their algorithm first does monolingual inference in one language ignoring the penalty, and then does the inference in the second language taking into account the penalty term. In contrast, our model adds the latent variables as a part of the model itself, and not as an external penalty, which enables us to use standard Bayesian learning methods such as sampling.
The monolingual model we use (Garg and Henderson, 2012) also has two main advantages over Titov and Klementiev (2012b). First, the former incorporates a global role ordering probability that is missing in the latter. Secondly, the latter defines argument-keys as a tuple of four syntactic features, and all the arguments having the same argument-key are assigned the same role. This kind of hard clustering is avoided in the former model, where two constituents having the same set of features might get assigned different roles if they appear in different contexts.
6.4 Data
Following Titov and Klementiev (2012b), we run our experiments on the English (EN) and German (DE) sections of the CoNLL 2009 corpus (Hajič et al., 2009), and EN-DE section of the Europarl corpus (Koehn, 2005). We get about 40k EN and 36k DE sentences from the CoNLL 2009 training set, and about 1.5M parallel EN-DE sentences from Europarl. For appropriate comparison, we keep the same setting as in (Titov and Klementiev, 2012b) for automatic parses and ar-gument identification, which we briefly describe here. The EN sentences are parsed syntactically using MaltParser (Nivre et al., 2007) and DE using LTH parser (Johansson and Nugues, 2008). All the non-auxiliary verbs are selected as predicates. In CoNLL data, this gives us about 3k EN and 500 DE predicates. The total number of predicate instances are 3.4M in EN (89k CoNLL + 3.3M Europarl) and 2.62M in DE (17k CoNLL + 2.6M Europarl). The arguments for EN are identified using the heuristics proposed by Lang and Lapata (2011a). However, we get an F1 score of 85.1% for argument identification on CoNLL 2009 EN data as opposed to 80.7% reported by Titov and Klementiev (2012b). This could be due to implementation differences, which unfortunately makes our EN results incomparable. For DE, the arguments are identified using the LTH system (Johansson and Nugues, 2008), which gives an F1 score of 86.5% on the CoNLL 2009 DE data. The word alignments for the EN-DE parallel Europarl corpus are computed using GIZA++ (Och and Ney, 2003). For high-precision, only the intersecting alignments in the two directions are kept. We define two semantic arguments as aligned if their head-words are aligned. In total we get 9.3M arguments for EN (240k CoNLL + 9.1M Europarl) and 4.43M for DE (32k CoNLL + 4.4M Europarl). Out of these, 0.76M arguments are aligned.
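The argument-alignment criterion (aligned head words under the intersected GIZA++ alignments) can be written as a small helper; the data structures below are our assumptions for illustration, not the actual preprocessing code.

```python
# Keep only alignment links present in both GIZA++ directions, then call two
# arguments "aligned" iff their head words are linked.
def intersect_alignments(src_to_tgt, tgt_to_src):
    return {(i, j) for (i, j) in src_to_tgt if (j, i) in tgt_to_src}

def aligned_argument_pairs(en_args, de_args, word_links):
    """en_args / de_args map an argument id to the index of its head word."""
    return [(a_en, a_de)
            for a_en, h_en in en_args.items()
            for a_de, h_de in de_args.items()
            if (h_en, h_de) in word_links]

links = intersect_alignments({(0, 0), (2, 3), (3, 1)}, {(0, 0), (3, 2), (1, 3)})
print(aligned_argument_pairs({"en_arg1": 0, "en_arg2": 2},
                             {"de_arg1": 0, "de_arg2": 3}, links))
# [('en_arg1', 'de_arg1'), ('en_arg2', 'de_arg2')]
```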
6.5 Main Results
Since the CoNLL annotations have 21 semantic roles in total, we use 21 roles in our model as well as the baseline. Following Garg and Henderson (2012), we set the number of PRs to 2 (excluding START, END and PRED), and SRs to 21 − 2 = 19. Table 1 shows the results.
In the first setting (Line 1), we train and test the monolingual model on the CoNLL data. We observe significant improvements in F1 score over the Baseline (Line 0) in both languages. Using the CoNLL 2009 dataset alone, Titov and Klementiev (2012b) report an F1 score of 80.9% (PU=86.8%, CO=75.7%) for German. Thus, our monolingual model outperforms their monolingual model in German. For English, they report an F1 score of 83.6% (PU=87.5%, CO=80.1%), but note that our English results are not directly comparable to theirs due to differences in argument identification, as discussed in section 6.4. As their argument identification score is lower, perhaps their system is discarding "difficult" arguments, which leads to a higher clustering score.
In the second setting (Line 2), we use the additional monolingual Europarl (EP) data for training. We get equivalent results in English and a significant improvement in German compared to our previous setting (Line 1). The German dataset in CoNLL is quite small and benefits from the additional EP training data. In contrast, the English model is already quite good due to a relatively big dataset from CoNLL, and good accuracy syntactic parsers. Unfortunately, Titov and Klementiev (2012b) do not report results with this setting.
The third setting (Line 3) gives the results of our multilingual model, which adds the word alignments in the EP data. Comparing with Line 2, we get non-significant improvements in both languages. Titov and Klementiev (2012b) obtain an F1 score of 82.7% (PU=85.0%, CO=80.6%) for German, and 83.7% (PU=86.8%, CO=80.7%) for English. Thus, for German, our multilingual Bayesian model is able to capture the cross-lingual patterns at least as well as the external penalty term in (Titov and Klementiev, 2012b). We cannot compare the English results unfortunately due to differences in argument identification.
We also compared monolingual and bilingual training data using a setting that emulates the standard supervised setup of separate training and test data sets. We train only on the EP dataset and test on the CoNLL dataset. Lines 4 and 5 of Table 1 give the results. The multilingual model obtains small improvements in both languages, which confirms the results from the standard unsupervised setup, comparing lines 2 to 3.
These results indicate that little information can be learned about semantic roles from this parallel data setup. One possible explanation for this result is that the setup itself is inadequate. Given the definition of aligned arguments, only 8% of English arguments and 17% of German arguments are aligned. This plus our experiments suggest that improving the alignment model is a necessary step to making effective use of parallel data in multilingual SRI, for example by joint modeling with SRI. We leave this exploration to future work.
6.6 Multilingual Training with Labeled Data for One Language
Another motivation for jointly modeling SRL in multiple languages is the transfer of information from a resource rich language to a resource poor language. We evaluated our model in a very general annotation transfer scenario, where we have a small labeled dataset for one language (source), and a large parallel unlabeled dataset for the source and another (target) language. We investigate whether this setting improves the parameter estimates for the target language. To this end, we clamp the role annotations of the source language in the CoNLL dataset using a predefined mapping 1 , and do not sample them during training. This data gives us good parameters for the source language, which are used to sample the roles of the source language in the unlabeled Europarl data. The CLVs aim to capture this improvement and thereby improve sampling and parameter estimates for the target language. Table 2 shows the results of this experiment. We obtain small improvements in the target languages. As in the unsupervised setting, the small percentage of aligned roles probably limits the impact of the cross-lingual information.
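Operationally, clamping just means that role variables covered by source-language annotations are fixed to their mapped labels and skipped by the sampler, while still contributing to the counts it uses; a schematic sketch with made-up data structures follows.

```python
# Schematic sketch of clamping annotated roles during Gibbs sampling: clamped
# roles are never resampled, but they still enter the count tables used by the
# collapsed sampler (count bookkeeping not shown).
def gibbs_sweep(arguments, resample_fn):
    for arg in arguments:
        if arg["clamped"]:
            continue                      # annotated (source-language) role
        arg["role"] = resample_fn(arg)    # unannotated role: resampled as usual

args = [
    {"id": 1, "role": "P1", "clamped": True},   # e.g. a gold core role mapped to a PR
    {"id": 2, "role": None, "clamped": False},
]
gibbs_sweep(args, resample_fn=lambda a: "S3")
print(args)
```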
6.7 Labeled Data in Monolingual Model
We explored the improvement in the monolingual model in a semi-supervised setting. To this end, we randomly selected S% of the sentences in the CoNLL dataset as "supervised sentences" and the rest (100−S)% were kept unsupervised. Next, we clamped the role labels of the supervised sentences 1 A0 was mapped to the primary role P1, A1 to P2, and the rest were mapped to the secondary roles (S1, ..., S19) in the order of their decreasing frequency. using the predefined mapping from Section 6.6. Sampling was done on the unsupervised sentences as usual. We then measured the clustering performance using the trained parameters. 2 To access the contribution of partial supervision better, we constructed a "supervised baseline" as follows. For predicates seen in the supervised sentences, a MAP estimate of the parameters was calculated using the predefined mapping. For the unseen predicates, the standard baseline was used. Figures 5a and 5b show the performance varia-tion with S. We make the following observations:
• In both languages, at around S = 10, the supervised baseline starts outperforming the semisupervised model, which suggests that manually labeling about 10% of the sentences is a good enough alternative to our training procedure. Note that 10% amounts to about 3.6k sentences in German and 4k in English. We noticed that the proportion of seen predicates increases dramatically as we increase the proportion of supervised sentences. At 10% supervised sentences, the model has already seen 63% of predicates in German and 44% in English. This explains to some extent why only 10% labeled sentences are enough.
• For German, it takes about 3.5% or 1260 supervised sentences to have the same performance increase as 1.5M unlabeled sentences (Line 1 to Line 2 in Table 1). Adding about 180 more supervised sentences also covers the benefit obtained by alignments in the multilingual model (Line 2 to Line 3 in Table 1). There is no noticeable performance difference in English.
We also evaluated the performance variation on a completely unseen CoNLL test set. Since the test set is very small compared to the training set, the clustering evaluation is not as reliable. Nonetheless, we broadly obtained the same pattern.
7 Related Work
As discussed in section 6.3, our work is closely related to the crosslingual unsupervised SRL work of Titov and Klementiev (2012b). The idea of using superlingual latent variables to capture crosslingual information was proposed for POS tagging by Naseem et al. (2009), which we use here for SRL. In a semi-supervised setting, Padó and Lapata (2009) used a graph-based approach to transfer semantic role annotations from English to German. Fürstenau and Lapata (2009) used a graph alignment method to measure the semantic and syntactic similarity between dependency tree arguments of known and unknown verbs. For monolingual unsupervised SRL, Swier and Stevenson (2004) presented the first work on a domain-general corpus, the British National Corpus, using 54 verbs taken from VerbNet. Garg and Henderson (2012) proposed a Bayesian model for this problem that we use here. Titov and Klementiev (2012a) also proposed a closely related Bayesian model. Grenager and Manning (2006) proposed a generative model, but their parameter space consisted of all possible linkings of syntactic constituents and semantic roles, which made unsupervised learning difficult, and a separate language-specific rule-based method had to be used to constrain this space. Other proposed models include an iterative split-merge algorithm (Lang and Lapata, 2011a) and a graph-partitioning based approach (Lang and Lapata, 2011b). Màrquez et al. (2008) provide a good overview of supervised SRL systems.
8 Conclusions
We propose a Bayesian model of semantic role induction (SRI) that uses crosslingual latent variables to capture role alignments in parallel corpora. The crosslingual latent variables capture correlations between roles in different languages, and regularize the parameter estimates of the monolingual models. Because this is a joint Bayesian model of multilingual SRI, we can apply the same model to a variety of training scenarios just by changing the inference procedure appropriately. We evaluate monolingual SRI with a large unlabeled dataset, bilingual SRI with a parallel corpus, bilingual SRI with annotations available for the source language, and monolingual SRI with a small labeled dataset. Increasing the amount of monolingual unlabeled data significantly improves SRI in German but not in English. Adding word alignments in parallel sentences results in small, non-significant improvements, even if there is some labeled data available in the source language. This difficulty in showing the usefulness of parallel corpora for SRI may be due to the current assumptions about role alignments, which mean that only a small percentage of roles are aligned. Further analysis reveals that annotating small amounts of data can easily outperform the performance gains obtained by adding a large unlabeled dataset as well as adding parallel corpora.
Future work includes training on different language pairs, on more than two languages, and with more inclusive models of role alignment.
Figure 3: Probability equations for the (a) monolingual and (b) multilingual model.
Figure 5: F1 with a portion of the data labeled.
To account for the randomness in selecting the supervised sentences, the experiment was repeated 10 times and average of the performance numbers was taken.
Acknowledgments
This work was funded by the Swiss NSF grant 200021 125137 and EC FP7 grant PARLANCE.
References
H. Fürstenau and M. Lapata. 2009. Graph alignment for semi-supervised semantic role labeling. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1, pages 11-20. Association for Computational Linguistics.
N. Garg and J. Henderson. 2012. Unsupervised semantic role induction with global role ordering. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics.
D. Gildea and D. Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245-288.
T. Grenager and C.D. Manning. 2006. Unsupervised discovery of a statistical verb lexicon. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 1-8. Association for Computational Linguistics.
J. Hajič, M. Ciaramita, R. Johansson, D. Kawahara, M.A. Martí, L. Màrquez, A. Meyers, J. Nivre, S. Padó, J. Štěpánek, et al. 2009. The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-18. Association for Computational Linguistics.
R. Johansson and P. Nugues. 2008. Dependency-based semantic role labeling of PropBank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 69-78. Association for Computational Linguistics.
P. Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit, volume 5.
J. Lang and M. Lapata. 2011a. Unsupervised semantic role induction via split-merge clustering. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, Portland, Oregon.
J. Lang and M. Lapata. 2011b. Unsupervised semantic role induction with graph partitioning. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1320-1331, Edinburgh, Scotland, UK, July. Association for Computational Linguistics.
L. Màrquez, X. Carreras, K.C. Litkowski, and S. Stevenson. 2008. Semantic role labeling: an introduction to the special issue. Computational Linguistics, 34(2):145-159.
T. Naseem, B. Snyder, J. Eisenstein, and R. Barzilay. 2009. Multilingual part-of-speech tagging: Two unsupervised approaches. Journal of Artificial Intelligence Research, 36(1):341-385.
J. Nivre, J. Hall, J. Nilsson, A. Chanev, G. Eryigit, S. Kubler, S. Marinov, and E. Marsi. 2007. MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95.
F.J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.
S. Padó and M. Lapata. 2009. Cross-lingual annotation projection for semantic roles. Journal of Artificial Intelligence Research, 36(1):307-340.
M. Palmer, D. Gildea, and P. Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106.
J. Pitman. 2002. Combinatorial stochastic processes. Technical Report 621, Dept. Statistics, UC Berkeley. Lecture notes for St. Flour course.
S. Pradhan, K. Hacioglu, V. Krugler, W. Ward, J.H. Martin, and D. Jurafsky. 2005. Support vector learning for semantic argument classification. Machine Learning, 60(1):11-39.
V. Punyakanok, D. Roth, W. Yih, and D. Zimak. 2004. Semantic role labeling via integer linear programming inference. In Proceedings of the 20th International Conference on Computational Linguistics, page 1346. Association for Computational Linguistics.
B. Snyder, T. Naseem, and R. Barzilay. 2009. Unsupervised multilingual grammar induction. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1, pages 73-81. Association for Computational Linguistics.
R. Swier and S. Stevenson. 2004. Unsupervised semantic role labelling. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 95-102.
I. Titov and A. Klementiev. 2012a. A Bayesian approach to unsupervised semantic role induction. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics.
I. Titov and A. Klementiev. 2012b. Crosslingual induction of semantic roles. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
K. Toutanova, A. Haghighi, and C.D. Manning. 2008. A global joint model for semantic role labeling. Computational Linguistics, 34(2):161-191.
| [] |
[
"UNIVERSAL TOPOLOGICAL REGULARITIES OF SYNTACTIC STRUCTURES: DECOUPLING EFFICIENCY FROM OPTIMIZATION A PREPRINT",
"UNIVERSAL TOPOLOGICAL REGULARITIES OF SYNTACTIC STRUCTURES: DECOUPLING EFFICIENCY FROM OPTIMIZATION A PREPRINT"
] | [
"Fermín Moscoso \nDepartment of Language and Communication & Center for Language Studies\nRadboud University\nErasmuslaan 16525 NLNijmegenThe Netherlands\n",
"Del Prado Martín \nDepartment of Language and Communication & Center for Language Studies\nRadboud University\nErasmuslaan 16525 NLNijmegenThe Netherlands\n"
] | [
"Department of Language and Communication & Center for Language Studies\nRadboud University\nErasmuslaan 16525 NLNijmegenThe Netherlands",
"Department of Language and Communication & Center for Language Studies\nRadboud University\nErasmuslaan 16525 NLNijmegenThe Netherlands"
] | [] | Human syntactic structures are usually represented as graphs. Much research has focused on the mapping between such graphs and linguistic sequences, but less attention has been paid to the shapes of the graphs themselves: their topologies. This study investigates how the topologies of syntactic graphs reveal traces of the processes that led to their emergence. I report a new universal regularity in syntactic structures: Their topology is communicatively efficient above chance. The pattern holds, without exception, for all 124 languages studied, across linguistic families and modalities (spoken, written, and signed). This pattern can arise from a process optimizing for communicative efficiency or, alternatively, by construction, as a by-effect of a sublinear preferential attachment process reflecting language production mechanisms known from psycholinguistics. This dual explanation shows how communicative efficiency, per se, does not require optimization. Among the two options, efficiency without optimization offers the better explanation for the new pattern. | 10.48550/arxiv.2302.00129 | [
"https://export.arxiv.org/pdf/2302.00129v1.pdf"
] | 256,459,289 | 2302.00129 | ed0a3f67cbd40d77e3c4d8c65e4d44f1b999ef21 |
UNIVERSAL TOPOLOGICAL REGULARITIES OF SYNTACTIC STRUCTURES: DECOUPLING EFFICIENCY FROM OPTIMIZATION

A PREPRINT

Fermín Moscoso del Prado Martín
Department of Language and Communication & Center for Language Studies
Radboud University
Erasmuslaan 1, 6525 NL Nijmegen, The Netherlands

Keywords: Syntax · Sublinear Preferential Attachment · Communicative Efficiency · Dependency Grammar

Human syntactic structures are usually represented as graphs. Much research has focused on the mapping between such graphs and linguistic sequences, but less attention has been paid to the shapes of the graphs themselves: their topologies. This study investigates how the topologies of syntactic graphs reveal traces of the processes that led to their emergence. I report a new universal regularity in syntactic structures: Their topology is communicatively efficient above chance. The pattern holds, without exception, for all 124 languages studied, across linguistic families and modalities (spoken, written, and signed). This pattern can arise from a process optimizing for communicative efficiency or, alternatively, by construction, as a by-effect of a sublinear preferential attachment process reflecting language production mechanisms known from psycholinguistics. This dual explanation shows how communicative efficiency, per se, does not require optimization. Among the two options, efficiency without optimization offers the better explanation for the new pattern.
Introduction
Human languages map linear sequences of symbols (e.g., sounds, letters, words, etc.) into more elaborate, non-sequential syntactic structures. Linguists have proposed very diverse formalisms for representing these structures, and for their mapping into the linear sequences. Virtually all linguistic theories -dating as far back as Pāṇini [1] around the 4th century BCE- coincide in using structures that can be modelled as graphs. The meaning of the graph vertices and edges varies depending on the specific theory, ranging from the derivation trees of Generative Grammar paradigms [e.g., 2, 3] to the directed graphs used in Dependency Grammar formalisms [e.g., 4]. An important distinction can be drawn here between the topology of the syntactic graphs themselves on the one hand, and their mapping into linguistic sequences -their linearization- on the other [4]. Psychologically, the distinction between the topology of the graphs and their linearization is mirrored by separate stages of function assignment and positional processing during grammatical encoding in language production [5,6].
Cognitive and functional linguists have long hypothesized that syntactic structures in human languages are particularly efficient for communication [e.g., 7,8,9]. In recent years, several studies have provided empirical support for the communicative efficiency of different linguistic aspects, using large-scale data across many languages [10,11,12,13,14,15,16,17,18,19,20,21,22,23]. Of these studies, those investigating syntactic structures have mainly focused on the emergence of specific properties in the linearization of the structures, taking their actual topologies as a given. These include: the sequential distance between words linked by a syntactic relation [24,11,12,25,17], the tendency to avoid link crossings within the graphs [11,25], and how communicative efficiency leads to the emergence of word or morpheme orderings and/or choices that are common across languages [12,13,14,21,22]. In contrast, the topology itself of the syntactic graphs has received comparatively little empirical attention. In this direction, using insights from Network Science, researchers have noticed that the topology of 'global' syntactic graphs (i.e., large graphs merging the syntactic relations across all utterances in a corpus) exhibits properties of small-world/scale-free networks [26,27,28,29], but such properties do not extend to individual utterances [26].
Many studies interpret the communicative efficiency of linguistic structure as direct evidence that such structures arise through processes that optimize for such efficiency [e.g., 10,11,12,13,14,15,16,17,18,19,21,22,23], rarely even mentioning -let alone comparing with- any alternatives to this optimization. Furthermore, the precise nature of the hypothesized optimization processes remains rather vague. In this respect, most authors point to some -unspecified- selection process operating at either evolutionary, historical, or developmental time scales. In contrast, it has been found that -for some linguistic properties- the communicative efficiency itself may be epiphenomenal, if at all present: What appear to be optimized structures might in fact be just the most likely outcomes irrespective of any optimization [30,31,32,33,34], or may be side effects of known psycholinguistic mechanisms [34,35]. It is fairly clear that -as is argued by the studies above- higher communicative efficiency can indeed result from optimizing efficiency measures. However, the studies seem to overlook that efficiency might just as well be the cause that enabled a basic mechanism to be co-opted for language use, i.e., the mechanism survived because it turned out to result in efficient communication. This is especially relevant for graph structures, as it is known that efficiency in graphs can arise both by optimization and by construction [10,36,37].
Measuring the communicative efficiency of a graph's topology.
In order to benefit from existing corpora [38], I represent syntactic structures using Dependency Grammar [4]. In this formalism, syntactic structures are graphs such as that in Fig. 1A. The graph's vertices are labelled with the words or morphemes from the sentence, and its edges denote syntactic relations between pairs of words. From an information-theoretical perspective, efficient linguistic structures can be viewed as minimizing an energy function Ω weighting the cost of the structure from the point of view of the speaker and from that of the listener [10],
$$\Omega = \rho\, C_{listener} + (1 - \rho)\, C_{speaker}, \quad (1)$$
where C listener , C speaker reflect the cost of the structure from the perspective of the listener and of the speaker, respectively, and 0 < ρ < 1 indexes the relative importance given to each of those costs (i.e., values of ρ > .5, prioritize minimizing the cost for the listener, and ρ < .5 gives more importance to the cost for the speaker). This model has been successfully applied to account for a variety of aspects of language structure [10,15,16,18,20,21,22]. Applying this model to the topologies of syntactic structures requires defining adequate measures for both C listener and C speaker .
For other aspects of linguistic structure [10,13,14,21,22], C speaker has been found to be proportional to the Shannon entropy [39] of the specific structure, which is an index of the structure's heterogeneity and unpredictability. The corresponding measure of entropy for graphs is the entropy of the degree distribution [37], and more specifically for directed trees, the entropy of the out-degree distribution. In a dependency tree with N vertices, let k be the number of edges that leave a vertex (i.e., the vertex's out-degree). One can define a probability distribution p(k) over the values of k in a dependency tree (see Fig. 1C). The entropy of the out-degree distribution is the Shannon entropy over p(k), which is then a measure of the cost of producing the topology of the associated syntactic structure,
$$h_{deg} = -\sum_{k=0}^{N-1} p(k)\,\log_2 p(k) \;\propto\; C_{speaker}. \quad (2)$$
From the listener's standpoint, the cost should be driven by the difficulty of accurately reconstructing the network's topology from the linear linguistic message. It has been found that star trees are optimal in this respect [10,37]. More generally, one can expect that such difficulty should be inversely related to the structure's overall degree of coherence: How closely related are the words/concepts expressed by the network? For graphs, this is indexed by their longest or average path lengths, which are minimal for star trees. Furthermore, the communication channel is noisy. Minimizing the effect that transmission errors might have on the reconstructed structure is of crucial importance for the listener. The impact of such errors on graph topologies is measured by the network's robustness. From the undirected skeleton of the dependency structure, that is, the pure topology of the dependency tree, not considering the edge directions (see Fig. 1D), one can define an N × N binary adjacency matrix A, whose elements a i,j are set to one if vertices i and j are connected (in either direction), and to zero otherwise (see Fig. 1E). The Kolmogorov-Sinai entropy of the graph [40] is then defined as
$$h_{ks} = \log_2 \lambda, \quad (3)$$
where λ denotes A's largest positive eigenvalue (i.e., its spectral radius). This measure satisfies the three desiderata above: It is maximal for star graphs, it is negatively correlated with path lengths, and it is a good index of the graph's robustness [40]. Therefore, a good choice for the listener's cost function is C listener ∝ −h ks .
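As a concrete illustration, the following is a minimal Python sketch (assuming only numpy; the example tree is an arbitrary five-vertex illustration rather than a corpus sentence) that computes both cost measures for a dependency tree given as a list of directed (head, dependent) edges.

import numpy as np

def cost_measures(edges, n):
    # Out-degree entropy h_deg (Eq. 2): Shannon entropy of the out-degree distribution.
    out_degree = np.zeros(n, dtype=int)
    for head, _dep in edges:
        out_degree[head] += 1
    counts = np.bincount(out_degree, minlength=n)
    p = counts[counts > 0] / n
    h_deg = -np.sum(p * np.log2(p))

    # Kolmogorov-Sinai entropy h_ks (Eq. 3): log2 of the spectral radius
    # of the undirected adjacency matrix of the tree.
    A = np.zeros((n, n))
    for head, dep in edges:
        A[head, dep] = A[dep, head] = 1.0
    spectral_radius = np.max(np.linalg.eigvalsh(A))
    h_ks = np.log2(spectral_radius)
    return h_deg, h_ks

# Toy example: root 0 with children 1 and 2; vertex 2 has children 3 and 4.
edges = [(0, 1), (0, 2), (2, 3), (2, 4)]
print(cost_measures(edges, 5))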
Universality of efficiency distributions across languages.
Each yellow dot in Fig. 2 plots the mean normalized values of h deg and h ks for sentences in one of 124 languages, taken from the dependency-annotated corpora of the Universal Dependencies Project [38]. As a baseline, for each language, the purple dots plot the average values of the two measures for uniformly sampled random dependency trees matched in number of vertices to the actual sentences from the corpora. It is clear from the graph that the actual dependency structures found in languages are markedly different from those one would expect by chance. These differences are consistent across languages, reflecting a substantially increased value of H ks with respect to chance (paired t[123] = 31.2495, p < .0001) holding for every single language studied. This increase is paired with a slight decrease in the average values of H deg (paired t[123] = −3.8435, p = .0002), which holds for the majority (60%) of languages. Put simply, real dependency structures have a substantially lower average comprehension cost than random ones, which seems to be a universal property, most often paired with a slight decrease in production costs. Most strikingly, the individual dependency graphs (i.e., not just the averages) are sufficiently distinct from chance that they can be correctly classified as real or random syntactic structures with 79% accuracy by a simple logistic classifier using just their h ks and h deg values. This accuracy goes up to 85% when one considers only sentences with at least ten words (for which there is much more variability in the values of the cost measures). The increased efficiencies of the graph topologies suggest that, across languages, the dependency structures of sentences could be the result of the joint optimization of Eq. 1.
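A minimal sketch of this kind of classifier (assuming scikit-learn; the feature arrays below are random placeholders standing in for the per-sentence (h ks , h deg ) values of real and baseline trees):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder features: one row per sentence, columns are (h_ks, h_deg).
real_features = np.random.rand(1000, 2)    # stand-in for real dependency trees
random_features = np.random.rand(1000, 2)  # stand-in for uniformly sampled trees

X = np.vstack([real_features, random_features])
y = np.concatenate([np.ones(len(real_features)), np.zeros(len(random_features))])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)
classifier = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", classifier.score(X_test, y_test))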
Efficiency by optimization.
Using the two cost measures defined above, we can rewrite the minimization of Ω of Eq. 1 as the maximization of Λ,
$$\Lambda = \rho\, h_{ks} - (1 - \rho)\, h_{deg} + \varepsilon, \quad (4)$$
where ε denotes normal white noise that limits the possible optimization. The incomplete optimization induced by the noise term reflects that speakers do not have complete freedom over the choice of syntactic structures they use at a given point: On the one hand, some concepts will require very specific topologies for expressing them, with little choice for the speaker. On the other hand, the non-stationary nature of human discourse and dialogue entails that the syntactic structures of previously produced or encountered utterances constrain the choice of possible syntactic structures speakers will use at any given point [41,42]. Partial optimization is also necessary from a mathematical standpoint: Full, noiseless, optimization of Eq. 1 is known to result in scale-free graphs, with power-law tailed degree distributions [37]. However, individual dependency trees are not fully scale-free, rather showing degree distributions with stretched exponential tails [26].
I applied a genetic algorithm maximizing Λ starting from uniformly sampled random dependency trees (each matched in number of vertices to a sentence in a language). Fig. 3A shows that the values of both cost measures rapidly converge. The optimization consists of a small, but significant, reduction in the production cost (h deg ), coupled with a larger reduction in the comprehension cost (−h ks ). As was predicted, the joint distribution of the cost measures of the gradually optimized graphs rapidly converges on the distribution of the actual cost values for the individual sentences in each language, both distributions being practically indistinguishable at the point of convergence (see Fig. 3B).
The resulting -partially optimized- graphs have per-language mean costs (red dots in Fig. 2) that strongly overlap with those observed in the real sentences (yellow dots in Fig. 2). The algorithm just maximizes the communicative efficiency Λ, naturally resulting in distributions approaching those of actual languages. Note, however, that this optimization is extremely sensitive, with only a precise combination of the model's parameters (giving more importance to comprehension than to production costs) approximating real language distributions (see convergence analysis in the Materials and Methods section).
Efficiency by construction: Sublinear preferential attachment.
From the perspective of psycholinguistics, grammatical encoding for language production is known to be an incremental cascaded process [5]. Substantial experimental evidence [6] shows that the syntactic structure of an utterance is assembled piecewise, in what is referred to as the function assignment process. Rather than assembling the whole structure at once, words become gradually available as the output from the lexical selection process, and they are integrated into the syntactic graph as they arrive. Such a process is therefore a rich-get-richer process that one would expect to result in the higher than random values of h ks that were reported above. Consistently, the types of graphs that arise from the optimization of energy functions such as Ω can alternatively be understood as being efficient by construction, without any actual optimization intervening [37]. In particular, the scale-free networks resulting from optimizing Ω also arise as a result of a preferential attachment process [36]. In such a model, a graph is constructed incrementally, adding one vertex at a time, which attaches to one of the vertices already in the network with probability proportional to the number of other vertices that are already attached to it (i.e., proportional to the existing vertex's degree, p attach ∝ k). In the same way that a noise term serves to prevent full optimization of individual syntactic graphs, one can obtain a not fully optimal version of the preferential attachment graph by making the probability of attaching to an existing vertex proportional to a power α ≥ 0 of the vertex's degree (i.e., p attach ∝ k^α). When α < 1 this is referred to as a sublinear preferential attachment model [43], and its effects are similar to the partial optimization described earlier, resulting also in the stretched exponential tailed degree distributions that characterize individual dependency graphs.
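A minimal Python sketch of this growth rule (the supplement's Algorithm S2 gives the corresponding pseudocode; vertex labels are assigned in arrival order here, since only the resulting topology matters for the cost measures, and the α value in the example is arbitrary):

import random

def sublinear_pa_tree(n, alpha):
    # Grow a directed tree on vertices 0..n-1: each new vertex attaches to an
    # existing vertex i with probability proportional to k_i**alpha, where k_i
    # counts vertex i plus its current children.
    k = [1]          # attachment weights of the vertices already in the tree
    edges = []
    for new in range(1, n):
        weights = [ki ** alpha for ki in k]
        source = random.choices(range(new), weights=weights)[0]
        edges.append((source, new))
        k[source] += 1
        k.append(1)
    return edges

# Example: a 10-vertex tree with a mildly sublinear exponent.
print(sublinear_pa_tree(10, alpha=0.42))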
The blue dots in Fig. 2 plot the mean values of the cost measures for random dependency graphs generated according to a sublinear preferential attachment model matched to the sentences in each of the languages. As predicted, they model the distribution observed for the real sentences very well. This is remarkable, as the modelling relies on a single parameter α, which is not "fitted" but rather assigned a value according to principled considerations based on the number of vertices in the graphs and their maximum degrees [43]. Even more remarkably, the distribution of individual random graphs generated by sublinear preferential attachment is virtually identical to that exhibited by the dependency graphs of actual sentences: The estimated Kullback-Leibler divergence between both distributions is very close to the estimated divergence between identical distributions (see Fig. 3B). This is visible in the strikingly alike marginal and joint distributions of the simulated and real sentences (see Fig. 4). In contrast with the uniform random graphs, a logistic classifier is hardly able to tell the sublinear preferential attachment random graphs apart from the real dependency graphs (55% accuracy; 58% for sentences with ten or more elements). Furthermore, this model is extremely robust; almost all possible values of its single parameter lead to close approximations of the real distributions, as long as the parameter value indeed corresponds to a sublinear attachment (see stability analysis in the Materials and Methods section).
Discussion.
This study demonstrates how the topologies of individual human syntactic structures allow -across languages and modalities-more efficient communication than what one would expect by chance: Real dependency graphs have a substantially increased noise robustness (h ks ), and a slightly reduced heterogeneity (h deg ). In other words, real dependency graphs are more "starry" than chance would predict, and "starry" graphs are most efficient for communication [37]. The regularities previously found for global syntactic graphs [26,27,28,29] are therefore paralleled with similar regularities at the individual sentence level.
The new property can be accounted for by two contrasting models: One using explicit noisy optimization of the efficiency measures, and another in which the efficiencies are side effects of a known incremental mechanism for producing syntactic structures [5,6]. In terms of accounting for the corpus data, both models perform remarkably well [44]. However, one should choose the preferential attachment process over the optimization for several reasons: Statistically speaking, the optimization method requires very precisely adjusting two free parameters (the weighting factor and the noise intensity), whereas the preferential attachment model requires just a single parameter (the sublinear exponent), whose value is extremely robust. The second model is therefore more parsimonious than the first and, on equal best performance, it should be preferred [e.g., a Bayes' Factor would pick the second model over the first one by a wide margin; 45]. Furthermore, arguing that communicative efficiency is the result of evolutionary, historical, or developmental optimization processes comes with a burden of proof: It would require a detailed outline of how such processes take place, together with additional evidence for their presence (e.g., lower communicative efficiencies at earlier evolutionary, historical, or developmental stages). The description of such processes, as well as any evidence for their presence, is conspicuously absent from the literature. In our case, efficiency by construction assumes only the incrementality of grammatical encoding in language production [5], for which ample experimental evidence is available [6]. These results caution against blindly accepting a causal link between communicative efficiency and the presence of optimization processes, unless alternative explanations and additional evidence are considered. Claims for efficiency by optimization could be revisited to investigate whether they could also be accounted for with efficiency by construction.
From the above one should conclude that the increased efficiency of dependency graph topologies is -borrowing Gould and Lewontin's metaphor- a "spandrel" [46], which emerges from the way speakers build up their syntactic structures. Yet, even without optimization, it remains relevant that human syntactic structures exhibit above-chance communicative efficiency. One should expect human language as a whole [but not necessarily all of its aspects; 33, 34, 35] to be efficient -or rather not inefficient- for communication, lest the evolutionary process might have selected against it [e.g., 7,8,9]. Indeed, it was assuming communicative efficiency in the first place that led to finding the regularities reported here.
Linguists and psycholinguists consistently find that speakers prioritize minimizing their own costs over easing those of the listeners when building syntactic structures [47,48,49]. These findings contrast with our finding that the efficiency from the listener's perspective prevails, which is shared with other optimization-based studies [21,23]. Decoupling efficiency from optimization by considering efficiency by construction reconciles both findings: The mechanism by which the speaker constructs syntactic structures naturally results in structures that are efficient for the listener as well. Efficiency arises from the structures' growth patterns, as it does in many other structures in nature [50].
Materials and Methods
Corpora and preprocessing. I used the treebank corpora in the Universal Dependencies Project v2.11 (38,52). For each of the available languages, I concatenated all the treebanks listed for that language. As the cost measures do not have any variability for N < 4 (i.e., all trees with fewer than four vertices are simultaneously line and star graphs), I selected only dependency graphs with at least four vertices. In addition, to limit processing costs, I discarded any dependency trees with more than 50 vertices (this amounts to excluding fewer than 2% of the available sentences for any language). I deleted any punctuation vertices, and I considered only the basic vertices in the tree, skipping all "range" vertices (e.g., "3-4" in CONLL format). I deleted all relation labels, and the vertex labels were replaced with plain numbers. I ensured that the resulting dependency structures were actual trees, discarding any that were not. For each language for which there were at least 50 sentences left in the corpus after applying the filters above, I randomly sampled (without replacement) a maximum of 1,000 sentences, if so many were available, or took all available sentences otherwise. By this method, I obtained samples for 124 languages with at least 50 sentences. For each of the dependency graphs, I computed the cost measures h deg and h ks using Eq. 2 and Eq. 3, respectively. Details of the languages considered, their sample sizes and estimated measures are provided in Table S1 (53). Although sample sizes ranging from just 50 to 1,000 sentences might seem small, the by-language means of the key measures h ks and h deg are unbiased and converge rapidly according to the Central Limit Theorem, with convergence speed proportional to the square root of the sample size. With 50 values, all means were indeed stable (see standard errors in Table S1), enabling the consideration of a large sample of languages.
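A simplified Python sketch of this preprocessing (plain CoNLL-U parsing with no external dependencies; the file name in the usage comment is hypothetical, and the well-formedness check is only an approximation of the procedure described above):

def conllu_sentences(path):
    # Yield sentences as lists of (token_id, head_id, upos) triples.
    sent = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if not line:
                if sent:
                    yield sent
                    sent = []
                continue
            if line.startswith("#"):
                continue
            cols = line.split("\t")
            if len(cols) < 8 or "-" in cols[0] or "." in cols[0]:
                continue                       # skip range and empty-node lines
            if not cols[6].isdigit():
                continue                       # skip tokens without an integer head
            sent.append((int(cols[0]), int(cols[6]), cols[3]))
    if sent:
        yield sent

def filtered_trees(path, min_len=4, max_len=50):
    # Drop punctuation vertices; keep only well-formed trees with 4-50 vertices.
    for sent in conllu_sentences(path):
        kept = [(i, h) for i, h, upos in sent if upos != "PUNCT"]
        ids = {i for i, _ in kept}
        heads_ok = all(h == 0 or h in ids for _, h in kept)
        one_root = sum(1 for _, h in kept if h == 0) == 1
        if heads_ok and one_root and min_len <= len(kept) <= max_len:
            yield [(h, i) for i, h in kept if h != 0]    # (head, dependent) edges

# Usage example (hypothetical file name):
# for edges in filtered_trees("en_ewt-ud-train.conllu"):
#     ...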
Normalization of entropy measures. The values of both entropy measures h ks and h deg depend on the number of vertices (N) in a tree. To facilitate comparison of the entropies for sentences of different lengths, I used length-normalized versions of both entropies. Note that this normalization was used only for comparisons and plots; all computations were done on the untransformed measures in their natural scales. For a tree of N vertices (i.e., a sentence of N words) I normalized the measures to their relative values in the [0, 1] interval,
$$H_{ks} = \frac{h_{ks} - \min_N h_{ks}}{\max_N h_{ks} - \min_N h_{ks}}, \qquad H_{deg} = \frac{h_{deg} - \min_N h_{deg}}{\max_N h_{deg} - \min_N h_{deg}}. \quad (S1)$$
The extreme values for h ks are quite straightforward. On the one hand, h ks always takes its maximum for a star graph. The eigenvalues of its adjacency matrix A are the roots of its characteristic polynomial,
$$\det(A - \lambda I) = \begin{vmatrix} -\lambda & 1 & 1 & \cdots & 1 \\ 1 & -\lambda & 0 & \cdots & 0 \\ 1 & 0 & -\lambda & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 0 & 0 & \cdots & -\lambda \end{vmatrix} = 0.$$
This determinant can be decomposed recursively into minors, to find that -for any N- it has its maximum positive root at √(N − 1). Therefore:
$$\max_N h_{ks} = \tfrac{1}{2}\log_2(N - 1).$$
Similarly, h ks takes its minimum value for line graphs. A line graph of N vertices has a characteristic polynomial,
$$\det(A - \lambda I) = \begin{vmatrix} -\lambda & 1 & 0 & \cdots & 0 \\ 1 & -\lambda & 1 & \cdots & 0 \\ 0 & 1 & -\lambda & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & 1 \\ 0 & 0 & \cdots & 1 & -\lambda \end{vmatrix} = 0.$$
It is difficult to find a general closed form of this polynomial's roots for all values of N . However, they are easily computed for a specific N , so one just finds the largest positive eigenvalue λ and then applies Eq. 3. One can note, however, that the value of min N h ks converges on 1.0 from below for large N . With respect to h deg , its minimum value is taken for line graphs and star graphs, both of which have all vertices but one having the same degree, therefore,
$$\min_N h_{deg} = \log_2 N - \frac{N - 1}{N}\log_2(N - 1),$$
which converges on zero from above for large N . On the other hand, the value of max N h deg is difficult to compute in closed form, as it is related to the integer sum partitions of N − 1. It can be computed exactly using Algorithm S1. As this algorithm can be slow for large values of N , I precomputed the values of the four extrema for N up to 50 before running the remaining simulations. An interesting observation is that max N h deg appears to converge from below on exactly two bits. I have observed this empirically, but I have not been able to come up with a demonstration, therefore I leave this as a conjecture.
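Algorithm S1 is not reproduced here; as a rough stand-in, the following Python sketch takes the brute-force route, enumerating the integer partitions of N − 1 (padded with zeros to N out-degrees, all of which are realizable by some directed tree) and keeping the largest out-degree entropy, which also allows probing the two-bit conjecture numerically for small N.

from math import log2

def partitions(m, max_part=None):
    # Yield the integer partitions of m as non-increasing tuples of positive parts.
    if max_part is None:
        max_part = m
    if m == 0:
        yield ()
        return
    for first in range(min(m, max_part), 0, -1):
        for rest in partitions(m - first, first):
            yield (first,) + rest

def max_h_deg(n):
    # Largest out-degree entropy over all directed trees with n vertices:
    # every partition of n-1, padded with zeros, is a feasible out-degree multiset.
    best = 0.0
    for part in partitions(n - 1):
        degs = list(part) + [0] * (n - len(part))
        counts = {}
        for d in degs:
            counts[d] = counts.get(d, 0) + 1
        h = -sum((c / n) * log2(c / n) for c in counts.values())
        best = max(best, h)
    return best

for n in (5, 10, 15):
    print(n, round(max_h_deg(n), 4))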
Uniform sampling of directed trees. There are exactly N^(N−2) different trees that can be constructed with N labelled vertices (54). Each such tree can be uniquely identified by a Prüfer Sequence (55): a unique sequence of length N − 2 on the vertex labels 0 to N − 1. In turn, each labelled tree corresponds to N different rooted trees, each arising from choosing a different vertex as the tree's root. Each rooted tree corresponds to a single directed tree (i.e., choosing the root determines the directionality of all edges). We can therefore create what I call an extended Prüfer Sequence by adding a number identifying the root to the standard Prüfer Sequence. Uniform sampling of directed trees then becomes a straightforward multinomial sampling of Extended Prüfer Sequences of length N − 1. Using this method, I sampled a random tree with the number of vertices matched to the number of vertices of each dependency tree selected from the corpora. As above, the cost measures for the uniformly sampled graphs were computed using Eqs. 2 and 3.
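A Python sketch of one straightforward reading of this sampling scheme (decode a uniformly drawn Prüfer sequence into an undirected labelled tree, then orient all edges away from a uniformly drawn root):

import heapq
import random

def prufer_to_edges(seq, n):
    # Decode a Prüfer sequence of length n-2 into the n-1 undirected edges
    # of the corresponding labelled tree on vertices 0..n-1.
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in seq:
        leaf = heapq.heappop(leaves)
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

def sample_directed_tree(n):
    # Uniform Prüfer sequence plus a uniformly chosen root; orient edges away from the root.
    seq = [random.randrange(n) for _ in range(n - 2)]
    root = random.randrange(n)
    adjacency = {v: [] for v in range(n)}
    for a, b in prufer_to_edges(seq, n):
        adjacency[a].append(b)
        adjacency[b].append(a)
    directed, stack, seen = [], [root], {root}
    while stack:
        u = stack.pop()
        for w in adjacency[u]:
            if w not in seen:
                seen.add(w)
                directed.append((u, w))
                stack.append(w)
    return directed

print(sample_directed_tree(6))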
Optimization algorithm. For optimization of the random trees I used a mutation-only genetic algorithm. The Extended Prüfer Sequences described above constitute the genetic representation of the trees (similar to previous approaches; e.g., 56, 57; but see also 58).
Mutation happens by randomly changing a single element of the extended Prüfer Sequence, obtaining what I term an extended Prüfer Neighbor. In every generation of trees, all trees were randomly mutated, and selection would take place choosing between the original and the mutated tree according to their estimated fitness levels (i.e., the values of Λ): If the original tree had cost values h ks and h deg , and the mutated tree had values h′ ks and h′ deg , the mutated tree would be selected over the original if the discrete gradient (∆Λ) was greater than zero,
$$\Delta\Lambda = \Lambda' - \Lambda = \rho\,(h'_{ks} - h_{ks}) - (1 - \rho)\,(h'_{deg} - h_{deg}) + \varepsilon > 0,$$
and the original tree was kept otherwise. The random noise term ε ∼ N(0, σ) governs the stochastic part of the selection process. As discussed in the main text, it has an optimization-halting effect: When the discrete gradients become smaller than the average noise, the noise halts the optimization. The genetic algorithm above was applied for 400 epochs (i.e., generations) on a sample of 100 of the random trees generated as baselines for each language (or all trees for those languages for which fewer than 100 trees were available). The optimization weight was set to ρ = .9, and the noise standard deviation to σ = .075. These parameter values were chosen by examining the convergence patterns of the algorithm for different values. Fig. S1 plots the evolution of the mean values H deg and H ks epoch by epoch. Values of ρ prioritizing the minimization of the production cost (i.e., ρ < .5; paths in shades of blue in Fig. S1) are clearly unsuitable, as they end up minimizing just H deg , completely ignoring H ks , towards a local minimum representing line graphs, at position (0,0) in the graph. In turn, when the algorithm favours minimizing the comprehension cost or gives equal importance to both costs (i.e., ρ ≥ .5; paths in shades of yellow and red in Fig. S1), it has a tendency to converge on the true global minimum, star graphs, which are optimal both in terms of production and comprehension and correspond to position (0,1) in the graph. The noise term σ limits the possible optimization (i.e., not all ideas can be expressed by a star graph), stopping the process somewhere along the paths plotted in the figure, and hovering around that point from then on. Higher values of σ entail an earlier stopping point. Note that only values of ρ closely around ρ = .9 result in optimization paths going through the target value (denoted by the star in the plot). Within this path, we find that values of σ between .07 and .1 result in the optimization halting roughly around the values found in real languages (see the green path in the graph, corresponding to the parameter values we actually employed). Importantly, although optimizing only the efficiency from the comprehender's perspective (i.e., ρ = 1), ignoring the cost for the speaker, is also bound to converge on the same maximum (star graphs; 37), the optimization path does not go through the target. In other words, although most importance should be given to the comprehension cost, obtaining distributions similar to those of actual language still requires simultaneously minimizing the production costs.
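A schematic Python sketch of a single mutation-selection step (the fitness function, which must decode an extended Prüfer sequence and return the pair (h ks , h deg ), is deliberately left as a parameter; the default parameter values follow those quoted above):

import random

def mutate(ext_prufer, n):
    # Extended Prüfer neighbour: change one random element of the sequence
    # (the last element encodes the root, the rest the undirected topology).
    neighbour = list(ext_prufer)
    neighbour[random.randrange(len(neighbour))] = random.randrange(n)
    return neighbour

def selection_step(ext_prufer, n, fitness, rho=0.9, sigma=0.075):
    # Keep the mutated tree iff the noisy discrete gradient of Lambda (Eq. 4)
    # is positive; otherwise keep the original tree.
    mutant = mutate(ext_prufer, n)
    h_ks0, h_deg0 = fitness(ext_prufer, n)
    h_ks1, h_deg1 = fitness(mutant, n)
    delta = (rho * (h_ks1 - h_ks0)
             - (1.0 - rho) * (h_deg1 - h_deg0)
             + random.gauss(0.0, sigma))
    return mutant if delta > 0 else ext_prufer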
Estimation of Kullback-Leibler Divergences
In order to assess the performance of the optimization algorithms, I consider the distributions of h ks and h deg generated for each of the languages, separately for each of the four conditions (real trees, random trees, optimized trees, preferential attachment trees). For each of the last three conditions, the degree of convergence to the distribution of the real trees is measured by the Kullback-Leibler Divergence (KLD; 59), between the real distribution (p real ) and each of the conditions (p X ):
$$D(p_{real}\,\|\,p_X) = \int_0^{\infty}\!\!\int_0^{\infty} p_{real}(h_{ks}, h_{deg}) \log \frac{p_{real}(h_{ks}, h_{deg})}{p_X(h_{ks}, h_{deg})}\, dh_{ks}\, dh_{deg}.$$
Rather than using numerical approximations (e.g., through nearest neighbors) for estimating KLD, I used a parametric approximation, approximating each distribution by a bivariate Gaussian distribution. For this, I estimated the vector means in each condition for each language, µ language (the per-language means of h ks and h deg ), and the corresponding 2 × 2 covariance matrices Σ language , under each condition. The KLD between two bivariate Gaussians (P, Q) with vector means µ, m and covariance matrices Σ, S, respectively, is calculated using the closed-form expression
$$D(P\,\|\,Q) = \frac{1}{2}\left[\,\mathrm{tr}(S^{-1}\Sigma) - 2 + (m - \mu)^{T} S^{-1} (m - \mu) + \log \frac{\det(S)}{\det(\Sigma)}\,\right].$$
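A numpy sketch of this closed-form expression for the bivariate case (hence the dimension constant 2; the example parameters are arbitrary):

import numpy as np

def gaussian_kld(mu, sigma, m, s):
    # KL divergence D(P || Q) between bivariate Gaussians P = N(mu, sigma)
    # and Q = N(m, s), using the closed-form expression above.
    s_inv = np.linalg.inv(s)
    diff = m - mu
    return 0.5 * (np.trace(s_inv @ sigma) - 2.0
                  + diff @ s_inv @ diff
                  + np.log(np.linalg.det(s) / np.linalg.det(sigma)))

# Toy example with arbitrary means and covariance matrices.
mu = np.array([0.45, 0.78]); sigma = np.array([[0.01, 0.0], [0.0, 0.01]])
m = np.array([0.40, 0.80]);  s = np.array([[0.02, 0.0], [0.0, 0.02]])
print(gaussian_kld(mu, sigma, m, s))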
Even for truly identical distributions, the KLD estimates depend on the sample sizes used for their estimation, and on how well the distributions are modelled by a bivariate Gaussian. In order to provide a 'zero' baseline against which to compare the KLD estimates, I used a bootstrap method (60): For each sample of real graphs I computed its KLD to an equally-sized resampling with replacement from the original sample. Relatedly, whenever comparing distributions with different sample sizes, I downsampled the larger sample to the size of the smaller one prior to computing the KLDs. For instance, see the difference in the values of the red and grey lines between Fig. 3 and Fig. S3. These lines represent the same KLDs in both plots. However, Fig. 3 compares with the optimization results (for which there were fewer values). This required downsampling of these distributions in Fig. 3, but not in Fig. S3.
Sublinear preferential attachment tree sampling. In graphs exhibiting sublinear preferential attachment, the highest degree among their vertices (k max ) can be asymptotically approximated (43) as a function of their number of vertices N and their sublinear exponent 0 < α < 1,
$$k_{max} \sim (\log N)^{\frac{1}{1-\alpha}}. \quad (S2)$$
One can derive an estimator for α from this approximation:
$$\hat{\alpha} = 1 - \frac{\log \log N}{\log k_{max}}, \quad \text{for } N > 1. \quad (S3)$$
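A small Python sketch of this estimator and its per-language averaging (the (N, k max ) pairs in the example are placeholders):

from math import log

def alpha_hat(n_vertices, k_max):
    # Estimate the sublinear exponent from a tree's size and its maximum degree
    # (Eq. S3); requires n_vertices > e and k_max > 1 so both logs are positive.
    return 1.0 - log(log(n_vertices)) / log(k_max)

def language_alpha(trees):
    # Mean estimated exponent over a language's trees, each given as a
    # (number_of_vertices, maximum_degree) pair.
    estimates = [alpha_hat(n, k) for n, k in trees]
    return sum(estimates) / len(estimates)

# Placeholder data: (N, k_max) pairs for a handful of hypothetical sentences.
print(language_alpha([(12, 4), (25, 6), (8, 3)]))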
I computed the estimator in Eq. S3 for each of the dependency trees sampled from the treebanks. For each language, I estimated its α value as the mean of the per-sentence estimates, α language. Finally, for each dependency tree in each language I sampled a random tree by preferential attachment with α set to the value of α language corresponding to that language. The nonlinear preferential attachment trees were sampled using Algorithm S2. Eq. S2 is valid only when a tree is actually the result of a sublinear preferential attachment process. However, the estimator in Eq. S3 can be computed for any tree, whether or not it actually results from sublinear preferential attachment. The estimator can produce apparently correct values of α < 1 even for trees that are plainly random, but such estimates would be meaningless (i.e., estimating that α < 1 does not constitute a valid statistical test for sublinear preferential attachment). However, it is easy to distinguish estimates arising from sublinear preferential attachment from spurious ones. If a set of trees does indeed result from sublinear preferential attachment, generating a new set of trees using the estimated α value will result in a distribution of trees very similar to the original one. However, generating new trees using an α value estimated from trees not originating in sublinear preferential attachment will result in a substantially altered distribution of trees.
In the current case, the real language graphs exhibited significantly larger estimated exponent values (mean α = .4197 ± .0045) than did the uniform random graphs (mean α = .3041 ± .0067). The random graphs generated using the α estimates from the real graphs resulted in almost identical distributions (see Fig. 4). In contrast, generating sublinear preferential attachment graphs using the α estimates from uniform random graphs resulted in markedly different distributions (see Fig. S2). Incidentally, notice that the distribution of sublinear attachment graphs regenerated using the α values from the uniform random graphs is in fact extremely similar to the actual distribution of real graphs in Fig. 4. This is not accidental. The sublinear preferential attachment process is in fact extremely robust in terms of estimates. If, instead of estimating the value of α for each language, I had just picked a fixed value for all languages, with the only constraint that it be sublinear (i.e., α < 1; left of the dashed line in Fig. S3), the resulting distributions would still mimic those found in actual dependency graphs. As is shown in Fig. S3, any value of α between .25 and .75 provides essentially an equally good model of the real-language dependency graph distribution, and even the values in the intervals [0, .25) and (.75, 1) remain extremely good approximations. Only when one moves into superlinear preferential attachment (43) territory (i.e., α > 1; right of the dashed line in Fig. S3) do the distributions truly diverge. Taken together, these findings reinforce the claim that the dependency graphs found in human language do indeed correspond to the result of a sublinear preferential attachment process.
Logistic classifiers
In order to assess the degree to which it is possible to distinguish between individual real and random graphs, I trained four binary logistic classifiers (two distinguishing between real and uniform graphs, and two distinguishing between real and sublinear preferential attachment graphs; in each case, one model used all available graphs and the other only graphs with at least ten vertices) on a randomly chosen 90% of the data, and tested them on predicting the remaining 10%. The features used for prediction were the normalized entropy measures H ks and H deg .

Fig. S1: Sensitivity of the optimization algorithm to variation in the parameters. The paths plot how the mean normalized values of both entropy measures change along the optimization process. All paths start at the average values for the uniform random sampled graphs (matched to the English corpus). The paths ranging in color from blue to red plot different values of ρ, with σ fixed at zero. The green path plots the actual optimization for English in this study, with parameters ρ = .9 and σ = .075. The star denotes the average values for the real English dependency graphs.

Fig. S2: Normalized entropy values for a distribution (orange) generated using the α estimates obtained from uniformly sampled graphs (purple). The contours plot kernel density estimates in each condition, and the distributions on the margin are kernel density estimates for the marginal distributions. Note the strong similarity between the regenerated distribution and the distribution of sentences in real language depicted in Fig. 4.

Fig. S3: Estimated Kullback-Leibler divergences between the distribution of the measures for the real graphs in a language, and the graphs generated by sublinear preferential attachment with an arbitrary fixed value of α (green), and with the optimal value of α for each language (blue). The grey line denotes a 'zero' baseline. Shaded areas denote 95% C.I. of the mean. The vertical dashed line separates the sublinear zone (left) from the superlinear zone (right).
Figure 1: Schema of the computation of the cost measures for each dependency graph. (A) Dependency graph for an English sentence. (B) Digraph skeleton of the dependency graph. (C) Histogram of the out-degrees of the digraph's nodes, from which the out-degree entropy (h deg ) is computed as the Shannon entropy of the relative histogram. (D) Undirected skeleton of the digraph. (E) Transition matrix (A) of the undirected skeleton. The largest positive eigenvalue of A -its spectral radius (λ)- is used for computing the Kolmogorov-Sinai entropy (h ks ) of the graph.
Figure 2: By-language mean values of the normalized (see Materials and Methods) cost measures. Each dot plots a language's mean values, and its color denotes the type of graphs considered: yellow, mean values for real dependency graphs in the corpus; purple, mean values for the uniformly sampled (matched in sentence length to the real sentences) baseline; red, mean values for graphs resulting from applying the optimization algorithm to the baseline graphs; and blue, mean values for graphs randomly sampled using a sublinear preferential attachment process (matched in sentence length to the real sentences).
Figure 3: Convergence of algorithms. (A) Convergence of the mean values of h deg (light blue) and h ks (pink) through the generations (epochs) of the genetic optimization algorithm. Shaded areas denote 95% C.I. of the mean. (B) Estimated Kullback-Leibler divergences (KLDs) between the distribution of the measures for the real graphs in a language, and the graphs generated by optimization (red) and by sublinear preferential attachment (blue). The grey line denotes a 'zero' baseline, the KLDs between two subsamples of the real distribution. Shaded areas denote 95% C.I. of the mean.
Figure 4: Values of the normalized (see Materials and Methods) cost measures for the individual graphs. Each dot plots one graph, and its color denotes the type of graph considered: yellow, real dependency graphs in the corpus; purple, uniformly sampled (matched for sentence length to the real sentences) baseline; and blue, graphs randomly sampled using a sublinear preferential attachment process (matched in sentence length to the real sentences). The contours plot kernel density estimates in each condition, and the distributions on the margin are kernel density estimates for the marginal distributions.
Table S1: Languages used in the study. Extinct languages, in the sense of not having any remaining native speakers, are marked by †. The mean normalized entropy values H ks and H deg are followed by their standard errors.

Language | Family | Group | N. Sents | Mean Sent. Length | H ks ± SE | H deg ± SE
Abaza | N.W. Caucasian | Abazgi | 86 | 7.19 | .389 ± .033 | .796 ± .031
Afrikaans | Indo-European | Germanic | 1,000 | 21.64 | .413 ± .003 | .813 ± .004
Akkadian † | Afro-Asiatic | Semitic | 1,000 | 12.24 | .305 ± .006 | .870 ± .006
Akuntsu | Tupian | Tupari | 93 | 5.38 | .401 ± .040 | .666 ± .042
Albanian | Indo-European | Albanian | 60 | 13.95 | .358 ± .013 | .894 ± .014
Amharic | Afro-Asiatic | Semitic | 1,000 | 8.66 | .541 ± .007 | .631 ± .008
Ancient Greek † | Indo-European | Greek | 1,000 | 12.87 | .460 ± .006 | .776 ± .007
Ancient Hebrew † | Afro-Asiatic | Semitic | 1,000 | 21.38 | .387 ± .002 | .888 ± .003
Apurinã | Arawakan | Southern | 98 | 5.90 | .378 ± .036 | .751 ± .037
Arabic | Afro-Asiatic | Semitic | 1,000 | 26.01 | .327 ± .004 | .905 ± .005
Armenian | Indo-European | Armenian | 1,000 | 15.35 | .397 ± .006 | .839 ± .006
Bambara | Mande | Western | 919 | 11.85 | .441 ± .006 | .777 ± .007
Basque | isolate | - | 1,000 | 11.69 | .438 ± .006 | .791 ± .007
Beja | Afro-Asiatic | Cushitic | 52 | 13.06 | .439 ± .027 | .791 ± .034
Belarusian | Indo-European | Slavic | 1,000 | 10.81 | .378 ± .007 | .826 ± .007
Bhojpuri | Indo-European | Indic | 343 | 16.31 | .446 ± .009 | .794 ± .010
Breton | Indo-European | Celtic | 807 | 10.77 | .465 ± .007 | .753 ± .009
Bulgarian | Indo-European | Slavic | 1,000 | 12.45 | .413 ± .006 | .817 ± .007
Buryat | Mongolic | Central | 844 | 9.60 | .353 ± .008 | .803 ± .009
Cantonese | Sino-Tibetan | Sinitic | 835 | 12.76 | .554 ± .008 | .648 ± .011
Catalan | Indo-European | Romance | 1,000 | 25.99 | .382 ± .003 | .856 ± .004
Cebuano | Austronesian | Central Philippine | 150 | 6.58 | .352 ± .023 | .809 ± .024
Chinese | Sino-Tibetan | Sinitic | 1,000 | 19.83 | .393 ± .004 | .873 ± .004
Chukchi | Chukotko-Kamchatkan | Chukotkan | 653 | 6.62 | .510 ± .013 | .648 ± .015
Classical Chinese † | Sino-Tibetan | Sinitic | 1,000 | 5.86 | .346 ± .010 | .782 ± .010
Coptic † | Afro-Asiatic | Egyptian | 1,000 | 21.90 | .451 ± .003 | .810 ± .004
Croatian | Indo-European | Slavic | 1,000 | 19.06 | .378 ± .003 | .873 ± .004
Czech | Indo-European | Slavic | 1,000 | 15.52 | .379 ± .005 | .844 ± .006
Danish | Indo-European | Germanic | 1,000 | 16.64 | .442 ± .005 | .795 ± .007
Dutch | Indo-European | Germanic | 1,000 | 14.51 | .436 ± .005 | .794 ± .006
Emerillon | Tupian | Maweti-Guarani | 132 | 4.64 | .428 ± .036 | .696 ± .036
English | Indo-European | Germanic | 1,000 | 16.11 | .430 ± .005 | .804 ± .006
Erzya | Uralic | Mordvin | 1,000 | 8.66 | .416 ± .008 | .768 ± .009
Table S1: (continued)

Language | Family | Group | N. Sents | Mean Sent. Length | H ks ± SE | H deg ± SE
Marathi | Indo-European | Indic | 393 | 7.38 | .412 ± .015 | .728 ± .017
Mbyá Guaraní | Tupian | Maweti-Guarani | 1,000 | 9.95 | .447 ± .006 | .778 ± .007
Moksha | Uralic | Mordvin | 350 | 7.77 | .390 ± .012 | .787 ± .014
Mundurukú | Tupian | Mundurukú | 113 | 6.81 | .401 ± .031 | .735 ± .032
Naija | creole | - | 1,000 | 15.55 | .601 ± .006 | .584 ± .008
Nheengatu | Tupian | Maweti-Guarani | 178 | 9.62 | .422 ± .016 | .771 ± .021
North Sami | Uralic | Sami | 1,000 | 7.96 | .533 ± .010 | .628 ± .011
Norwegian | Indo-European | Germanic | 1,000 | 15.06 | .451 ± .006 | .774 ± .008
Old Church Slavonic † | Indo-European | Slavic | 1,000 | 10.14 | .493 ± .008 | .719 ± .009
Old E. Slavic † | Indo-European | Slavic | 1,000 | 10.02 | .453 ± .008 | .751 ± .009
Old French † | Indo-European | Romance | 1,000 | 10.39 | .473 ± .007 | .718 ± .009
Persian | Indo-European | Iranian | 1,000 | 16.61 | .386 ± .004 | .867 ± .004
Polish | Indo-European | Slavic | 1,000 | 10.80 | .391 ± .007 | .819 ± .008
Pomak | Indo-European | Slavic | 1,000 | 12.17 | .499 ± .007 | .704 ± .009
Portuguese | Indo-European | Romance | 1,000 | 15.61 | .405 ± .005 | .802 ± .006
Romanian | Indo-European | Romance | 1,000 | 19.14 | .409 ± .004 | .843 ± .005
Russian | Indo-European | Slavic | 1,000 | 14.45 | .360 ± .005 | .861 ± .006
Sanskrit † | Indo-European | Indic | 1,000 | 7.75 | .447 ± .009 | .723 ± .011
Scottish Gaelic | Indo-European | Celtic | 1,000 | 17.28 | .400 ± .005 | .854 ± .006
Serbian | Indo-European | Slavic | 1,000 | 19.15 | .377 ± .004 | .872 ± .004
Sinhala | Indo-European | Indic | 100 | 7.80 | .433 ± .017 | .768 ± .022
Skolt Sami | Uralic | Sami | 216 | 9.83 | .490 ± .015 | .705 ± .018
Slovak | Indo-European | Slavic | 1,000 | 9.34 | .420 ± .007 | .778 ± .008
Slovenian | Indo-European | Slavic | 1,000 | 16.91 | .435 ± .005 | .805 ± .006
S. Levantine Arabic | Afro-Asiatic | Semitic | 84 | 7.19 | .345 ± .026 | .788 ± .034
Spanish | Indo-European | Romance | 1,000 | 24.28 | .369 ± .003 | .872 ± .004
Swedish | Indo-European | Germanic | 1,000 | 15.59 | .430 ± .005 | .808 ± .006
Swedish Sign Language | sign language | - | 169 | 9.17 | .483 ± .019 | .734 ± .020
Swiss German | Indo-European | Germanic | 100 | 12.66 | .530 ± .016 | .696 ± .023
Tagalog | Austronesian | Central Philippine | 178 | 8.12 | .337 ± .018 | .869 ± .016
Tamil | Dravidian | Southern | 877 | 11.24 | .410 ± .009 | .782 ± .010
Tatar | Turkic | N.W. | 145 | 12.74 | .336 ± .012 | .881 ± .012
Telugu | Dravidian | S.-Central | 665 | 4.87 | .398 ± .015 | .689 ± .016
Thai | Tai-Kadai | Tai | 995 | 21.94 | .380 ± .003 | .888 ± .004
Algorithm S2: Construct a random directed tree with N vertices sampled by non-linear preferential attachment with exponent α. Returns the set of directed edges in the tree.
Require: N > 0, α ≥ 0
  Edges ⇐ ∅
  Nodes ⇐ Set with elements {0, 1, . . . , N − 1}
  K ⇐ Array of N integers, with indices starting at 0
  root ⇐ random sample uniformly an element from Nodes
  Linked ⇐ {root}
  Nodes ⇐ Nodes − {root}
  K[root] ⇐ 1
  while Nodes ≠ ∅ do
    newnode ⇐ random sample uniformly an element from Nodes
    source ⇐ random sample an element i from Linked with probability proportional to K[i]^α
    Nodes ⇐ Nodes − {newnode}
    Linked ⇐ Linked ∪ {newnode}
    K[newnode] ⇐ 1
    K[source] ⇐ K[source] + 1
    Edges ⇐ Edges ∪ {(source, newnode)}
  end while
  return Edges

Acknowledgments
Code and data are available online [51]. The author is indebted to Prof. Marco Baroni for helpful suggestions on this manuscript. The author declares no competing interests.
References

[1] E.V.N. Namboodiri. Panini's conception of "Syntactic Structures". Interdisc. J. Linguist., 10:1-16, 2017.
[2] Noam Chomsky. Syntactic Structures. Mouton, The Hague/Paris, 1957.
[3] Noam Chomsky. The Minimalist Program. MIT Press, Cambridge, MA, 1995.
[4] Lucien Tesnière. Éléments de Syntaxe Structurale. Klincksieck, Paris, 1959.
[5] Willem J. M. Levelt. Speaking: From Intention to Articulation. MIT Press, Cambridge, MA, 1989.
[6] J. Kathryn Bock and Willem J. M. Levelt. Language production: Grammatical encoding. In M. A. Gernsbacher, editor, Handbook of Psycholinguistics, pages 945-984. Academic Press, San Diego, 1994.
[7] Talmy Givón. Markedness in grammar: Distributional, communicative and cognitive correlates of syntactic structure. Stud. Lang., 15:335-370, 1991.
[8] William Croft and D. Alan Cruse. Cognitive Linguistics. Cambridge University Press, Cambridge, UK, 2004.
[9] John A. Hawkins. Efficiency and Complexity in Grammars. Oxford University Press, Oxford, UK, 2004.
[10] Ramón Ferrer i Cancho and Ricard V. Solé. Least effort and the origins of scaling in human language. P. Natl. Acad. Sci. USA, 100:788-791, 2003. doi:10.1073/pnas.0335980100.
[11] Ramón Ferrer i Cancho. Why do syntactic links not cross? Europhys. Lett., 76:1228-1235, 2006. doi:10.1209/epl/i2006-10406-0.
[12] Ramón Ferrer i Cancho. Some word order biases from limited brain resources. A mathematical approach. Adv. Complex Syst., 11(3):394-414, 2008. doi:10.1142/S0219525908001702.
[13] T. Florian Jaeger and Hal J. Tily. On language 'utility': Processing complexity and communicative efficiency. Wiley Interdiscip. Rev. Cogn. Sci., 2:323-335, 2011.
[14] R. Levy and T. Florian Jaeger. Speakers optimize information density through syntactic reduction. In Bernhard Schölkopf, John C. Platt, and Thomas Hoffman, editors, Advances in Neural Information Processing Systems (NIPS), volume 19, pages 849-856. MIT Press, Cambridge, MA, 2011.
[15] Michael C. Frank and Noah D. Goodman. Predicting pragmatic reasoning in language games. Science, 336:998, 2012. doi:10.1126/science.1218633.
[16] Charles Kemp and Terry Regier. Kinship categories across languages reflect general communicative principles. Science, 336:1049-1054, 2012. doi:10.1126/science.121881.
[17] Richard Futrell, Kyle Mahowald, and Edward Gibson. Large-scale evidence of dependency length minimization in 37 languages. P. Natl. Acad. Sci. USA, 112(33):10336-10341, 2015. doi:10.1073/pnas.1502134112.
[18] Noga Zaslavsky, Charles Kemp, Terry Regier, and Naftali Tishby. Efficient compression in color naming and its evolution. P. Natl. Acad. Sci. USA, 115:7937-7942, 2018. doi:10.1073/pnas.1800521115.
[19] Christophe Coupé, Yoon Mi Oh, Dan Dediu, and François Pellegrino. Different languages, similar encoding efficiency: Comparable information rates across the human communicative niche. Sci. Adv., 5(9):eaaw2594, 2019. doi:10.1126/sciadv.aaw2594.
[20] Iván G. Torre, Bartolo Luque, Lucas Lacasa, Christopher T. Kello, and Antoni Hernández-Fernández. On the physical origin of linguistic laws and lognormality in speech. Royal Soc. Open Sci., 6:191023, 2019. doi:10.1098/rsos.191023.
[21] Michael Hahn, Dan Jurafsky, and Richard Futrell. Universals of word order reflect optimization of grammars for efficient communication. P. Natl. Acad. Sci. USA, 117(5):2347-2353, 2020. doi:10.1073/pnas.1910923117.
[22] Michael Hahn, Judith Degen, and Richard Futrell. Modeling word and morpheme order in natural language as an efficient trade-off of memory and surprisal. Psychol. Rev., 128(4):726-756, 2021. doi:10.1037/rev0000269.
[23] Sean Trott and Benjamin Bergen. Languages are efficient, but for whom? Cognition, 225:105094, 2022.
[24] Ramón Ferrer i Cancho. Euclidean distance between syntactically linked words. Phys. Rev. E, 70:056135, 2004. doi:10.1103/PhysRevE.70.056135.
[25] Ramón Ferrer i Cancho. Hubiness, length and crossings and their relationships in dependency trees. Glottometrics, 25:1-21, 2013.
[26] Ramón Ferrer i Cancho, Ricard V. Solé, and Reinhard Köhler. Patterns in syntactic dependency networks. Phys. Rev. E, 69:051915, 2004. doi:10.1103/PhysRevE.69.051915.
[27] Haitao Liu. The complexity of Chinese syntactic dependency networks. Physica A, 387:3048-3058, 2008. doi:10.1016/j.physa.2008.01.069.
[28] Bernat Corominas-Murtra, Sergi Valverde, and Ricard V. Solé. The ontogeny of scale-free syntax networks: Phase transitions in early language acquisition. Adv. Complex Syst., 12(3):371-392, 2009. doi:10.1142/S0219525909002192.
[29] Bernat Corominas-Murtra, Martí Sànchez Fibla, Sergi Valverde, and Ricard V. Solé. Chromatic transitions in the emergence of syntax networks. Royal Soc. Open Sci., 5(12):181286, 2018. doi:10.1098/rsos.181286.
[30] Benoît B. Mandelbrot. An information theory of the statistical structure of language. In W. Jackson, editor, Communication Theory, pages 503-512. Academic Press, New York, NY, 1953.
Some effects of intermittent silence. George A Miller, Am. J. Psychol. 702George A. Miller. Some effects of intermittent silence. Am. J. Psychol., 70(2):311-314, 1957.
Information content versus word length in random typing. Ramón Ferrer I Cancho, Fermín Moscoso Del Prado, J. Stat. Mech. 12002Ramón Ferrer i Cancho and Fermín Moscoso del Prado. Information content versus word length in random typing. J. Stat. Mech., page L12002, 2011.
The missing baselines in arguments for the optimal efficiency of languages. Fermín Moscoso Del Prado, Proceedings of the 35th Annual Conference of the Cognitive Science Society. M. Knauff, M. Pauen, N. Sebanz, and I. Wachsmuththe 35th Annual Conference of the Cognitive Science SocietyAustin, TXFermín Moscoso del Prado. The missing baselines in arguments for the optimal efficiency of languages. In M. Knauff, M. Pauen, N. Sebanz, and I. Wachsmuth, editors, Proceedings of the 35th Annual Conference of the Cognitive Science Society, pages 1032-1037. Cognitive Science Society, Austin, TX, 2013.
How (non-)optimal is the lexicon?. Tiago Pimentel, Irene Nikkarinen, Kyle Mahowald, Ryan Cotterell, Damián Blasi, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational Linguistics1Tiago Pimentel, Irene Nikkarinen, Kyle Mahowald, Ryan Cotterell, and Damián Blasi. How (non-)optimal is the lexicon? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Virtual, June 2021. Association for Computational Linguistics.
Miller's monkey updated: Communicative efficiency and the statistics of words in natural language. Spencer Caplan, Jordan Kodner, Charles Yang, Cognition. Spencer Caplan, Jordan Kodner, and Charles Yang. Miller's monkey updated: Communicative efficiency and the statistics of words in natural language. Cognition, 205:104466, 2020.
Emergence of scaling in random networks. Albert-László Barabási, Réka Albert, 10.1126/science.286.5439.509Science. 286Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. Science, 286:509-512, 1999. doi:10.1126/science.286.5439.509.
Optimization in complex networks. Ramón Ferrer I Cancho, Ricard V Solé, 10.1007/978-3-540-44943-0_7In Lecture Notes in Physics. 625SpringerRamón Ferrer i Cancho and Ricard V. Solé. Optimization in complex networks. In Lecture Notes in Physics, volume 625, pages 114-126. Springer, Berlin, 2003. doi:10.1007/978-3-540-44943-0_7.
. Marie-Catherine De Marneffe, Christopher D Manning, Joakim Nivre, Daniel Zeman, 10.1162/coli_a_00402Universal Dependencies. Comput. Linguist. 472Marie-Catherine de Marneffe, Christopher D. Manning, Joakim Nivre, and Daniel Zeman. Universal Dependencies. Comput. Linguist., 47(2):255-308, 2021. doi:10.1162/coli_a_00402.
A mathematical theory of communication. Claude E Shannon, Bell Systems Tech. J. 27Claude E. Shannon. A mathematical theory of communication. Bell Systems Tech. J., 27:623-656, 1948.
Robustness and network evolution -an entropic principle. Lloyd Demetrius, Thomas Manke, 10.1016/j.physa.2004.07.011Physica A. 346Lloyd Demetrius and Thomas Manke. Robustness and network evolution -an entropic principle. Physica A, 346:682-696, 2005. doi:10.1016/j.physa.2004.07.011.
Syntactic persistence in language production. J , Kathryn Bock, 10.1016/0010-0285Cogn. Psychol. 1886J. Kathryn Bock. Syntactic persistence in language production. Cogn. Psychol, 18:355-387, 1986. doi:10.1016/0010-0285(86)90004-6.
Structural priming and the representation of language. Holly Branigan, Martin Pickering, 10.1017/S0140525X17001212Behav. Brain Sci. 40313Holly Branigan and Martin Pickering. Structural priming and the representation of language. Behav. Brain Sci., 40:E313, 2017. doi:10.1017/S0140525X17001212.
Connectivity of growing random networks. Pavel L Krapivsky, Sidney Redner, François Leyvraz, doi.org/10.1103/PhysRevLett.85.4629Phys. Rev. Lett. 8521Pavel L. Krapivsky, Sidney Redner, and François Leyvraz. Connectivity of growing random networks. Phys. Rev. Lett., 85(21):4629-4632, 2000. doi:doi.org/10.1103/PhysRevLett.85.4629.
Although the sublinear preferential attachment method slightly outperforms the optimization algorithm (see Fig. 3B), it is clear that a more exhaustive fine-tuning of the optimization parameters would lead to equally good 'fitting' of the distribution by both methods. Note1, Note, however, that the sensitivity of the optimization algorithm to its parameter values contrasts with the robustness of the single parameter of the sublinear preferential attachment modelNote1. Although the sublinear preferential attachment method slightly outperforms the optimization algorithm (see Fig. 3B), it is clear that a more exhaustive fine-tuning of the optimization parameters would lead to equally good 'fitting' of the distribution by both methods. Note, however, that the sensitivity of the optimization algorithm to its parameter values contrasts with the robustness of the single parameter of the sublinear preferential attachment model.
Bayes factors. Robert E Kass, Adrian E Raftery, 10.1080/01621459.1995.10476572J. Am. Stat. Assoc. 90430Robert E. Kass and Adrian E. Raftery. Bayes factors. J. Am. Stat. Assoc., 90(430):773-795, 1995. doi:10.1080/01621459.1995.10476572.
The spandrels of San Marco and the panglossian paradigm: A critique of the adaptationist programme. J Stephen, Richard C Gould, Lewontin, 10.1098/rspb.1979.0086Proc. R. Soc. B. 205Stephen J. Gould and Richard C. Lewontin. The spandrels of San Marco and the panglossian paradigm: A critique of the adaptationist programme. Proc. R. Soc. B, 205(1161):581-598, 1979. doi:10.1098/rspb.1979.0086.
Presumptive Meanings: The Theory of Generalized Conversational Implicature. C Steven, Levinson, MIT PressCambridge,MASteven C. Levinson. Presumptive Meanings: The Theory of Generalized Conversational Implicature. MIT Press, Cambridge,MA, 2000.
Ambiguity, accessibility, and a division of labor for communicative success. S Victor, Ferreira, 10.1016/S0079-7421(08Psychol. Learn. Motiv. 49Victor S. Ferreira. Ambiguity, accessibility, and a division of labor for communicative success. Psychol. Learn. Motiv., 49:209-246, 2008. doi:10.1016/S0079-7421(08)00006-6.
How language production shapes language form and comprehension. Maryellen C Macdonald, 10.3389/fpsyg.2013.00226Frontiers Psychol. 4Maryellen C. MacDonald. How language production shapes language form and comprehension. Frontiers Psychol., 4, 2013. ISSN 1664-1078. doi:10.3389/fpsyg.2013.00226. URL https://www.frontiersin. org/articles/10.3389/fpsyg.2013.00226.
On Growth and Form. D'arcy Wenworth Thompson, Cambridge University PressCambridge, UKD'Arcy Wenworth Thompson. On Growth and Form. Cambridge University Press, Cambridge, UK, 1917.
The mean sentence lengths in each language should not be interpreted as typologically meaningful (e.g., as in highly inflected languages resulting in shorter sentences): The registers and modalities from which the corpora originate differ markedly across the languages. Note2, 10.5281/zenodo.7566835References 52. It is these factors, rather than meaningful typological differences, that affect most the sentence lengths in our datasetNote2. https://doi.org/10.5281/zenodo.7566835 . References 52. The corpora were downloaded from http://hdl.handle.net/11234/1-4923. 53. The mean sentence lengths in each language should not be interpreted as typologically meaningful (e.g., as in highly inflected languages resulting in shorter sentences): The regis- ters and modalities from which the corpora originate differ markedly across the languages. It is these factors, rather than meaningful typological differences, that affect most the sentence lengths in our dataset.
. A Cailey, Quart. J. Pure Appl. Math. 23376A. Cailey, Quart. J. Pure Appl. Math. 23, 376 (1889).
. H Prüfer, Arch. Math. Phys. 27742H. Prüfer, Arch. Math. Phys. 27, 742 (1918).
F N Abuali, D A Schoenefeld, R L Wainwright, Proceedings of the 1994 ACM Symposium on Applied Computing. the 1994 ACM Symposium on Applied ComputingF. N. Abuali, D. A. Schoenefeld, R. L. Wainwright, Proceedings of the 1994 ACM Sympo- sium on Applied Computing (1994), p. 242-246.
. G Zhou, M Gen, Eng. Design, Automat, 3157G. Zhou, M. Gen, Eng. Design Automat. 3, 157 (1997).
J Gottlieb, B A Julstrom, G R Raidl, F Rothlauf, Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation, GECCO'01. the 3rd Annual Conference on Genetic and Evolutionary Computation, GECCO'01San Francisco, CA, USAMorgan Kaufmann Publishers IncJ. Gottlieb, B. A. Julstrom, G. R. Raidl, F. Rothlauf, Proceedings of the 3rd Annual Confer- ence on Genetic and Evolutionary Computation, GECCO'01 (Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2001), p. 343-350.
. S Kullback, R A Leibler, Ann. Math. Stat. 2279S. Kullback, R. A. Leibler, Ann. Math. Stat. 22, 79 (1951).
. B Effron, Ann. Stat. 71B. Effron, Ann. Stat. 7, 1 (1979).
| [] |
[
"Do self-supervised speech models develop human-like perception biases?",
"Do self-supervised speech models develop human-like perception biases?"
] | [
"Juliette Millet juliette.millet@cri-paris.org \nCoML, ENS/CNRS/EHESS/INRIA/PSL\nCoML\nENS/CNRS\nEHESS\nINRIA/PSL\nLLF\nUniversity of Paris\nCNRS\nCRI\nIIFR\nUniversity of Paris\nParis, ParisFANFrance, France\n",
"Ewan Dunbar ewan.dunbar@utoronto.ca \nUniversity of Toronto\nTorontoCanada\n"
] | [
"CoML, ENS/CNRS/EHESS/INRIA/PSL\nCoML\nENS/CNRS\nEHESS\nINRIA/PSL\nLLF\nUniversity of Paris\nCNRS\nCRI\nIIFR\nUniversity of Paris\nParis, ParisFANFrance, France",
"University of Toronto\nTorontoCanada"
] | [
"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics"
Self-supervised models for speech processing form representational spaces without using any external labels. Increasingly, they appear to be a feasible way of at least partially eliminating costly manual annotations, a problem of particular concern for low-resource languages. But what kind of representational spaces do these models construct? Human perception specializes to the sounds of listeners' native languages. Does the same thing happen in self-supervised models? We examine the representational spaces of three kinds of state-of-the-art self-supervised models: wav2vec 2.0, HuBERT and contrastive predictive coding (CPC), and compare them with the perceptual spaces of French-speaking and English-speaking human listeners, both globally and taking account of the behavioural differences between the two language groups. We show that the CPC model shows a small native language effect, but that wav2vec 2.0 and HuBERT seem to develop a universal speech perception space which is not language specific. A comparison against the predictions of supervised phone recognisers suggests that all three self-supervised models capture relatively fine-grained perceptual phenomena, while supervised models are better at capturing coarser, phone-level effects of listeners' native language on perception. | 10.18653/v1/2022.acl-long.523 | [
"https://www.aclanthology.org/2022.acl-long.523.pdf"
] | 248,779,985 | 2205.15819 | d4c2e2f22f00bbadc6bc566a377c6759992c35ee |
Do self-supervised speech models develop human-like perception biases?
Association for Computational Linguistics. Copyright Association for Computational Linguistics. May 22-27, 2022. © 2022
Juliette Millet juliette.millet@cri-paris.org
CoML, ENS/CNRS/EHESS/INRIA/PSL
CoML
ENS/CNRS
EHESS
INRIA/PSL
LLF
University of Paris
CNRS
CRI
IIFR
University of Paris
Paris, ParisFANFrance, France
Ewan Dunbar ewan.dunbar@utoronto.ca
University of Toronto
TorontoCanada
Do self-supervised speech models develop human-like perception biases?
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
the 60th Annual Meeting of the Association for Computational Linguistics, Volume 1. Association for Computational Linguistics. May 22-27, 2022. © 2022
Self-supervised models for speech processing form representational spaces without using any external labels. Increasingly, they appear to be a feasible way of at least partially eliminating costly manual annotations, a problem of particular concern for low-resource languages. But what kind of representational spaces do these models construct? Human perception specializes to the sounds of listeners' native languages. Does the same thing happen in self-supervised models? We examine the representational spaces of three kinds of state-of-the-art self-supervised models: wav2vec 2.0, HuBERT and contrastive predictive coding (CPC), and compare them with the perceptual spaces of French-speaking and English-speaking human listeners, both globally and taking account of the behavioural differences between the two language groups. We show that the CPC model shows a small native language effect, but that wav2vec 2.0 and HuBERT seem to develop a universal speech perception space which is not language specific. A comparison against the predictions of supervised phone recognisers suggests that all three self-supervised models capture relatively fine-grained perceptual phenomena, while supervised models are better at capturing coarser, phone-level effects of listeners' native language on perception.
Introduction
Recent advances in speech recognition and representation learning show that self-supervised pretraining is an excellent way of improving performance while reducing the amount of labelled data needed for training. For example, for the LibriSpeech dataset (Panayotov et al., 2015), the current best word error rates (Xu et al., 2021; Zhang et al., 2020) are obtained by systems based on the self-supervised wav2vec 2.0 model (Baevski et al., 2020). Systems using self-supervised pre-training, both using wav2vec 2.0 and using HuBERT (Hsu et al., 2021a,b), show excellent word error rates after having been fine-tuned on only ten minutes of labelled data.
What is the effect of this self-supervised pretraining? What type of representational spaces are learned by these models? Lakhotia et al. (2021) compared wav2vec 2.0, HuBERT, and contrastive predictive coding (CPC: Oord et al. 2017;Rivière and Dupoux 2021) using an ABX discriminability metric (Schatz, 2016), demonstrating that all three models preserve and enhance linguistically relevant speech sound contrasts in the language they are trained on. We build on this work, asking how these representational spaces compare to the perceptual spaces of human listeners, as inferred from behaviour on phone discrimination experiments.
Human listeners develop speech perception biases under the influence of their native languages. For example, Japanese native speakers tend to confuse the English sounds /r/ and /l/ (Yamada and Tohkura, 1990) (right and light in English will be perceived as the same or very similar), and English native speakers struggle with the French contrast /y/-/u/ (Levy, 2009), having difficulty perceiving the difference between words such as rue (/y/: "street") and roue (/u/: "wheel"). These misperceptions start to show early in the native language acquisition process: infants older than 6 months exhibit a facilitating effect at discriminating sounds from their native language, but a decline at doing so for some non-native sounds (Kuhl et al., 2006). As this improvement for native sounds and this decline for non-native sounds seem to have a positive impact on infants' future language ability (Tsao et al., 2004; Kuhl et al., 2005), having a perceptual space with native language biases is probably essential for correctly perceiving and understanding native speech in all situations (with environmental noise, speaker changes, etc.). If our goal is to have speech models that are as resilient and as adaptable as humans, it is thus interesting to see whether they present the same native-language-specific biases.
By measuring human listeners' ability to discriminate a variety of familiar and unfamiliar speech sounds, we can create a detailed profile of listeners' perceptual biases in the form of a set of sounds' discriminabilities. We then ask whether the training language influences self-supervised speech models in the same way that human listeners' native languages do.
In order to study speech models' perception biases and compare them with humans', we use the Perceptimatic benchmark datasets, 1 a collection of experimental speech perception data intended to facilitate comparison with machine representations of speech. As of this writing, Perceptimatic contains French-and English-speaking participants' behaviour on discrimination tasks for phones in six different languages, for a total of 662 phone contrasts, along with the sound stimuli used during the experiments.
As in Lakhotia et al. (2021), we test state-of-the-art self-supervised models: wav2vec 2.0 (Baevski et al., 2020), HuBERT (Hsu et al., 2021a,b) and a CPC model (Rivière and Dupoux, 2021). We train these models on English and French speech recordings (the native languages of the participants in Perceptimatic). We compare the performance of these self-supervised models with a supervised ASR model, DeepSpeech (Amodei et al., 2016), trained on the same data but using phonemic labels. To study the degree to which the models' representational space is impacted by properties of speech per se, we also train the same models on recordings of acoustic scenes not including human vocalisations (environmental noises, animal sounds, music, and so on). We use mel-frequency cepstrum coefficients (MFCCs) as an acoustic baseline.
We show that: (1) self-supervised models trained on speech recordings are better than models trained on acoustic scenes (non-speech) at discriminating speech sounds and at predicting human discrimination behaviour; (2) they are good at predicting human discrimination behaviour at the stimuli level, but they are worse than neutral acoustic features when we average human results per contrast; and (3) they show very little native (training) language effect.
All our code and data are freely available. 2
1 https://docs.cognitive-ml.fr/perceptimatic/
2 https://github.com/JAMJU/Sel_supervised_models_perception_biases
Related work
We are not the first to compare speech models' representational spaces with humans. Feather et al. (2019) used metamers as a tool to compare deep neural networks with humans. In a comparison between three speech recognition models, including a fine-tuned wav2vec 2.0 model, Weerts et al. (2021) showed that wav2vec 2.0 was the best at matching human low-level psycho-acoustic behaviour. However, the model exhibited clear differences with respect to humans, showing, for example, heightened sensitivity to band-pass filtering and an under-reliance on temporal fine structure. To perform a comparison at a slightly higher level of speech perception, Scharenborg et al. (2018) visualised a supervised ASR model's internal representations of different speech sounds to investigate its adaptation to new ambiguous phone categories and compare it to humans' behaviour.
Multiple datasets containing human behavioural data have been collected and openly released to encourage comparison of models with humans. It is for this reason that the Interspeech 2008 Consonant Challenge (Cooke and Scharenborg, 2008) and the OLLO database (Meyer et al., 2010), containing humans' phone identification behaviour in different paradigms, were created. This is also the case for the datasets making up the Perceptimatic database (Millet et al., 2019; Millet and Dunbar, 2020a,b; Millet et al., 2021) that we employ in this article, which were individually used to study less well-performing models than the ones we use here.
More than just informing us about the kind of information speech models learn, comparing them with humans can have a broader impact on our knowledge of how humans perceive speech, and how they learn to do so. Schatz et al. (2021) showed, for example, that a simple self-supervised speech model reproduces the reduced sensitivity to the English [r]/[l] contrast when trained on Japanese speech recordings. Pointing to the fact that the model used lacks abstract phone categories, the authors proposed an alternative to standard explanations of early phonetic learning in infants, as theories about this phenomenon rely heavily on the notion of phone categories.
With a similar method, Matusevych et al. (2020) tested the ability of various self-supervised speech models to reproduce infants' discrimination behaviour in multiple languages for a small set of pairs of sounds. However, no quantitative comparison with behavioural data was made. Within the same test framework, Schatz and Feldman (2018) showed that a neural network trained to perform phone recognition was better at qualitatively reproducing Japanese and English native speakers' discrimination behaviour than an HMM-GMM model, focusing once again on the [r]/[l] pair of sounds and also on vowel length differences. In this paper, we decide to: (i) evaluate different self-supervised speech models on more contrasts than these previous works; (ii) directly compare their results with human behaviour; and (iii) measure models' similarity to humans at the stimuli level in addition to the contrast level.
Methods
Human ABX test
Our probes of human speech perception use ABX phone discrimination tests, in which participants hear three speech extracts: A, B and X (an A/B/X triplet). A and B always differ in exactly one phone, and X is always (a distinct recording of) the same sequence of phones as either A or B (for example, A: /pap/, B: /pip/, X: /pap/). We ask the participants to indicate which of the first two sounds (A or B) is the most similar to the last sound (X). The ability of the participants to select the correct (target) rather than the distractor (other) speech extract indicates how well the population tested can discriminate the two phone categories p1 and p2 that target and other belong to (in our example, /i/ and /a/). We call p1:p2 a contrast. In this paper, we examine the results of monolingual French- and English-speaking participants.
Using models to predict
As in previous works (Millet et al., 2019;Millet and Dunbar, 2020a,b;Millet et al., 2021), to test models in the same way as participants, we extract a representation M for each of the three stimuli making up each A/B/X triplet in the experiment. We compute, for a triplet target/other/X, each model's ∆-value:
∆ = DTW(M_other, M_X) − DTW(M_target, M_X)    (1)
with DTW being a distance obtained using dynamic time warping to aggregate a frame-level cosine distance along the warping path. The larger (more positive) the ∆-value obtained, the better the model is at discriminating the target and other phone categories. In our comparison between humans' and models' discrimination behaviour, we will generally use the raw ∆-values. The accuracy of the model on a specific triplet, independent of human listeners' behaviour, can also be computed by considering the model to be correct if the corresponding ∆ value is greater than zero and incorrect otherwise. Below, we will refer to this objective accuracy as an ABX score.
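The sketch below illustrates how such a ∆-value (and the derived ABX score) could be computed from frame-level representations. The function names, the use of librosa's DTW routine, and the normalisation by path length are our own illustrative choices, not details taken from the systems described in this paper.

```python
# A minimal sketch of Equation (1), assuming each stimulus is represented
# as a [n_frames, dim] matrix of frame-level features.
import numpy as np
import librosa
from scipy.spatial.distance import cdist

def dtw_cosine(rep_a: np.ndarray, rep_b: np.ndarray) -> float:
    """Aggregate a frame-level cosine distance along the DTW warping path."""
    cost = cdist(rep_a, rep_b, metric="cosine")      # [Ta, Tb] frame-to-frame distances
    acc_cost, path = librosa.sequence.dtw(C=cost)    # accumulated cost + warping path
    # Normalising by path length is an implementation choice, not stated in the text.
    return acc_cost[-1, -1] / len(path)

def delta(rep_target: np.ndarray, rep_other: np.ndarray, rep_x: np.ndarray) -> float:
    """Equation (1): positive values mean the model matches X to the target."""
    return dtw_cosine(rep_other, rep_x) - dtw_cosine(rep_target, rep_x)

def abx_correct(rep_target, rep_other, rep_x) -> bool:
    """ABX score for one triplet: the model is counted correct when delta > 0."""
    return delta(rep_target, rep_other, rep_x) > 0
```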
Models
We compare self-supervised speech models to see if the representational spaces they develop during training on a language resemble humans' perceptual spaces. We choose to test three state-of-the-art self-supervised models: contrastive predictive coding (CPC), the basis for the current best-performing systems on the Zero Resource Speech Challenge evaluation (Dunbar et al., 2021); wav2vec 2.0; and a HuBERT model. These last two models obtain excellent word error rates on the task of semi-supervised speech recognition (self-supervised pretraining plus supervised fine-tuning on a small corpus).
As we use behavioural data from French- and English-speaking participants, models are trained on either French or English recordings. To test for the impact of training on speech recordings compared to other types of sounds, we also train the models on recordings of acoustic scenes (non-speech). We choose one specific output layer for each model, using the one that obtains the best result in terms of human similarity.
We use classic acoustic features as a baseline, using the first 13 mel-frequency cepstrum coefficients (MFCCs), calculated using LIBROSA, 3 with a window of 25 ms and a stride of 10 ms. We also train DeepSpeech (Amodei et al., 2016) as a supervised reference.
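For concreteness, the MFCC baseline can be reproduced roughly as follows; beyond the 13 coefficients, the 25 ms window and the 10 ms stride, the remaining librosa parameters are assumptions of this sketch.

```python
# A minimal sketch of the MFCC baseline, assuming 16 kHz audio:
# 13 coefficients, 25 ms windows (400 samples), 10 ms stride (160 samples).
import numpy as np
import librosa

def mfcc_features(path: str) -> np.ndarray:
    wav, sr = librosa.load(path, sr=16000, mono=True)
    feats = librosa.feature.mfcc(
        y=wav, sr=sr, n_mfcc=13,
        n_fft=400, win_length=400, hop_length=160,
    )
    return feats.T  # shape [n_frames, 13], one row per 10 ms frame
```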
Contrastive predictive coding
We use a light version of a model that uses contrastive predictive coding (CPC: Rivière et al. 2020). This model is smaller than HuBERT or wav2vec 2.0, as it is only made up of 5 convolutions (the encoder) and one LSTM layer (the sequence model). It is trained using a contrastive loss. For a sequential input x = (x_1, ..., x_t, ..., x_T), at time t, given the output of the sequence model, the loss pushes the model to distinguish the K next outputs of the encoder from randomly sampled outputs taken from another part of x. The detailed loss can be found in Appendix A. We use the output of the sequence model as representations for the CPC model.
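The following schematic PyTorch loss conveys the idea (one prediction head per future step, negatives drawn from elsewhere in the same sequence); it is not the CPC_audio implementation, and the sampling scheme and tensor shapes are simplifications.

```python
# A schematic CPC-style contrastive loss: the context c_t must pick out the
# true encoder output z_{t+k} among n_neg randomly drawn negatives.
import torch
import torch.nn.functional as F

def cpc_loss(c, z, predictors, K=12, n_neg=128):
    """c: [B, T, D] sequence-model outputs; z: [B, T, D] encoder outputs;
    predictors: list of K linear layers, one per prediction step."""
    B, T, D = z.shape
    total = 0.0
    for k in range(1, K + 1):
        pred = predictors[k - 1](c[:, :T - k])              # predictions for step k
        pos = (pred * z[:, k:]).sum(-1, keepdim=True)        # scores of true future frames
        idx = torch.randint(0, T, (B, T - k, n_neg), device=z.device)
        neg_z = torch.gather(
            z.unsqueeze(1).expand(B, T - k, T, D), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, D))          # randomly sampled negatives
        neg = torch.einsum("btd,btnd->btn", pred, neg_z)      # scores of the negatives
        logits = torch.cat([pos, neg], dim=-1)                # true frame is class 0
        labels = torch.zeros(B, T - k, dtype=torch.long, device=z.device)
        total = total + F.cross_entropy(logits.reshape(-1, n_neg + 1), labels.reshape(-1))
    return total / K
```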
Wav2vec 2.0
We test wav2vec 2.0 (Baevski et al., 2020). The model is made up of three elements: an encoder, a quantizer, and a decoder. The encoder is made up of five convolutional layers, the quantizer is a dictionary of possible representations, and the decoder is made up of 12 transformer layers. When an input z is given to the quantizer, it outputs the representation q from the dictionary that is the closest to the input. For an input x, wav2vec 2.0 uses the encoder to transform it into z, which is then quantized into q, and in parallel z is directly passed to the decoder to obtain a context representation c.
Like the CPC model, wav2vec 2.0 is trained using a contrastive loss L_m. Unlike the CPC model, it uses masking. Given a decoder representation of the context around some masked time step t, the loss pushes the model to identify the true quantized speech representation q_t from among a set of K+1 quantized candidate representations q̃ ∈ Q_t, including q_t and K distractors uniformly sampled from other masked time steps in the same utterance (see Appendix A for details). We analyse the fifth layer of the decoder.
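A schematic version of this masked contrastive objective is sketched below: at each masked step, the context vector must identify the true quantized frame among the distractors via cosine similarity. The temperature value is an assumption of the sketch, not something stated in the text above.

```python
# A schematic masked contrastive objective in the style described above.
import torch
import torch.nn.functional as F

def masked_contrastive_loss(c_masked, q_true, q_distractors, temperature=0.1):
    """c_masked: [N, D] context vectors at masked steps;
    q_true: [N, D] true quantized frames;
    q_distractors: [N, K, D] distractors sampled from other masked steps."""
    candidates = torch.cat([q_true.unsqueeze(1), q_distractors], dim=1)   # [N, K+1, D]
    sims = F.cosine_similarity(c_masked.unsqueeze(1), candidates, dim=-1) / temperature
    labels = torch.zeros(len(sims), dtype=torch.long, device=sims.device)  # true frame at index 0
    return F.cross_entropy(sims, labels)
```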
HuBERT
We also test a HuBERT model (Hsu et al., 2021a,b). This model uses exactly the same architecture as wav2vec 2.0 (except for the quantizer, which is not used), but with a different objective. Its training relies on an unsupervised teacher h (in our case, a K-means algorithm) that assigns a cluster label to each frame. Formally, we have h(X) = Z = [z_1, ..., z_T], with z_t a C-class categorical variable.
HuBERT is trained to guess this cluster assignment for masked and unmasked frames at the same time. The detailed loss can be found in Appendix A.
The unsupervised teacher h is initially a K-means clustering on MFCCs. After a round of training using this initial teacher, h is replaced by a K-means model trained on the output of the sixth transformer layer of the model, and training restarts from scratch. We analyse the output of the sixth transformer layer.
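As an illustration, the teacher can be approximated with an off-the-shelf clustering step; MiniBatchKMeans and the helper names below are our own choices, not part of the Fairseq HuBERT recipe.

```python
# A minimal sketch of the unsupervised teacher h: a K-means model fitted on
# MFCC frames assigns one of C cluster labels to every frame.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def fit_teacher(mfcc_frames: np.ndarray, n_clusters: int = 50) -> MiniBatchKMeans:
    """mfcc_frames: [n_frames, 13] MFCCs stacked over a subset of the training data."""
    return MiniBatchKMeans(n_clusters=n_clusters, n_init=10).fit(mfcc_frames)

def frame_labels(teacher: MiniBatchKMeans, utterance_mfcc: np.ndarray) -> np.ndarray:
    """Returns z_1..z_T, one C-class label per frame, used as HuBERT targets."""
    return teacher.predict(utterance_mfcc)
```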
Supervised reference: DeepSpeech
As a supervised reference system, we test a trained DeepSpeech model (Amodei et al., 2016). This model is not too intensive to train, is known to obtain reasonable ASR results, and has previously been compared to human speech perception (Millet and Dunbar, 2020b;Weerts et al., 2021). We train it to generate phonemic transcriptions.
DeepSpeech is composed of two convolutional layers followed by five RNN layers and a fully connected layer. The model is trained using spectrograms as input and a CTC loss, without a language model. We use representations extracted from the fourth RNN layer of the model, as it seems to give the best results, both in terms of absolute phone discriminability and for predicting human behaviour.
Comparing humans' and models' perceptual spaces
In order to compare humans' and models' perceptual spaces, we use two metrics: the log-likelihood (ℓ) of a binary regression model on the experimental responses, and the Spearman's ρ correlation between the average of the model's ∆-values and participants' accuracies averaged within each phone contrast. These allow for predictions at two levels of granularity: the discriminability of individual experimental items (ℓ) and the overall discriminability of pairs of phones (ρ). In the default (native) setting, French-trained models are used to predict French-speaking participants' discrimination results, and similarly for English. See below for details. For each model tested (see Section 3.3), we fit a probit regression to predict the binary responses of the participants (coded as correct or incorrect) using as a predictor the ∆ values obtained from the model's representational space. In addition to a global intercept, the regression has other predictors to account for various nuisance factors: whether the right answer was A (1) or B (0); the order of the trial in the experimental list; a categorical predictor for the participant; and another for the Perceptimatic subset the result belongs to. We fit the model with an L1 regularisation (lasso). The ℓ is obtained from the fitted regression model: the larger (less negative) the ℓ, the better the given model's ∆ values predict the experimental data; thus, the more similar the model's representational space is to the perceptual space of the experimental participants.
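A hedged sketch of this item-level analysis is given below, using statsmodels' probit with an L1 penalty; the column names and the penalty strength are illustrative, since the text does not specify them.

```python
# A sketch of the probit regression with L1 (lasso) regularisation used to
# compute the log-likelihood metric; column names are invented for illustration.
import pandas as pd
import statsmodels.api as sm

def fit_probit_loglik(trials: pd.DataFrame) -> float:
    """trials needs columns: correct (0/1), delta, right_answer_is_A (0/1),
    trial_order, participant, subset."""
    X = pd.concat(
        [trials[["delta", "right_answer_is_A", "trial_order"]],
         pd.get_dummies(trials["participant"], prefix="subj", drop_first=True),
         pd.get_dummies(trials["subset"], prefix="set", drop_first=True)],
        axis=1).astype(float)
    X = sm.add_constant(X)
    result = sm.Probit(trials["correct"].astype(float), X).fit_regularized(
        method="l1", alpha=1.0, disp=False)  # alpha (penalty strength) is an assumption
    return result.llf  # higher (less negative) = more human-like predictions
```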
We complement the log-likelihood metric with a correlation statistic. We compute the Spearman correlation (ρ), a correlation between the ranks of participants' accuracies (using their gradient results if available) and models' ∆-values, both averaged at the level of the phone contrast (zero indicates no correlation, one indicates a perfect monotonic relation). This measure averages out effects of individual A/B/X stimuli below the level of the phone contrast.
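The contrast-level ρ can be computed along these lines (column names again illustrative):

```python
# A small sketch of the contrast-level Spearman correlation: average the
# model's Delta values and the participants' accuracies within each phone
# contrast, then correlate the ranks.
import pandas as pd
from scipy.stats import spearmanr

def contrast_level_rho(trials: pd.DataFrame) -> float:
    """trials needs columns: contrast, delta (model), correct (participant 0/1)."""
    per_contrast = trials.groupby("contrast").agg(
        mean_delta=("delta", "mean"), accuracy=("correct", "mean"))
    rho, _ = spearmanr(per_contrast["mean_delta"], per_contrast["accuracy"])
    return rho
```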
Comparing native language biases
Beyond global measures of how well models' representational spaces correspond to human listeners' perceptual spaces, we seek to assess how well the models reproduce group differences caused by the participants' native languages. One could think that humans are very good at discriminating all the sounds from their native language, and that they struggle to differentiate all the sounds from other languages. But reality is more complex than that: some contrasts are equally difficult or easy (even if they are not native) to discriminate for different language groups. The only way to accurately study native language biases is to focus on the relative discrimination difficulties shown by different language groups when listening to the same contrasts.
We present a method which evaluates the ability of the models to directly predict the relative difficulty of contrasts across the two language groups in our dataset. In other words, we measure whether the models, when trained on French and English, show the same differences in discrimination behaviour as French- and English-speaking participants.
We first normalise the ∆ values obtained by each model by dividing by their standard deviation (within model/training condition, across all A/B/X triplets), in order to put the ∆ values on the same scale for the two models. We average the normalised ∆ values by contrast. We then calculate the overall accuracies for each phone contrast in the listening experiment.
We calculate difference scores: for each phone contrast, we subtract an English model's average ∆ values from the average ∆ value for the corresponding French-trained model. We do the same with the English-speaking and the French-speaking participants' contrast-level accuracy scores. This yields a measure of the native language effect for each phone contrast, for each model, and similarly for the human participants.
For each model, we compute a Pearson correlation between its contrast-level native language effects and those of human listeners. The closer the correlation is to one, the better the phone-level native language effects are captured by a given model.
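Concretely, the contrast-level native-language-effect score could be computed as follows; the data layout and column names are assumed for illustration.

```python
# A sketch of the native-language-effect score: per-model Delta values are
# scaled by their global standard deviation, averaged by contrast, differenced
# (French minus English), and correlated (Pearson) with the listeners'
# per-contrast accuracy differences.
import pandas as pd
from scipy.stats import pearsonr

def native_effect_correlation(model_fr, model_en, humans_fr, humans_en):
    """model_*: DataFrames with columns contrast, delta (one row per triplet);
    humans_*: DataFrames with columns contrast, accuracy (one row per contrast)."""
    d_fr = (model_fr["delta"] / model_fr["delta"].std()).groupby(model_fr["contrast"]).mean()
    d_en = (model_en["delta"] / model_en["delta"].std()).groupby(model_en["contrast"]).mean()
    model_effect = d_fr - d_en
    human_effect = (humans_fr.set_index("contrast")["accuracy"]
                    - humans_en.set_index("contrast")["accuracy"])
    common = model_effect.index.intersection(human_effect.index)
    r, _ = pearsonr(model_effect[common], human_effect[common])
    return r  # closer to 1 = better capture of contrast-level native effects
```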
Because this score calculates a native language effect independently for the models and for the participants, it is not susceptible to the same confounds as an approach which would derive the native language effect from a comparison of two different (and thus not necessarily comparable) models' fit to the data. Note, however, that the approach we propose is restricted to predicting contrast-level effects of native language.
Experiments
The Perceptimatic dataset
For the human data, we use five experiments from the Perceptimatic benchmark dataset, 4 containing the results of French- and English-speaking participants on ABX phone discrimination experiments. Stimuli come from French, English, Brazilian Portuguese, Turkish, Estonian, and German, and test a variety of contrasts between vowel and consonant sounds, some of which are familiar, and some of which are unfamiliar, to the listeners. The five datasets use different kinds of stimulus triplets, including short three-phone extracts cut from running speech (Zero Resource Speech Challenge 2017 and Pilot July 2018 datasets), as well as read-speech nonwords, which highlight English consonants and vowels (Pilot August 2018), compare English with French vowels in a crosslinguistic task (Cogsci-2019), or highlight vowel contrasts in a variety of languages (WorldVowels). The combined dataset contains 4231 distinct triplets (each of which is sometimes presented to participants in the order target/other/X, sometimes in the order other/target/X), which test 662 phone contrasts, and it contains data from 259 French-speaking participants and 280 English-speaking participants (not the same participants for all stimuli).
Models' training
The speech models we use are trained on 600-hour subsets of either the English or the French CommonVoice dataset (Ardila et al., 2019). To train DeepSpeech as a phone recognizer, the text transcriptions included in CommonVoice are phonemized using eSpeakNG. 5 When English-trained models are used to predict English-speaking participants' results and French-trained models for French-speaking participants', we refer to the trained models as nat-cpc, nat-w2v, nat-hub, and nat-deep.
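The phonemization step can be reproduced, for example, with the phonemizer package's eSpeakNG backend; the exact tool used to wrap eSpeakNG is not specified in the text, so this is only one plausible option.

```python
# A hedged illustration of generating phonemic targets for DeepSpeech with
# an eSpeakNG backend (the phonemizer package is an assumption, not a detail
# taken from the paper).
from phonemizer import phonemize

text = "merci beaucoup"
phones = phonemize(text, language="fr-fr", backend="espeak", strip=True)
print(phones)  # a phone string used as the transcription target
```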
To measure the impact of training on speech versus non-speech audio, the self-supervised models are also trained on a 595-hour subset of the Audioset dataset (Gemmeke et al., 2017) containing no human vocalizations. 6 We refer to these models as aud-cpc, aud-w2v, and aud-hub.
Each dataset is split randomly into train (80%), test (10%) and validation (10%) sets. All recordings are resampled to 16,000 Hz and converted to mono using sox. 7 For the CPC model, we use the Facebook Research implementation 8 with all the default parameters. We train the model for 110 epochs and take the models that present the best loss on the validation set.
For wav2vec 2.0, we use the Fairseq Base implementation, 9 using the LibriSpeech configuration. As in Baevski et al. (2020), we train the models for 400k updates and take the model with the best loss on the validation set.
For HuBERT, we also use the Fairseq Base implementation 10 and the LibriSpeech configuration. We follow all the training settings of Hsu et al. (2021a): our first-pass training takes its unsupervised teacher labels from a K-means algorithm with 50 clusters on the MFCCs for 10% of the training set, training for 250k updates. We then extract the representations of the training set from the sixth transformer layer and use these representations to train a new K-means with 100 clusters, and re-train the model using these categories as the teacher for 450k updates. We use the model with the best loss on the validation set.
We use a PyTorch implementation of DeepSpeech. 11 We train the models for 150 epochs (to reach an overfitting point), saving a checkpoint of the model for each epoch. We then take the checkpoint that produces the best result in terms of Phone Error Rate (PER) on the validation set. We use specaugment (Park et al., 2019) to improve the model performance. The French model obtains 7.8% PER on the French test set and the English model obtains 22.75% PER on the English test set.
Results
In all graphs, statistical significance of comparisons is evaluated by bootstrapping over participants' results (N = 10000); redundant statistical comparisons are omitted for clarity (i.e. C > A is omitted when C > B and B > A). Confidence intervals shown are 95% bootstrap intervals.
Overall accuracy
Before using models' representational spaces to predict human discrimination behaviour, we look at how well models discriminate phones in their training language. We use the sign (positive/negative) of the ∆ values to calculate the objective accuracy of selecting the target phone (ABX scores). For interpretability, we calculate scores only on the subsets of Perceptimatic containing monolingual English and French stimuli which were presented to listeners in their native language (Zero Resource Speech Challenge 2017, WorldVowels, and Pilot August). Results are shown in Table 1. In general, native self-supervised models obtain scores as good as or better than the supervised reference and human listeners, with a small preference for the nat-w2v model. They show a clear improvement over the corresponding models trained on acoustic scenes (non-speech). Certain datasets present more difficulties for the self-supervised models relative to nat-deep, notably the English read-speech nonwords (from the WorldVowels and Pilot August subsets). Further details and comparison of ABX scores between native and non-native settings can be found in Appendix C.
Predicting human listeners
To assess how well self-supervised models' representational spaces match humans' perceptual spaces for speech, we compute the log-likelihood (ℓ) and the Spearman correlation (ρ) metrics over the entire Perceptimatic dataset (see Section 3.4) in the native-language training condition. Results can be seen in Figure 1. First, we note that the models' performance appears to be closely tied to training on speech, rather than simply on natural audio. Indeed, the models trained on acoustic scenes (non-speech) consistently perform worse than the native-trained models and MFCCs, on both measures.
For the ℓ metric, nat-w2v does at least as well as, or (for French) somewhat better than, the supervised reference at modelling human listeners' perceptual confusions; most native self-supervised models perform similarly. Self-supervised models appear to learn representational spaces at least as similar to human native listeners' as our supervised phone recogniser when measured in this way.
The ρ metric, which correlates models' with humans' average dissimilarity (∆ or accuracy) for each phone contrast, reveals a different pattern. Here, nat-deep performs best. Furthermore, native self-supervised models perform worse than generic MFCC features. This suggests a component of human speech perception that is poorly captured by self-supervised models at the contrast level. (On some subsets, notably the WorldVowels set of familiar and unfamiliar vowel contrasts, self-supervised models are better than MFCCs, but they are still worse than our supervised reference; see Appendix B.)
To confirm the difference in results between the contrast level (the ρ metric) and the stimuli level (the ℓ metric), we compute the Spearman correlation metric at the stimuli level, averaging participants' results over stimuli instead of over contrasts, for both models and humans. The results of this analysis can be found in Figure 2. We notice that this new analysis, done at the stimuli level, gives similar results to our log-likelihood metric. This supports the idea that the self-supervised models' poor results on the original ρ metric are due to the averaging over contrasts.
To illustrate the comparisons at the level of phone contrasts, in Figure 3 we plot the average accuracy (per contrast) for French-speaking participants' results against (left) DeepSpeech trained on French, one of the best-performing models, and (right) wav2vec 2.0 trained on AudioSet (aud-w2v), one of the models that is the least similar to humans.
Native language biases
To look for the presence of human-like native language biases, we look at the ability of native models to predict the difference in behaviour between the French- and the English-speaking groups (see Section 3.5). Figure 4 (left) shows the native language effect assessed over the entire Perceptimatic dataset, that is, the correlation, at the contrast level, between the differences in ∆ across language-training conditions, on the one hand, and the differences in accuracy for the two listener groups, on the other. Nat-cpc is competitive with nat-deep at predicting differences between groups. Nat-hub and nat-w2v, on the other hand, show very little native language effect. Figure 4 (right) shows the same analysis, but on only the WorldVowels dataset. The stimuli in this dataset are constructed to specifically induce different discrimination behaviour between the two language groups. Here, nat-deep shows a much better ability to predict native language effects, both in absolute terms and relative to the other models.
Figure 4: Native language effect for each model; the bigger the bar, the better the model captures language specificities in the discrimination behaviour between the two groups. Stars indicate that the pairwise difference is significant. The supervised reference is in white to distinguish it from the self-supervised models in light grey.
As this analysis is done at the level of phone contrasts, and not of individual stimuli, one might think that our supervised reference model, being trained to produce phonemic transcriptions, has a head start at predicting differences in discrimination behaviour driven by phone categories. To look at this more precisely, we compute our native effect at the stimuli level instead of the contrast level. The results of this analysis can be seen in Figure 5, for the whole dataset and for the WorldVowels subset. Going to the stimuli level radically reduces the native effect measured. This is expected, as the number of participants' results per stimulus is small, and the effect measured on humans is thus very noisy at this level, and therefore harder for the models to reproduce. However, we notice that our supervised reference and the CPC model are still the ones that exhibit the most native language effect.
Figure 5: Native language effect for each model; the bigger the bar, the better the models capture language specificities in the discrimination behaviour between the two groups. Stars indicate that the pairwise difference is significant. The supervised reference is in white to distinguish it from the self-supervised models in light grey.
Discussion
We showed that the self-supervised models we tested seem to learn representational spaces relevant for predicting human phone discrimination at the stimuli level. However, while humans show consistent discrimination behaviour for certain contrasts, whatever the stimuli, the self-supervised models we test do not capture systematic effects of contrasts between specific pairs of phones. Unlike our supervised reference, their similarity to human perceptual spaces is limited to capturing the discriminability of specific individual stimuli. The models tested were similar, but wav2vec 2.0 showed a slight advantage for predicting this kind of behaviour.
We have also shown that training on speech data is essential to obtaining a human-like perceptual space: for all of our metrics (ABX accuracy or similarity to humans), training on speech leads to better results than training on acoustic scenes (non-speech). This strongly suggests that the benefits of self-supervised speech models come from learning characteristics of human speech, not simply from the fact that they are better general audio features. We speculate that this is important not just to their ability to predict human speech perception and to discriminate phones, but also to their (related) utility for downstream tasks such as ASR.
What these models learn about speech, however, is not typically language-specific, at least not in the same way that human perception is. Wav2vec 2.0 and HuBERT do not model language-specific differences in human speech perception, and can be seen as modelling a language-neutral or universal speech perception space. Indeed, they exhibit very little native language effect (see Figures 4 and 5). We note that the idea of self-supervised models learning universal speech features is consistent with the fact that models trained on one language, or multilingually, have proven useful for representing speech in unseen languages (Riviere et al., 2020).
CPC does capture effects of native language on perception at the contrast level, but to a far lesser extent than our supervised reference when we focus on a subset of Perceptimatic designed to capture important differences in discrimination behaviour between our two groups of participants (WorldVowels). Our CPC model differs from the other models tested in its small size, its causal architecture (wav2vec and HuBERT use transformers), and in that it does not use masking during its training. Its architecture is probably the most biologically plausible of the three self-supervised models we tested. We should note, however, that this does not make it the best predictor of human discrimination behaviour among the three models (see Figures 1 and 2). One possible explanation for the limitations we observe in the self-supervised models is insufficient training data: the models in question have generally shown good performance on downstream tasks when pre-trained on large amounts of data. We tested this using available pretrained wav2vec and HuBERT models trained on much larger amounts of data. The detailed results can be found in Appendix E. The models show a slight improvement, but, when looking at the ρ statistic at the phone contrast level, they are still worse than MFCCs.
Contrary to previous results (Millet and Dunbar, 2020a,b), our supervised reference system is quite good at predicting human discrimination behaviour (in particular at the contrast level), and clearly predicts a native language effect. The main differences between our experiment and Millet and Dunbar (2020b) are the type of model (DeepSpeech instead of HMM-GMM), and with respect to Millet and Dunbar (2020a) the type of training objective (phone recognition rather than prediction of orthographic text) and the size of the training corpora (we use less data). Predicting phones rather than orthography seems to be critical (as we demonstrate in Appendix F), and using a neural network instead of a Bayesian model (HMM-GMM) leads to a more human-like representational space, as already highlighted by Schatz and Feldman (2018).
Given the advantage supervised phone recognizers show, a different approach to developing more human-like representational spaces in self-supervised models might be the inclusion of tasks or constraints that push them to take into account longer time scales, in order to encourage them to construct longer, more phone-like units.
B Predicting human results: results on sub-datasets
We present the results on the different Perceptimatic subsets. The results for Cogsci-2019 can be seen in Figure 7, for WorldVowels in Figure 6, for Zerospeech in Figure 10, for pilot-july in Figure 8, and for pilot-august in Figure 9. These results should be interpreted with caution, in particular for the Cogsci subset and the pilots, as relatively few contrasts and stimuli were tested for these subsets compared to the others.
Figure 6: Results on the WorldVowels subset. Log-likelihood values (top: shorter bars are better) and Spearman correlation (bottom: taller bars are better) for French (left) and English participants (right). Stars indicate that the pairwise difference is significant. The supervised reference is in white to distinguish it from the self-supervised model trained on speech recordings (in light grey), and the baselines in darker grey (neutral acoustic features and models trained on acoustic scenes).
C Difference in ABX score between French and English models
To complete Table 1, we present in Figure 11 the detailed ABX score difference between a native discrimination setting (English models and participants discriminating English contrasts, and the same for French) and a non-native discrimination setting (English models and participants discriminating French contrasts, and vice versa). Humans' ABX score differences show that English-speaking participants are not always better than French-speaking participants at discriminating English sounds (for the Zerospeech subsets, for example).
D Language preference
A possible approach to studying models' language specificity would be to see if English-trained models predict English-speaking participants better than French-trained models, and vice versa. We assess whether models in the native training condition predict discriminability better than the corresponding models in the non-native training condition. Figure 12 plots the subtraction of the ℓ and ρ scores in the non-native setting from the corresponding scores in the native setting (across the entire Perceptimatic dataset). For both the (experimental item-level) ℓ score and the (phone contrast-level) ρ score, DeepSpeech consistently outperforms wav2vec 2.0. This is in contrast with the overall prediction performance reported above, where wav2vec 2.0 was on par with DeepSpeech: DeepSpeech generally shows a relative advantage for predicting the behaviour of listeners whose native language is the same as the training language, while wav2vec 2.0 does not.
There is a striking difference between languages in the performance of DeepSpeech: for English, the native DeepSpeech shows a substantial advantage over the non-native (French-trained) DeepSpeech which is not present for the French datasets. Similarly, in French, the native HuBERT shows an advantage over the non-native (English-trained) HuBERT, while the reverse is true in English. However, these two major differences may be in part explained by global effects: the French-trained HuBERT model is better at predicting the results for all participants (not just French-speaking participants), as is the English-trained DeepSpeech model.
E Using pretrained models on more data
We compare our models with pretrained models available online. For English, we tested a wav2vec and a HuBERT model trained on LibriSpeech (Panayotov et al., 2015) (960 h), and for French, we tested a wav2vec model trained on the French VoxPopuli dataset (Wang et al., 2021) (4.5k h). The results of these models compared to ours and MFCCs can be seen in Figure 13. Their ABX scores can also be seen in Table 2.
Figure 11: ABX score difference between the native setting and the non-native setting for the different models tested. The bigger the bar above zero, the bigger the difference.
Figure 12: Native minus non-native log-likelihood values (top) and Spearman correlations (bottom) for French (left) and English participants (right). The higher the bar above zero, the better the native setting is compared to the non-native setting. The supervised reference is in white, the self-supervised models are in light grey. Black lines indicate 95% confidence intervals.
F Testing DeepSpeech using orthographic transcriptions
We tested two kinds of supervised references: one trained to produce phonemic transcriptions (the one used in the main article) and another trained to produce orthographic transcriptions. In general, training on phonemic transcriptions led the internal representations of the model to be closer to humans' perceptual space, as can be seen in Figure 14. A comparison of English-speaking participants' discrimination ability and the two supervised models' ∆-values can also be seen in Figure 15. Models trained on phonemic transcriptions are better at predicting human behaviour than the ones trained on orthographic transcriptions. These results highlight, on the one hand, the impact of the labels used during supervised training, which can lead to a non-human-like speech representational space, and on the other hand, the fact that humans probably use information more similar to phoneme categories than to possible orthographic transcriptions during a discrimination task. The amount of training data may also play a role, as large training sets could lead to "overfitting," in a loose sense, to fine "superhuman" acoustic details of phone classification. Appendix E shows that training size does not have this effect on the self-supervised models studied here. We leave analysis of the supervised case for future work.
Figure 1: Log-likelihood values (top: shorter/higher bars are better) and Spearman correlation (bottom: taller bars are better) for French (left) and English participants (right). Stars indicate that the pairwise difference is significant. The supervised reference is in white to distinguish it from the native self-supervised models in light grey and the baselines in darker grey (neutral acoustic features and models trained on acoustic scenes).

Figure 2: Spearman correlation at the stimuli level (taller bars are better) for French (left) and English participants (right). Stars indicate that the pairwise difference is significant.

Figure 3: Average of French listeners' results (higher: better discrimination) against average δ from (left) the supervised reference trained on phonemic transcriptions and (right) wav2vec trained on non-speech recordings. Each point is a contrast. Measures are normalised by dividing by the standard deviation over the entire data set, so the two scales are comparable. Black circles are non-native contrasts, white ones are native (French).
Figure 10: Results on the Zerospeech subset. Log-likelihood values (top: shorter bars are better) and Spearman correlation (bottom: taller bars are better) for French (left) and English participants (right). Stars indicate that the pairwise difference is significant. The supervised reference is in white to distinguish it from the self-supervised model trained on speech recordings (in light grey), and the baselines in darker grey (neutral acoustic features and models trained on acoustic scenes).

Figure 15: Average of English listeners' results (higher: better discrimination) against average δ from (left) the supervised reference trained on phonemic transcriptions and (right) the supervised reference trained on orthographic transcriptions. Each point is a contrast. Measures are normalized by dividing by the standard deviation over the entire data set. Black circles are non-native contrasts, white ones are native (English).
Speech.11 We train the models for 150 epochs (to reach an overfitting point), saving a checkpoint of the model at each epoch. We then take the checkpoint that produces the best result in terms of Phone Error Rate (PER) on the validation set. We use SpecAugment (Park et al., 2019) to improve the model performance. The French model obtains 7.8% PER on the French test set and the English model obtains 22.75% PER on the English test set.

5 https://github.com/espeak-ng/espeak-ng
6 A complete list of the labels kept can be found in our github: https://github.com/JAMJU/Sel_supervised_models_perception_biases
7 http://sox.sourceforge.net/
8 https://github.com/facebookresearch/CPC_audio
9 https://github.com/pytorch/fairseq/tree/master/examples/wav2vec
10 https://github.com/pytorch/fairseq/tree/master/examples/hubert
Figure 7: Results on the Cogsci-2019 subset. Log-likelihood values (top: shorter bars are better) and Spearman correlation (bottom: taller bars are better) for French (left) and English participants (right). Stars indicate that the pairwise difference is significant. The supervised reference is in white to distinguish it from the self-supervised model trained on speech recordings (in light grey), and the baselines in darker grey (neutral acoustic features and models trained on acoustic scenes).

Figure 8: Results on the pilot-july-2018 subset. Log-likelihood values (top: shorter bars are better) and Spearman correlation (bottom: taller bars are better) for French (left) and English participants (right). Stars indicate that the pairwise difference is significant. The supervised reference is in white to distinguish it from the self-supervised model trained on speech recordings (in light grey), and the baselines in darker grey (neutral acoustic features and models trained on acoustic scenes).

Figure 9: Results on the pilot-august-2018 subset. Log-likelihood values (top: shorter bars are better) and Spearman correlation (bottom: taller bars are better) for French (left) and English participants (right). Stars indicate that the pairwise difference is significant. The supervised reference is in white to distinguish it from the self-supervised model trained on speech recordings (in light grey), and the baselines in darker grey (neutral acoustic features and models trained on acoustic scenes).
Figure 13: Log-likelihood values (top: shorter bars are better) and Spearman correlation (bottom: taller bars are better) for French (left) and English participants (right). Stars indicate that the pairwise difference is significant. The pretrained models are in white to distinguish them from our self-supervised models trained on only 600h of speech.
Table 2: ABX scores of our self-supervised models (nat) compared to pretrained ones (-pret). Best result for each subset is in bold. Models trained on English are evaluated on English-speaking participants (and English contrasts for the ABX scores), and same for French.

Models      Zerospeech        WorldVowels       PA
            FR      EN        FR      EN        EN
w2v-nat     0.88    0.88      0.71    0.83      0.84
w2v-pret    0.85    0.86      0.69    0.84      0.86
hub-nat     0.87    0.87      0.76    0.83      0.82
hub-pret    -       0.89      -       0.89      0.90
mfccs       0.76    0.77      0.73    0.76      0.88
Figure 14: Results of DeepSpeech trained on phonemic transcriptions (phon) or orthographic transcriptions (orth), compared with MFCCs. Log-likelihood values (top: shorter bars are better) and Spearman correlation (bottom: taller bars are better) for French (left) and English participants (right). Stars indicate that the pairwise difference is significant.
https://librosa.org/
See https://docs.cognitive-ml.fr/perceptimatic/ for access to, and more detailed descriptions of, the data.
https://github.com/SeanNaren/deepspeech.pytorch
Acknowledgements

This research was supported by the École Doctorale Frontières du Vivant (FdV) - Programme Bettencourt, by the Connaught Fund and the Arts and Science Tri-Council Bridging Fund, University of Toronto, and by French Agence Nationale de la Recherche grants ANR-17-CE28-0009 (GEOMPHON), ANR-11-IDFI-023 (IIFR), ANR-11-IDEX-0005 (USPC), ANR-10-LABX-0083 (EFL), ANR-17-EURE-0017 Frontcog, ANR-10-IDEX-0001-02 PSL*, ANR-19-P3IA-0001 PRAIRIE 3IA Institute. This work was performed using HPC resources from GENCI-IDRIS (Grant 20XX-AD011012415).

A Detailed losses used by the models

The loss used by the CPC model pushes the model, given an input x_t and an output z_t = ψ(φ(x_1), ..., φ(x_t)) (with ψ the sequential model), to identify the K next outputs φ(x_{t+k}) in the future, in comparison with randomly sampled outputs from another part of x. Here A_k is a learned linear classifier, φ is the encoder, and N_i is the set of negative examples.

In the loss used by the wav2vec 2.0 model, for a masked time step t, the model has to choose the true quantized speech representation q_t in a set of K + 1 quantized candidate representations q̃ ∈ Q_t, which includes q_t and K distractors. The model also uses a diversity loss so that the representations in the quantizer dictionary are as diverse as possible; for more details, see (Baevski et al., 2020).

In the loss used by HuBERT, α ∈ [0, 1], M is the set of masked frames, f is the cluster assignment predictor, and X̃ denotes the masked input frames.
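The display equations for these three losses did not survive extraction from the PDF. As a reference point only, the block below restates the standard formulations of the CPC, wav2vec 2.0, and HuBERT objectives from the cited papers, using the variable names defined above; treat it as a reconstruction rather than the appendix's exact notation.

```latex
% CPC: contrastive prediction of the K future encoder outputs (per step t)
\mathcal{L}_{\text{CPC}} = -\frac{1}{K} \sum_{k=1}^{K}
  \log \frac{\exp\big(\phi(x_{t+k})^{\top} A_k z_t\big)}
            {\sum_{\tilde{x} \in N_t} \exp\big(\phi(\tilde{x})^{\top} A_k z_t\big)}

% wav2vec 2.0: contrastive loss over quantized candidates for a masked step t,
% with context output c_t, cosine similarity sim, and temperature \kappa
\mathcal{L}_{\text{w2v2}} = -\log \frac{\exp\big(\mathrm{sim}(c_t, q_t)/\kappa\big)}
            {\sum_{\tilde{q} \in Q_t} \exp\big(\mathrm{sim}(c_t, \tilde{q})/\kappa\big)}

% HuBERT: masked and unmasked cluster-assignment prediction, weighted by \alpha
\mathcal{L}_{\text{HuBERT}} = \alpha \sum_{t \in M} \log p_f\big(z_t \mid \tilde{X}, t\big)
  + (1-\alpha) \sum_{t \notin M} \log p_f\big(z_t \mid \tilde{X}, t\big)
```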
Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021a. HuBERT: Self-supervised speech representation learning by masked prediction of hidden units. arXiv preprint arXiv:2106.07447.

Wei-Ning Hsu, Yao-Hung Hubert Tsai, Benjamin Bolte, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021b. HuBERT: How much can a bad teacher benefit ASR pre-training? In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6533-6537. IEEE.

Patricia K. Kuhl, Barbara T. Conboy, Denise Padden, Tobey Nelson, and Jessica Pruitt. 2005. Early speech perception and later language development: Implications for the "critical period". Language Learning and Development, 1(3-4):237-264.

Patricia K. Kuhl, Erica Stevens, Akiko Hayashi, Toshisada Deguchi, Shigeru Kiritani, and Paul Iverson. 2006. Infants show a facilitation effect for native language phonetic perception between 6 and 12 months. Developmental Science, 9(2):F13-F21.

Kushal Lakhotia, Evgeny Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu-Anh Nguyen, Jade Copet, Alexei Baevski, Abdelrahman Mohamed, et al. 2021. Generative spoken language modeling from raw audio. arXiv preprint arXiv:2102.01192.

Erika S. Levy. 2009. On the assimilation-discrimination relationship in American English adults' French vowel learning. The Journal of the Acoustical Society of America, 126(5):2670-2682.

Yevgen Matusevych, Thomas Schatz, Herman Kamper, Naomi H. Feldman, and Sharon Goldwater. 2020. Evaluating computational models of infant phonetic learning across languages. arXiv preprint arXiv:2008.02888.

Bernd T. Meyer, Tim Jürgens, Thorsten Wesker, Thomas Brand, and Birger Kollmeier. 2010. Human phoneme recognition depending on speech-intrinsic variability. The Journal of the Acoustical Society of America, 128(5):3126-3141.

Juliette Millet, Ioana Chitoran, and Ewan Dunbar. 2021. Predicting non-native speech perception using the perceptual assimilation model and state-of-the-art acoustic models. In CoNLL 2021 Proceedings, 25th Conference on Computational Natural Language Learning.

Juliette Millet and Ewan Dunbar. 2020a. Perceptimatic: A human speech perception benchmark for unsupervised subword modelling. In 2020 Interspeech Conference Proceedings.

Juliette Millet and Ewan Dunbar. 2020b. The Perceptimatic English Benchmark for speech perception models. In 2020 CogSci Conference Proceedings.

Juliette Millet, Nika Jurov, and Ewan Dunbar. 2019. Comparing unsupervised speech learning directly to human performance in speech perception. In CogSci 2019 - 41st Annual Meeting of the Cognitive Science Society.

Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural discrete representation learning. arXiv preprint arXiv:1711.00937.

Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206-5210. IEEE.

Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le. 2019. SpecAugment: A simple data augmentation method for automatic speech recognition. arXiv preprint arXiv:1904.08779.

Morgane Rivière and Emmanuel Dupoux. 2021. Towards unsupervised learning of speech features in the wild. In 2021 IEEE Spoken Language Technology Workshop (SLT), pages 156-163. IEEE.

Morgane Rivière, Armand Joulin, Pierre-Emmanuel Mazaré, and Emmanuel Dupoux. 2020. Unsupervised pretraining transfers well across languages. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7414-7418. IEEE.

Odette Scharenborg, Sebastian Tiesmeyer, Mark Hasegawa-Johnson, and Najim Dehak. 2018. Visualizing phoneme category adaptation in deep neural networks. In Interspeech, pages 1482-1486.

Thomas Schatz. 2016. ABX-discriminability measures and applications. Ph.D. thesis, Université Paris 6 (UPMC).

Thomas Schatz and Naomi H. Feldman. 2018. Neural network vs. HMM speech recognition systems as models of human cross-linguistic phonetic perception. In Proceedings of the Conference on Cognitive Computational Neuroscience.

Thomas Schatz, Naomi H. Feldman, Sharon Goldwater, Xuan-Nga Cao, and Emmanuel Dupoux. 2021. Early phonetic learning without phonetic categories: Insights from large-scale simulations on realistic input. Proceedings of the National Academy of Sciences, 118(7).

Feng-Ming Tsao, Huei-Mei Liu, and Patricia K. Kuhl. 2004. Speech perception in infancy predicts language development in the second year of life: A longitudinal study. Child Development, 75(4):1067-1084.

Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, and Emmanuel Dupoux. 2021. VoxPopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 993-1003, Online. Association for Computational Linguistics.

Lotte Weerts, Stuart Rosen, Claudia Clopath, and Dan F. M. Goodman. 2021. The psychometrics of automatic speech recognition. bioRxiv.

Qiantong Xu, Alexei Baevski, Tatiana Likhomanenko, Paden Tomasello, Alexis Conneau, Ronan Collobert, Gabriel Synnaeve, and Michael Auli. 2021. Self-training and pre-training are complementary for speech recognition. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3030-3034. IEEE.

Reiko A. Yamada and Yoh'ichi Tohkura. 1990. Perception and production of syllable-initial English /r/ and /l/ by native speakers of Japanese. In First International Conference on Spoken Language Processing.

Yu Zhang, James Qin, Daniel S. Park, Wei Han, Chung-Cheng Chiu, Ruoming Pang, Quoc V. Le, and Yonghui Wu. 2020. Pushing the limits of semi-supervised learning for automatic speech recognition. arXiv preprint arXiv:2010.10504.
Leveraging Language to Learn Program Abstractions and Search Heuristics

Catherine Wong, Kevin Ellis, Joshua B Tenenbaum, Jacob Andreas
MIT; Cornell University; Center for Brains, Minds and Machines (CBMM) - MIT. Correspondence to: Catherine Wong <catwong@mit.edu>.
Inductive program synthesis, or inferring programs from examples of desired behavior, offers a general paradigm for building interpretable, robust, and generalizable machine learning systems. Effective program synthesis depends on two key ingredients: a strong library of functions from which to build programs, and an efficient search strategy for finding programs that solve a given task. We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis. When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization on three domains - string editing, image composition, and abstract reasoning about scenes - even when no natural language hints are available at test time.
Introduction
Machine learning approaches based on program synthesis - the automatic inference of symbolic programs - can offer robustness, interpretability, verifiability, and strong generalization in few-shot learning settings (Appel et al., 2017; Lake et al., 2017). Many machine learning tasks can be formulated as program synthesis problems, including data manipulation (Delaware et al., 2015; Gulwani et al., 2017), semantic parsing (Artzi & Zettlemoyer, 2013; Liang, 2016), structured visual understanding (Johnson et al., 2017b; Yi et al., 2018), image generation (Ellis et al., 2017; Ganin et al., 2018), and policy learning (Fikes & Nilsson, 1971; Cropper & Muggleton, 2015; Silver et al., 2020). This paper introduces Language for Abstraction and Program Search (LAPS), a framework for improving the efficiency and generalizability of learned program synthesis models using natural language supervision. In LAPS, language guides learning of both libraries of reusable program abstractions and heuristics for searching in the space of programs. High-quality program libraries and search methods are the main ingredients of effective program synthesis approaches (Gulwani et al., 2017). Recent approaches to program synthesis have attempted to learn search models (Gulwani et al., 2015; Polozov & Gulwani, 2015; Balog et al., 2016; Devlin et al., 2017), program libraries, or both jointly from data (Shin et al., 2019; Dumancić & Cropper; Ellis et al., 2021; 2020; Lázaro-Gredilla et al., 2019), but even the current best learning approaches can be computationally inefficient (often requiring upwards of thousands of CPU hours to bootstrap learning) and do not always discover generalizable libraries or search strategies.
LAPS builds on the intuition that natural language offers a powerful source of information for tackling both learning problems. Language simultaneously provides an efficient channel for communicating the structure of the search space (an instruction like draw a large hexagon next to a small pentagon decomposes a complex graphics task into high-level parts) and a lexicon that names important reusable concepts in a given domain (for instance, suggesting that a function to draw variable-sized polygons might be useful for future graphics tasks). In this work we show how inducing jointly compositional generative models over natural language and programs provides a strong scaffold for library and search model learning in a hierarchical program induction model. When integrated into a state-of-the-art learning algorithm, DreamCoder (Ellis et al., 2021), our approach dramatically improves performance on three different synthesis domains: string editing, structured image generation and scene understanding. Compared to the base synthesis approach, LAPS solves and learns more quickly from synthesis tasks, and produces higher-quality libraries that improve generalization to downstream tasks without natural language hints. LAPS builds on several recent developments in (non-language-based) program synthesis, so we begin with a review of related work (Sec. 2), then formalize the search and library learning problems (Sec. 3) and base synthesis algorithm (Sec. 4). We then describe how LAPS extends the base algorithm to include language in learning (Sec. 5) and conclude with empirical results (Sec. 6).
Figure 1: (A) Base learned synthesis algorithm (DreamCoder): an iteratively learned library acts as a generative prior over programs, trained from tasks with no ground-truth programs; (i) conditional neural search learned from program samples can struggle to generalize to hard training tasks, and (ii) abstractions learned from training programs may be overfit to training tasks. (B) LAPS: iteratively learned jointly compositional generative models over the program library and language, trained from language-annotated training tasks; (i) neural search learns from generated language-annotated programs to condition on language as a high-level training signal, and (ii) abstraction is structured over language to learn functions that compose like language. Example tasks carry descriptions such as "a small nine gon next to a small square", "four nested squares", and "a five sided snowflake with a short line and a small seven gon as arms"; example programs and abstractions include (for ∞ (move_pen (* unit_line 3) (/ 2π 6))), learned_fn_0 = (for ∞ (move_pen (* unit_line 3) (/ 2π x))), and gon_fn = (for ∞ (move_pen x (/ 2π y))). We give an extended formulation (B) defined jointly over the program library and natural language descriptions of synthesis tasks, that can be used to incorporate natural language into both abstraction and search heuristic learning. When incorporated into a concrete learning algorithm, DreamCoder (A), we show that LAPS allows the model to leverage language richly during training to improve the generalization of both the learned neural search model and the learned library of program abstractions.
Related Work
Our work draws on recent program synthesis approaches that learn to synthesize programs from examples using neural models to guide search (Gulwani et al., 2015;Balog et al., 2016;Parisotto et al., 2016;Devlin et al., 2017;Polosukhin & Skidanov, 2018;Abolafia et al., 2018;Nye et al., 2019;Ellis et al., 2019;Si et al., 2019;Ye et al., 2020a); and learn libraries of symbolic abstractions from a collection of related programs or tasks (Dechter et al., 2013;Zhang et al., 2017;Shin et al., 2019;Dumancić & Cropper;Ellis et al., 2018;2021). Our formulation builds on hierarchical Bayesian formulations of program learning that frame both synthesis and library learning as probabilistic inference (Liang et al., 2010;Lake et al., 2015;Ellis et al., 2021).
Natural language has also been used to scaffold latent representation learning (Frome et al., 2013;Jia & Liang, 2016;Andreas et al., 2017;Ye et al., 2020b;Goyal et al., 2020;Liang et al., 2020;Mu et al., 2019;Luketina et al., 2019), and as a high-level specification for program synthesis tasks (Ye et al., 2020a;Nye et al., 2019;Polosukhin & Skidanov, 2018;Ye et al., 2020b;Desai et al., 2016;Srivastava et al., 2017). Here we present an approach that integrates language annotations in training for learning a more generalizable library and program search model that can be used after training with no additional annotations for new tasks.
Inductive synthesis and library learning
Consider the problem of writing a graphics program to draw the large hexagon image in the left column of Fig. 1. This is an inductive program synthesis problem: a task t (like draw a large hexagon) is specified with examples of what a program should do, where each example is given as an input x (in this case, the blank image canvas) and the desired output y (the large hexagon image). A program ρ solves the task if it produces outputs that are consistent with the specification when executed -that is, if evaluating ρ under an execution model E yields ρ E (x) = y.
Program synthesis begins with a library L = {l 0 , ..l n } containing the set of primitives that can be combined to produce solution programs, such as the (pseudo-code) primitive functions in a simple graphics language:
L = move pen|unit line|for| * |π|∞|0|1|2|...
which draw lines on a canvas parameterized by their length and angle. Given a library, there is also the problem of search: effective program synthesis requires a search strategy S that can be given a task specification (such as the image of a hexagon) and automatically discover a solution program like the one shown in Fig. 1:
(for ∞ (move pen (* unit line 3) (/ 2π 6))) by searching over programs built from functions in L.
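The search problem can be made concrete with a deliberately tiny, hypothetical DSL (not the graphics language used here): the sketch below enumerates programs in order of length and accepts the first one consistent with every I/O example, i.e. the same 0/1 consistency check ρ_E(x) = y described above.

```python
# Hypothetical illustration: brute-force inductive synthesis over a tiny DSL.
# A task is a list of (input, output) examples; a program "solves" it iff it
# reproduces every output (the 0/1 likelihood P[t | rho]).
from itertools import product

PRIMITIVES = {
    "inc":    lambda x: x + 1,
    "dec":    lambda x: x - 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    for name in program:          # programs are sequences of primitive names
        x = PRIMITIVES[name](x)
    return x

def solves(program, examples):
    return all(run(program, xi) == yi for xi, yi in examples)

def synthesize(examples, max_depth=4):
    # Enumerate programs in order of length (a crude description-length prior).
    for depth in range(1, max_depth + 1):
        for program in product(PRIMITIVES, repeat=depth):
            if solves(program, examples):
                return program
    return None

print(synthesize([(2, 9), (3, 16)]))   # -> ('inc', 'square')
```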
Both of these ingredients -the library L, and the search strategy S -can be made much more efficient if the synthesis engine will be expected to solve multiple related problems. In the graphics domain, for example, synthesis of the various images depicted in Fig. 1 is much more easily accomplished using a library like
L = polygon|large line|small line...
in which the original hexagon task can be expressed as polygon(6, large line)
A good library already provides a foundation for efficient search by making solutions easier to express. Even with such a library, search can be further guided by information about the prior structure of programs (for example, the fact that polygon is typically called with a large line or small line function as a second argument) and by information about the target task itself (for example, the fact that the target image contains six line segments). Thus, one way to describe an effective search strategy S is via a prior over programs P [ρ|L] in the library and a conditional inference model for inferring P[ρ|t, L], the distribution over programs likely intended by the observed task examples t.
The foregoing discussion lays out the basic ingredients of a hierarchical Bayesian formulation of program synthesis (used in learning algorithms like (Ellis et al., 2021;Lake et al., 2015;Dechter et al., 2013); see the graphical model in Fig. 1A, left) for jointly learning a library and conditional search model from a dataset T of synthesis tasks. We denote a prior over programs as P[ρ|L, θ L ], on a library L with parameters θ L . Given the observed tasks, we define the likelihood of the latent library and parameters as:
Φ(L, θ_L) = P[L, θ_L] ∏_{t∈T} Σ_ρ P[t|ρ] P[ρ|L, θ_L]   (1)
where P[L, θ L ] is a prior over all possible libraries and parameterizations, and P[t|ρ] is the likelihood that each inductive task t is consistent with a program ρ (for our purposes, P[t|ρ] = 1 if the program produces the desired output examples and 0 otherwise.) Learning in this model means estimating the optimal library and its parameters
L*, θ_L* = argmax_{L, θ_L} Φ(L, θ_L)   (2)
along with a conditional model P[ρ|t, L * ] that can infer programs for new tasks.
This formulation also foreshadows a straightforward way in which linguistic descriptions of tasks (like those in the first column of Fig. 1) could be integrated into learning: we could simply extend the conditional model as P[ρ|t, d t , L * ] to include a task's description d t . We come back to this (and describe a more complete integration) in our approach, but first describe a concrete implementation of Eq. 2 on which we can realize the language-enriched model.
Base learning algorithm: DreamCoder
The LAPS framework we describe in this paper is a general one for extending Bayesian models of program learning like the one in Eq. 2 to incorporate information from language. For concreteness, however, our presentation and experiments build on the specific DreamCoder algorithm of Ellis et al. (2021), which we briefly review here. We choose DreamCoder because it exposes a modular implementation of the library and search learning problems in Eq. 2 and has previously demonstrated state-of-the-art performance across a variety of synthesis domains (Ellis et al., 2021;2020).
DreamCoder is initialized with a base library L 0 of starting primitives and a dataset of training tasks T . It returns a learned final library L f augmented with program abstractions and a learned neural search model Q(ρ|t, L) that predicts high probability programs conditioned on the task examples. Learning is iterative: DreamCoder alternately searches for solution programs to the training tasks (given a current library L i and search model Q i ) and updates the library and search model based on new solved tasks. We give details on each component below.
Program prior
DreamCoder defines the prior over programs as a probabilistic context free grammar (PFCG; Johnson 1998) for programs generated as productions from a library L of functions l ∈ L 1 . Formally, DreamCoder assigns a real-valued weight θ Li to each library function, which when normalized yields a production probability P[l|L, θ L ]. The prior probability of a program ρ is given by
P[ρ|L, θ_L] = ∏_{l∈ρ} P[l|L, θ_L]   (3)
the weighted product of probabilities of all of its constituent library functions. As all P[l|L, θ_L] < 1, this is equivalent to a description length prior over programs: longer programs (with more constituent elements) will have lower prior probability under Eq. 3 since P[ρ|L, θ_L] monotonically decreases as |ρ| = |{l ∈ ρ}| increases.
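To make this equivalence concrete, here is a small sketch with made-up primitive weights (not DreamCoder's typed grammar): the log prior is just a sum of log production probabilities, so adding any primitive strictly lowers it.

```python
# Sketch: the PCFG prior of Eq. 3 read as a description-length score.
import math

theta = {"for": 0.2, "move_pen": 0.25, "unit_line": 0.25, "*": 0.1,
         "/": 0.1, "2pi": 0.05, "3": 0.025, "6": 0.025}   # normalized weights

def log_prior(program_tokens):
    """log P[rho | L, theta] = sum of log production probabilities (Eq. 3)."""
    return sum(math.log(theta[tok]) for tok in program_tokens)

hexagon = ["for", "move_pen", "*", "unit_line", "3", "/", "2pi", "6"]
print(log_prior(hexagon))                                        # finite, negative
print(log_prior(hexagon) > log_prior(hexagon + ["move_pen"]))    # True: longer = lower prior
```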
Amortized conditional inference
To identify programs that solve tasks t while obtaining high probability under P[ρ|L, θ L ], DreamCoder trains a neural search heuristic Q i (ρ|t, L i ) at each iteration i to approximate the inverse conditional model. The heuristic uses a neural model trained to predict programs written in the current library L i according to the posterior:
Q_i(ρ|t, L_i) ≈ P[ρ|t, (L_i, θ_Li)] ∝ P[t|ρ] P[ρ|(L_i, θ_Li)]   (4)

conditioned on an encoding of the training examples (e.g. an embedding of the image in the task specification). This model is trained in the distant supervision setting (which begins with no supervised program data) by leveraging the forward generative model: sampling programs from the prior, executing them to produce observed tasks, and then minimizing Q(ρ|t, L) in Eq. 4 on the sampled programs, conditioned on their executions. This generative training procedure is generally applicable to any neural implementation of Q(ρ|t, L). (But see Ellis et al. (2021) and our supplementary material for additional details on the model architecture, which we reimplement in our experiments.)
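The forward-generative ("dreaming") training loop can be summarized in a few lines. The sketch below is schematic: sample_program, execute, and the q_model interface are hypothetical stand-ins for DreamCoder's typed program sampler, task executor, and neural recognition network.

```python
# Schematic "dreaming" loop: train the search heuristic Q(rho | t, L) on
# (task, program) pairs sampled from the current generative model.
import random

def sample_program(library, theta, max_len=8):
    # Stand-in for sampling from the PCFG prior P[rho | L, theta].
    k = random.randint(1, max_len)
    return random.choices(list(library), weights=[theta[l] for l in library], k=k)

def make_training_batch(library, theta, execute, batch_size=64):
    batch = []
    while len(batch) < batch_size:
        rho = sample_program(library, theta)
        task = execute(rho)            # e.g. render an image / run on inputs
        if task is not None:           # discard programs that crash or time out
            batch.append((task, rho))
    return batch

def train_heuristic(q_model, library, theta, execute, optimizer, steps=1000):
    for _ in range(steps):
        batch = make_training_batch(library, theta, execute)
        loss = q_model.negative_log_likelihood(batch)   # -log Q(rho | t); stand-in
        optimizer.zero_grad(); loss.backward(); optimizer.step()
```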
Abstraction learning as program compression
(maximizing the likelihood of programs)
The DreamCoder algorithm also iteratively updates the library (L i , θ Li ) to approximately optimize Eq. 2 (finding L * , θ * L which maximize the likelihood of the inferred latent programs). Ellis et al. (2021) leverage equivalence to a compression problem defined over programs and the library. As discussed in 4.1, the PCFG program prior is equivalent to a description length prior over programs. Ellis et al. (2021) place an additional Dirichlet prior over the library description length:
P[L] ∝ exp(−λ Σ_{ρ∈L} size(ρ))   (5)
Estimating the optimal library then becomes the problem of inferring new library abstractions which can jointly compress the latent training programs (rewritten under the new library L_{i+1}) and the description length |L_{i+1}| of the updated library (to optimize for shared abstractions across programs). This objective would still require inference over all possible ways of refactoring the latent programs under the updated library. Ellis et al. (2021) approximate this by only considering candidate abstractions and program refactorings that can be found via an efficient lambda-abstraction algorithm. As an example, this could refactor the large hexagon program into one that calls a new abstraction λx.(for ∞ (move pen (* unit line 3) (/ 2π x))) (the learned_fn_0 of Fig. 1), while also rewriting the original program using this abstraction. Notably, this fragment - which draws polygons with lines of length 3 for sides - is not the most intuitively generalizable for the graphics domain. A programmer with more domain-specific prior knowledge would probably prefer an abstraction like λxy.(for ∞ (move pen (* unit line y) (/ 2π x))), which additionally parameterizes the polygon by the length of its sides, and is semantically equivalent to the high-level polygon fn described in the problem setup in Sec. 3. However, learning abstractions by compressing the library and current solved training tasks may actually disfavor this more intuitively generalizable (but less compressive) candidate. Our second key goal in introducing language will be to leverage it as an additional source of prior knowledge to improve abstraction generalization.
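To give a flavour of the compression objective (without DreamCoder's lambda-abstraction refactoring machinery), the following toy sketch proposes an abstraction by scoring repeated subexpressions by how much they shrink the program corpus, net of the cost of storing them in the library; all data structures here are illustrative.

```python
# Toy MDL-style abstraction proposal: programs are nested tuples (s-expressions).
from collections import Counter

def subexprs(tree):
    yield tree
    if isinstance(tree, tuple):
        for child in tree[1:]:
            yield from subexprs(child)

def size(tree):
    return 1 if not isinstance(tree, tuple) else 1 + sum(size(c) for c in tree[1:])

def compression_gain(candidate, programs):
    """Corpus-size reduction from replacing `candidate` by one new symbol,
    minus the cost of storing the candidate in the library."""
    uses = sum(Counter(subexprs(p))[candidate] for p in programs)
    return uses * (size(candidate) - 1) - size(candidate)

def propose_abstraction(programs):
    candidates = {s for p in programs for s in subexprs(p) if isinstance(s, tuple)}
    return max(candidates, key=lambda c: compression_gain(c, programs))

progs = [("for", ("move", ("line", 3), 6)),
         ("seq", ("move", ("line", 3), 6), ("move", ("line", 3), 4))]
print(propose_abstraction(progs))   # -> ('move', ('line', 3), 6)
```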
Our Approach: Language for Abstraction and Program Search
Our work considers how the general learning problem - jointly learning the library L, which defines the prior over programs, and the conditional search strategy S, which inverts from tasks to programs - can be enriched in the language-annotated setting. Here, at least a subset of the training tasks are additionally annotated with a natural language description d_t (such as the natural language description large six gon for the large hexagon drawing task in Fig. 1B). Language offers a more direct source of information for discovering a library like the one in our setup,
L = polygon|large line|small line...
if we leverage the expectation that generalizable abstractions (like a candidate polygon function) should correspond systematically to named fragments in natural language (like the token gon).
Language can also be leveraged by the conditional search model: learning systematic correspondences between language and programs from descriptions like large six gon should inform search on new tasks (like the one described as a small nine gon next to a small square in Fig. 1B) on the basis of shared language (like gon).
Our approach, LAPS (Language for Abstraction and Program Search) formalizes these intuitions by extending the hierarchical Bayesian problem formulation over programs given in Sec. 3 to additionally generate natural language task descriptions (see graphical model in Fig 1B, left). In particular, we assume the existence of a jointly generative model J(ρ, d t ) over latent programs that solve tasks, and corresponding natural language descriptions. We rewrite the original prior over programs P[ρ|L, θ L ] defined on a library L to a joint prior P[ρ, d t |J, θ J ], and extend the distribution in Eq. 1 over the latent joint model J with parameters θ J , written as
Φ(J, θ_J) = P[J, θ_J] ∏_{t∈T} Σ_ρ P[t|ρ] P[ρ, d_t|J, θ_J]   (6)
Learning in the language-augmented setting now involves estimating the optimal joint model and its parameters
J*, θ_J* = argmax_{J, θ_J} Φ(J, θ_J)   (7)
along with a language-conditioned model P[ρ|t, d, J * ] that can infer programs for new tasks based on both specification examples and task descriptions.
In the remainder of this section we first describe a general joint model formulation that can be learned from languageannotated training tasks. We then show how the joint framework allows natural language to inform learning at both the abstraction and search level in a concrete example, using DreamCoder as the base hierarchical algorithm.
Joint prior over programs and language
Base prior We formulate our joint prior over language and programs as
P[ρ, d_t] = P[ρ|L, θ_L] P[d_t|ρ, L]   (8)
decomposed as the product of the original program prior defined on a program library P[ρ|L, θ L ], and a learned program-to-natural-language "translation" model T (d t |ρ, L) ≈ P[d t |ρ, L] which describes how natural language descriptions are generated for latent programs (in our running example, this model would describe how the large six gon description was generated conditioned on the program solution for that task.) This decomposition builds modularly on the original program prior defined only on the library L. Learning T (d t |ρ, L) formalizes the intuition that there should be a learnable relationship between language that describes tasks and latent programs that solve them.
T (d t |ρ, L) can be implemented in many ways (e.g. (Wong & Mooney, 2007;Joshi & Schabes, 1997;Bahdanau et al., 2014;Chen et al., 2018)), compatible with the vast literature on structured translation between languages, including natural languages and programming languages. Our experiments use the translation model popularly known as IBM Model 4 (Brown et al., 1993), one of a class of well-studied Bayesian machine translation models (Gal & Blunsom, 2013)
which decompose T(d_t|ρ, L) as T(d_t|ρ, L) ∝ ∏_{w∈d_t, l∈ρ} P_T[w|l]   (9)
a product of learned token-level translation probabilities P T [w|l] between individual functions l in a task's latent program ρ and words w in the task description d t . (See supplementary materials for model implementation and training details.) This token-level decomposition more directly captures the intuition in our setup: that abstractions in a programming library generally correspond systematically to individual names in natural language descriptions, and that the inverse conditional search can be guided based on a generally compositional relationship between program primitives and words. This formulation also allows these compositional relationships to be inferred from fewer observed examples than would be possible with other translation models with weaker inductive biases. However, Eq. 8 should extend to include any similar translation model and need not include this stronger decomposition.
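Our implementation fits these probabilities with an off-the-shelf IBM Model 4 aligner; purely as an illustration of the token-level decomposition in Eq. 9, the sketch below runs the simpler IBM Model 1-style EM on toy (program, description) pairs (the data and function names are made up).

```python
# Minimal IBM Model 1-style EM for token-level translation probabilities
# P_T[word | primitive], estimated from (program, description) pairs.
from collections import defaultdict

def fit_translation_table(pairs, iterations=10):
    """pairs: list of (program_tokens, description_tokens)."""
    vocab_w = {w for _, d in pairs for w in d}
    t = defaultdict(lambda: 1.0 / len(vocab_w))        # uniform init of P[w | l]
    for _ in range(iterations):
        counts, totals = defaultdict(float), defaultdict(float)
        for prog, desc in pairs:
            for w in desc:                              # E-step: soft alignments
                norm = sum(t[(w, l)] for l in prog)
                for l in prog:
                    p = t[(w, l)] / norm
                    counts[(w, l)] += p
                    totals[l] += p
        for (w, l), c in counts.items():                # M-step
            t[(w, l)] = c / totals[l]
    return t

pairs = [(["gon_fn", "6"], ["a", "six", "gon"]),
         (["gon_fn", "9"], ["a", "nine", "gon"])]
table = fit_translation_table(pairs)
# P("gon" | gon_fn) ends up well above P("six" | gon_fn): "gon" co-occurs with
# gon_fn across tasks, while "six" does not.
print(round(table[("gon", "gon_fn")], 2), round(table[("six", "gon_fn")], 2))
```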
Adding richer priors In LAPS, the joint model can also provide a controllable interface for incorporating additional prior knowledge about language into learning. Learned translation models are often fit to only maximize the likelihood of the observed language (here, with respect to inferred latent training programs). However, our formulation also supports T (d t |ρ, L) enriched to include additional priors over language (such as speaker-specific language usage, or pragmatics models that capture a speakers' other communicative goals (Grice, 1989;Goodman & Frank, 2016).)
In our experiments (Sec. 6.1) we showcase this with results from an extended model incorporating an additional mutual exclusivity prior. Mutual exclusivity models the expectation that newly encountered words should correspond to different meanings than known ones. This prior has been shown to play an important role in language learning in cognitive science (Frank et al., 2009;Markman & Wachtel, 1988), and in machine learning models (Gandhi & Lake, 2019).
In the synthesis setting, mutual exclusivity can capture the expectation that "new" words (which appear in descriptions of currently unsolved tasks) are more likely to correspond to different program components than those used in solved training tasks (and for which there would otherwise be no signal to learn a translation model in the distant setting). Our extended model incorporates this prior by updating Eq. 9 to distinguish between W known (words that appear in solved training tasks with latent programs) and W new (newly encountered words) as
T_ME(d_t|ρ, L) ∝ ∏_{w∈d_t, l∈ρ} (P_T[w|l])^{1[w ∈ W_known]} (P[l|L, θ_L]^{−1})^{1[w ∈ W_new]}   (10)
where new words are modeled as inversely related to primitives under the program prior (fit to previously solved tasks) -modeling the expectation that new words more likely relate to less-used program components than those used so far.
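Read operationally (and only as a sketch with toy numbers), Eq. 10 scores known words with the learned translation table and new words with the inverse prior weight of the primitive, so descriptions containing unfamiliar words push search toward rarely-used primitives:

```python
# Sketch of the mutual-exclusivity-weighted score T_ME(d | rho) from Eq. 10.
import math

def t_me_logscore(description, program, translation, prior, known_words):
    score = 0.0
    for w in description:
        for l in program:
            if w in known_words:
                score += math.log(translation.get((w, l), 1e-6))
            else:                      # new word: prefer less-used primitives
                score += math.log(1.0 / prior[l])
    return score

prior = {"gon_fn": 0.4, "spiral_fn": 0.05}            # fit to solved tasks
translation = {("gon", "gon_fn"): 0.5}
known = {"a", "gon", "small"}

# An unfamiliar word ("spiral") favours the rarely-used primitive:
print(t_me_logscore(["a", "spiral"], ["spiral_fn"], translation, prior, known) >
      t_me_logscore(["a", "spiral"], ["gon_fn"], translation, prior, known))   # True
```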
Integrating the joint model into amortized conditional search
The joint model allows LAPS to incorporate natural language into the learned conditional search model over programs. In place of the original neural amortized model in the base algorithm (Sec. 4.2), we train an extended, languageconditioned model Q i (ρ|t, d t , J i ) at each iteration to predict programs according to:
Q(ρ|t, d_t, J_i) ≈ P[ρ|t, d_t, J, θ_J] ∝ P[t|ρ] P[ρ, d_t|J, θ_J] ∝ P[t|ρ] P[d_t|ρ] P[ρ|L, θ_L] ≈ P[t|ρ] T(d_t|ρ, L) P[ρ|L, θ_L]   (11)
which amortizes program inference under our joint model formulation. Importantly, we can train this neural model using samples from the joint generative model, consisting of sampled programs and corresponding generated language.
As with the original learning setting, this sample-based training allows LAPS to learn a generalizable, languageconditioned neural search heuristic, capable of leveraging compositional patterns in natural language, from very few examples in the distant supervision setting. We can also now see the benefits of richer language-specific priors (such as mutual exclusivity): the neural model trained to amortize inference from the joint generative model can also approximate the mutual exclusivity bias, enabling better exploration and generalization in the presence of new words.
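Concretely, the annotated training data for this model can be generated by extending the dreaming loop sketched earlier: each sampled program is executed to produce a task and paired with a pseudo-description sampled from the current translation table. The helper below and its toy inputs are hypothetical.

```python
# Sketch: pair a sampled program with a pseudo-description drawn from the
# learned token-level translation table P_T[word | primitive].
import random

def sample_description(program, translation, words_per_primitive=1):
    desc = []
    for l in program:
        options = [(w, p) for (w, li), p in translation.items() if li == l]
        if options:
            words, probs = zip(*options)
            desc += random.choices(words, weights=probs, k=words_per_primitive)
    return desc

translation = {("gon", "gon_fn"): 0.6, ("polygon", "gon_fn"): 0.3,
               ("small", "small_line"): 0.7}
rho = ["gon_fn", "small_line"]                 # e.g. sampled from the program prior
print(sample_description(rho, translation))    # e.g. ['gon', 'small']
```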
Abstraction learning as joint model compression
The extended joint model objective in Eq. 2 and 7 also allows LAPS to incorporate natural language into abstraction learning. Extending the compression-based abstraction objective in the base algorithm -which optimized for libraries that maximally compress the latent training programs and library -requires defining a prior over the language-program translation model T in terms of the optimal program library.
We place a prior over T defined on a program library L and a natural language token vocabulary W as
P[T|L] ∝ Σ_{l∈L, w∈W} −I(P_T[w|l])   (12)

where I(P_T[w|l]) = −log(P_T[w|l]). This models the intuition that a good library contains program abstractions which correspond well to individual language tokens, and reduce entropy in the compositional translation model. Defining the prior compositionally also allows the algorithm to maintain the desirable property from (Ellis et al., 2021), in which the joint likelihood can be efficiently re-approximated with respect to individual candidate program abstractions based on their constituent subcomponents l and the corresponding translation distributions P_T[w|l] under the current translation model. As in the base synthesis algorithm, we fully re-estimate a new translation model T_{i+1}(d_t|ρ_{i+1}, L_{i+1}) at each iteration to fit the updated library and refactored programs. See the supplement for extended details.
Algorithm 1
Input: initial library L_0, annotated training tasks (T, D)
Initialize J ← uniform; training task solutions p ← {}
for i ≤ f do
    Q_i(ρ|t, d_t) ← train on (p, T, d_t) and samples ∼ J
    p ← programs from search amortized with Q_i
    L_i ← abstractions optimized over (p, J)
    p ← programs rewritten using abstractions from L_i
    J_i ← fit θ_L and T(d_t|ρ) to (p, d_t)
end for
Return Q_f, L_f
Taken together, Alg. 1 summarizes the concrete algorithm using LAPS to incorporate language into (Ellis et al., 2021).
Experiments
We demonstrate LAPS on three different domains: string editing, compositional graphics drawing, and scene reasoning, which we choose to represent a diverse range of tasks and accompanying language (Fig. 2). In all three domains, we find that compared to the base synthesizer, LAPS learns and solves heldout synthesis problems faster (Table 1, Sec. 1-2), and produces higher-quality libraries that improve generalization even when natural language hints are not available after training (Table 1, Sec. 3).
Below we summarize each domain. We then discuss results showing that LAPS is effective because of how the hierarchical model incorporates language during learning: we find that (1) LAPS searches more effectively during training, enabling it to solve and learn from more diverse training tasks than the baseline model; (2) LAPS abstracts more effectively during training, adding in more generalizable library routines as it learns; and (3) LAPS can use language during testing if it is available, as an important additional source of high-level information during synthesis.
Domains
All three domains consist of a dataset of inductive synthesis tasks t specified as input/output examples; procedurally generated synthetic language annotations; and human language annotations sourced from Mechanical Turk. We use synthetic language as our primary evaluation benchmark: we are interested in a controlled probe of learning when words are systematically reused and composed, but refer to more abstract concepts than in the initial base programming language. However, we also use human language to evaluate the practicality of our approach in real-world settings.
Figure 2: A. String Editing (shown with sample I/O examples of n=30 and random human description of n=3). D. Example initial graphics primitives, shown with learned high-probability p(word | primitive) entries (e.g. 0.27 | gon, 0.22 | small, 0.09 | snowflake, 0.09 | arms) for learned functions such as f0 = (λ (x y z) (for x (λ (u v) (move z y v)))) and f5 = (λ (x y) (f4 x x y)). Example graphics tasks include "a small semicircle" (f19 (f9 0 x)), "a small five gon" (f5 5 x), "a small nine gon" (f5 9 x), "a medium seven gon" (f5 2 (f20 7 x)), "eight sided snowflake with a small seven gon as arms" (f24 7 8 x), and "five sided snowflake with a short line and a medium five gon as arms".
Compositional graphics: inverse graphics problems (n=200 train; n=111 test) where each task is specified by an image and solved by synthesizing a program in LOGO Turtle graphics (Abelson & DiSessa, 1986). This is inspired by the graphics domain in (Ellis et al., 2021) but re-designed to be more challenging (ground-truth programs are much longer on average in the base programming language) and explicitly compositional. Synthetic language annotations are generated with high-level templates over the objects and relations in each task; human annotations are sourced as image descriptions from MTurk. We initialize synthesizers with the graphics primitives in (Ellis et al., 2021 (2018)). We include these to demonstrate a key feature of our approach: the ability to learn generalizable libraries from a basic but expressive set of primitives, rather than restricting the program space pre-emptively with a hand-designed language. We use synthetic language annotations from the original templates in (Johnson et al., 2017a) (and templates written in the same style for the extended tasks); human annotations are sourced from annotators shown the same tasks. We initialize synthesizers with functional programming primitives similar to the stringediting domain, with domain-specific query functions and constants (get color(x); get shape(x); blue; cube). The neural model encodes the task examples as flattened arrays of object attributes using a bidirectional GRU.
Results
On all three domains, we compare our model against the baseline synthesizer (Table 1, DreamCoder, no language) and a multimodal baseline. We find that:
(1) LAPS searches more effectively during training, enabling it to solve and learn from more training tasks than the baseline synthesizer. Under the hierarchical model formulation, search and abstraction are closely related: successfully solving tasks is the basis for abstraction learning.
Comparing the model learning trajectories (Fig. 3) on training tasks shows that the LAPS models consistently search more effectively during training: at each iteration they solve more tasks within a given time budget. Fig. 3 also highlights that LAPS models improve training robustness in the distant learning setting: as in the baseline paper (Ellis et al., 2021), we find the baseline model's learning to be highly variable without a training curriculum (compare training curves from Fig. 3 across random seed replications, and the best vs. mean performance). Since prior work has argued for a curriculum, we also test a simple curriculum by ordering tasks according to their natural language token length (which can be evaluated without ground-truth programs). Table 1 shows that our model is still more effective, and that non-curriculum performance is in fact comparable to curriculum performance.
Figure 3: % heldout tasks solved on the graphics domain over random training task orderings. (Mean results in Table 1 show average test-time performance from the trained model replications.)

(2) LAPS abstracts more effectively during training, adding in more generalizable library routines as it learns. The variability across training replications in the baselines also highlights a challenge for abstraction learning: not all shared subroutines encountered in training generalize well to new tasks. Adding poor abstractions can actually be detrimental: they increase the combinatorial search space. We find that our approach produces higher-quality libraries after training: Table 1 (no language at test time section) shows that we consistently improve performance in a head-to-head comparison using enumerative search from the library priors alone - in some domains, enumerative search with our model's library outperforms neurally guided search from the baseline model. We also find the learned library is effective for neurally-guided synthesis when no language hints are available after training (Table 1, no language at test, example-guided synthesis), showing that LAPS incorporates language to learn a more effective library overall, which generalizes to the non-language setting. See the supplement for example learned abstractions from L_f.
(3) LAPS can use language during testing if it is available, though it doesn't need to for competitive performance. Clearly, language can provide a useful source of high-level information if it is available for new tasks. Our approach produces a neural synthesizer pre-trained to condition on language where available. Results on all three domains show that the model can use it to achieve additional performance gains (Table 1, see language at test rows). We also find that the models trained on synthetic annotations generalize effectively to natural human language at test (Table 1, synth train, human test), suggesting that even if human annotation is too costly, in many cases hand-writing natural language templates to accompany a few ground-truth programs is likely sufficient (and easier than hand designing a full DSL).
Conclusion
We presented Language for Abstraction and Program Search (LAPS). LAPS builds on hierarchical Bayesian models of program learning: we offer a general framework for introducing jointly generative models over programs and language into learned synthesis. Going forwards, an important avenue for future work will be exploring different concrete implementations of the base algorithm and translation model which relates programs to language. A promising future direction could leverage recent structured, neural joint models that can learn the compositional units of language, and incorporate pre-trained language representations (Joshi & Schabes, 1997;Wiseman et al., 2018;Kim et al., 2019).
The hierarchical Bayesian framing also draws connections to computational cognitive models which model human conceptual representations and learning (Goodman et al., 2014; Fodor, 1975; Rule, 2020) as inference over program-like representations. Future human experiments could explore LAPS as a cognitive model, combining paradigms for studying language learning with those for studying non-linguistic abstraction and search (e.g. Smith et al. 2003; Hawkins et al. 2019; Lake et al. 2015; Tian et al. 2020).

Liang, W., Zou, J., and Yu, Z. Alice: Active learning with contrastive natural language explanations. arXiv preprint arXiv:2009.10259, 2020.

Luketina, J., Nardelli, N., Farquhar, G., Foerster, J., Andreas, J., Grefenstette, E., Whiteson, S., and Rocktäschel, T. A survey of reinforcement learning informed by natural language. arXiv preprint arXiv:1906.03926, 2019.

Mao, J., Gan, C., Kohli, P., Tenenbaum, J. B., and Wu, J. The neuro-symbolic concept learner: Interpreting scenes, words, and sentences from natural supervision. arXiv preprint arXiv:1904.12584, 2019.
Supplemental: Leveraging Language to Learn Program Search Heuristics and Abstractions
This contains the supplemental appendix to the 2021 ICML paper. It is organized sequentially in reference to the main text; S{N} refers back to section N in the main text. A complete release of code for our implementation, including command line scripts to replicate the experiments in the paper and links to the datasets, can be found at: https://bit.ly/3g9361W.
S4. Base learning algorithm: DreamCoder
The LAPS framework described in the main paper (Sec. 5) is a general one for extending Bayesian models of program learning to incorporate information from natural language (see (????)). Our concrete implementation and experiments use the DreamCoder approach of (??) as the base synthesis algorithm, which implements the hierarchical Bayesian formulation of program learning. It defines a modular interface with two primary learning components: a learned conditional inference model for search (as a neural search heuristic); and a learned abstraction algorithm for updating the program prior (based on program refactoring and compression) (?). Each of these learning components has been additionally implemented in other work (such as (?????) for neurally guided synthesis, and (?????) for program abstraction learning).
This supplementary section provides theoretical and implementation details on the DreamCoder algorithm we use in our experiments (summarized in Sec. 4). We match our implementation as closely as possible to the original work for comparison with published baselines. We provide key details relevant to the language-guided extension, but strongly recommend the original works which introduce the DreamCoder algorithm (??) for further reference.
S4.1 Program prior and MDL equivalence
Hierarchical Bayesian program learning formulations require a prior over expressible programs. DreamCoder is learned iteratively: it is initialized with a base library L_0 and returns a library L_f containing program abstractions learned from solving training tasks. Therefore, DreamCoder defines its program prior with respect to the current library L_i maintained at each iteration. This is parameterized as a simple PCFG P[ρ|L, θ_L] whose productions are of the form l_i → l_j ∈ L, each with a real-valued weight θ_l, where the probability of a program ρ is given by

$$P[\rho \mid L, \theta_L] = \prod_{l \in \rho} P[l \mid L, \theta_L] \qquad \text{(Sec. 4.1)}.$$
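As a concrete illustration (a minimal sketch, not the released implementation), the probability of a program under such a library prior can be computed by multiplying the weights of the productions it uses; the library, weights, and example program below are assumptions made only for illustration.

```python
# Minimal sketch: scoring a program under a PCFG-style library prior
# P[rho | L, theta_L] = product of production weights over the program tree.
# The library, weights, and example program are illustrative assumptions.
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    primitive: str
    children: list = field(default_factory=list)

def log_prior(node, theta):
    """Sum of log production weights over every primitive in the program tree."""
    lp = math.log(theta[node.primitive])
    for child in node.children:
        lp += log_prior(child, theta)
    return lp

# Hypothetical library weights theta_L (re-estimated at each iteration in practice).
theta_L = {"for": 0.2, "move": 0.4, "unit_line": 0.2, "unit_angle": 0.2}

program = Node("for", [Node("move", [Node("unit_line"), Node("unit_angle")])])
print(log_prior(program, theta_L))  # more productions => strictly lower log-prior
```

Because every weight is below 1, adding productions can only lower the log-prior, which is exactly the minimum description-length behavior discussed next.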
Minor complexity arises in order to support typing (?): following (?), the library L_i is implemented as a set of polymorphically typed λ-calculus expressions. The only change this produces to the original prior definition is to restrict the set of possible productions under the PCFG: that is, permissible productions are of the form l_i → l_j ∈ {L | l_i → l_j is well typed}. The prior probabilities of programs are therefore calculated with respect to the set of well-typed productions.
As discussed in the main paper, this prior definition is equivalent to a minimum description-length prior over programs under (L, θ_L) when all θ_L < 1.0, as the product over production weights strictly decreases as the number of productions in an expression increases.

S4.2 Amortized conditional inference

To identify programs that solve tasks t while obtaining high probability under P[ρ|L, θ_L], DreamCoder trains a neural search heuristic Q_i(ρ|t, L_i) at each iteration i to approximate the inverse model. The training procedure in (?) (summarized in Sec. 4.2) is a key contribution of the original work for learning in the distant supervision setting: the model is trained on samples from the generative prior (providing an endless training stream of random synthesis tasks), and this procedure should generalize immediately to any neural model for predicting programs conditioned on the task specification (e.g. (?????)). The model is also supervised on any original training task examples and their program solutions discovered during learning.

The concrete model consists of a domain-specific task example encoder E(t) and a conditional model over programs. The conditional model receives the task encoding as input and outputs a distribution over programs. Following (?), this is a 2-layer fully-connected MLP (with 64 hidden units and a final tanh activation layer) that outputs a fixed-dimensional real-valued tensor encoding a distribution over programs in the library L. The tensor corresponds to weights over program primitives conditioned on their local context in the syntax tree of the program, consisting of the parent node in the syntax tree and which argument is being generated. This functions as a 'bigram transition model' over trees that encodes the likelihood of transitions from one primitive to the next. Q returns this as a (|L| + 1) × (|L| + 2) × A-dimensional tensor, where A is the maximum arity of any primitive in the library.

Figure 1. Architecture of the neural model Q_i(ρ|t, L_i). The model takes as input task examples t; these are encoded using a domain-specific encoder E(t). Task encodings feed to an MLP and activation layer and output a tensor Q, which parameterizes a distribution over program bigrams in the final DSL and defines a conditional distribution from which to enumerate programs during search.
This parameterization supports fast sampling of programs during conditional synthesis: the neural model runs once per task (to encode the task examples and produce the bigram transition model), and the resulting parameterization can then be used to sample programs during synthesis (e.g. by enumerating programs as trees expanded from the program root, with expansions ('bigrams' over parent and child primitives) ranked in order of their likelihood).
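To make the enumeration strategy concrete, the sketch below shows a minimal best-first enumeration from a bigram-parameterized distribution; the callback names and data structures are assumptions for illustration rather than the paper's implementation.

```python
# Minimal sketch of likelihood-ordered enumeration from a bigram parameterization:
# partial programs are expanded best-first, ranked by accumulated log-probability of
# (parent, argument-index, child) transitions. Callbacks and structures are assumed.
import heapq
import itertools

def enumerate_programs(root_options, expand, log_prob, budget=1000):
    """Yield complete programs in (approximately) decreasing probability order."""
    counter = itertools.count()                      # tie-breaker for the heap
    heap = [(-log_prob(p), next(counter), p) for p in root_options]
    heapq.heapify(heap)
    for _ in range(budget):
        if not heap:
            return
        neg_lp, _, partial = heapq.heappop(heap)
        children = expand(partial)                   # one-step expansions of the leftmost hole
        if not children:                             # no holes left: a complete program
            yield partial, -neg_lp
            continue
        for child in children:
            heapq.heappush(heap, (-log_prob(child), next(counter), child))
```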
Following (?), the neural model is trained to optimize the following MAP inference objective on the training tasks and the sampled tasks from the prior:
$$\mathcal{L}_{MAP} = \mathbb{E}_{t \sim (L, \theta_L)}\left[\log Q\!\left(\arg\max_{\rho} P[\rho \mid t, L, \theta_L] \;\middle|\; t\right)\right] \qquad (1)$$
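As a concrete illustration of this training regime (not the released implementation), the sketch below mixes "dreamed" tasks sampled from the generative prior with solved training tasks to supervise the search heuristic; all helper names (`sample_program`, `execute`, `model.update`) are assumptions.

```python
# Minimal sketch of the "dream" training loop for Q(rho | t) (Eq. 1), under assumed
# helpers: a library sampler with sample_program/execute, and a generic model.update
# step. None of these names come from the released LAPS/DreamCoder code.
import random

def make_training_batch(library_sampler, solved_tasks, n_dreams=16):
    """Mix samples from the generative prior with solved training tasks."""
    batch = []
    for _ in range(n_dreams):
        rho = library_sampler.sample_program()        # rho ~ P[. | L, theta_L]
        task = library_sampler.execute(rho)           # dreamed task specification t
        batch.append((task, rho))
    # also supervise on real tasks whose solutions were found during search
    batch.extend(random.sample(solved_tasks, min(len(solved_tasks), n_dreams)))
    return batch

def train_step(model, batch):
    # maximize log Q(rho* | t) for the MAP program rho* of each task (Eq. 1)
    for task, rho in batch:
        model.update(inputs=task, target_program=rho)
```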
S4.3 Abstraction learning as program compression
DreamCoder learns new abstractions to approximately optimize for Eq. 2 (main paper), which infers an optimal library and parameters with respect to the observed programs on the training tasks.
The DreamCoder abstraction algorithm is a primary contribution of the original work in (?), and is discussed extensively in (?). We therefore provide additional technical details here that are relevant to its integration with LAPS in our experiments, but strongly encourage referencing (?) for the full implementation.
As discussed in (?) and our main work, DreamCoder approaches abstraction using an equivalence between Eq. 3 and the minimum description length of the prior (as the description length of the library) and of the programs produced from the prior (under the PCFG definition of the prior). Therefore, in practice, inferring the optimal library is equivalent to inferring the library which maximally compresses the description length of the library and the description length of programs which explain the training tasks. In particular, DreamCoder optimizes the following compression objective with respect to the training tasks T and the finite beam B_t of program solutions discovered for each training task during learning:

$$\log P[L] + \arg\max_{\theta_L} \sum_{t \in T} \log \sum_{\rho \in B_t} P[t \mid \rho]\, \max_{\rho' \xrightarrow{\;*\;} \rho} P[\rho' \mid L, \theta_L] + \log P[\theta_L \mid L] - \|\theta_L\|_0 \qquad (2)$$
The key aspect of this algorithm is that it considers abstractions which compress not only the programs as they are currently written, but any semantically equivalent refactorings of these programs. Specifically, as programs are written in a λ-calculus, refactoring refers to any program which is equivalent up to β-reduction (i.e., function application/variable substitution (?)). A primary contribution of the original work in (?) is an efficient algorithm for computing these refactorings that is unchanged when we integrate language; we refer to the original text for details.
In our work, the primary important point is that refactorings are defined compositionally over the existing program primitives. Specifically, refactorings can be efficiently calculated according to semantic equivalences in the λ-calculus (namely, function application and variable substitution guarantee that the resulting refactored programs are equivalent; abstractions created by variable substitution will always be composed of subcomponents from the initial library). We take advantage of this compositionality when defining our joint abstraction algorithm over natural language. Defining an initial compositional translation model between language and the program components ensures that we can approximate compression in the joint model after the programs are refactored, without needing to induce an entirely new translation model over language and the refactored programs.
S5. Our Approach: Language for Abstraction and Program Search
This section now describes technical details for the concrete LAPS implementation in our reported experiments, which is defined over the DreamCoder implementation. We structure this section according to the parallel implementations in the base algorithm for clarity. However, except for the specifics of the joint-abstraction algorithm, the technical implementation of each component should extend directly to most other similar learned synthesis algorithms (e.g. the joint model implementation should be reusable in any synthesis algorithm that uses an explicit symbolic library of primitives.)
S5.1 Joint prior over programs and language
LAPS extends the prior P[ρ] over programs under the library to a joint prior J(ρ, d_t) over programs for a given task and their natural language descriptions d_t (Sec. 5.1). We formulate this prior as

$$J(\rho, d_t) = P[\rho \mid L, \theta_L]\, P[d_t \mid \rho, L],$$

the product of the original prior over programs P[ρ|L, θ_L] defined on the program library, and a program-to-descriptions "translation" model T(d_t|ρ, L) ≈ P[d_t|ρ, L] that describes how descriptions are generated for programs written in the library.
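For intuition, a minimal sketch of scoring a (program, description) pair under a joint prior of this form is shown below. It uses a simple best-alignment approximation to the translation term; the names, probabilities, and that approximation are illustrative assumptions rather than the paper's Model 4 implementation.

```python
# Illustrative sketch of the joint prior J(rho, d) = P[rho|L,theta] * P_T[d|rho,L],
# using token-token translation probabilities P_T[w|l] and a best-alignment
# (rather than full marginal) approximation of the translation term.
import math

def log_joint(program_tokens, description_tokens, theta_L, p_w_given_l):
    # log P[rho | L, theta_L]: product of production weights over the program
    log_prior = sum(math.log(theta_L[l]) for l in program_tokens)
    # log P_T[d | rho, L]: each word explained by its best-aligned program token
    log_translation = 0.0
    for w in description_tokens:
        best = max(p_w_given_l.get((w, l), 1e-6) for l in program_tokens)
        log_translation += math.log(best)
    return log_prior + log_translation

theta_L = {"for": 0.2, "move": 0.5, "ngon": 0.3}                     # assumed weights
p_w_given_l = {("hexagon", "ngon"): 0.7, ("small", "move"): 0.2}     # assumed P_T[w|l]
print(log_joint(["for", "move", "ngon"], ["small", "hexagon"], theta_L, p_w_given_l))
```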
The concrete implementation described in the main paper uses a translation model that additionally decomposes compositionally over language and programs; in particular, on the basis of token-token translation distributions P_T[w|l] between words w ∈ d_t and primitives l ∈ L. Many available translation and semantic parsing models (such as synchronous grammars over natural language and programs) preserve this further compositional requirement (e.g. (??)).
See Figure S3 (supplement) for example samples from the generative model on the graphics domain at earlier and later stages of training.
Our implementation uses a classical statistical machine translation model (the Model 4 version of the IBM Statistical Machine Translation models (?)) whose parameters can be tractably estimated from very few paired programs and descriptions (in the distant supervision setting used in the original work, there may be no more than a couple of hundred training tasks in the full dataset, and fewer than 10 solved tasks on which to train the translation model at any given time.) In addition to inference in small data settings, this translation model has a fully compositional generative definition (?) that allows it to be easily used to train the neural amortized inference model which conditions on language.
Despite this, the translation model (and the further inductive biases used to specifically relate program trees to sentences) makes strong compositionality assumptions about the relationship between program primitives and words as a joint generative model of programs and language. We find that these inductive biases are useful in the small data setting and produce empirically successful results; this is likely because of how the joint model is used during training, which does not require a perfect generative model of language (or of language with respect to programs) for either amortizing inference or abstraction in order to use language as a heuristic during learning.
A full definition of the statistical translation model we use can be found in (?); we re-summarize important details here. The IBM family of translation models estimates the conditional token-token probabilities P_T[w|l] on the basis of alignment variables a_{l,d}, which specify a direct correspondence between tokens in parallel texts (e.g. a word in a task description and a program primitive). These alignments are many-to-many between tokens in programs and natural language sentences: a given word can correspond to multiple primitives, and vice versa. Conditioned on a set of alignments from paired programs and descriptions, the conditional probabilities in both directions (the probability of generating a program primitive in a program based on the presence of a word in a sentence, and vice versa) are defined by marginalizing over the alignment variables. We provide one direction (P_T[w|l]), as the other is symmetrical:
$$P_T[w \mid l] \propto \sum_{a_1 \ldots a_m} P[w, a_1 \ldots a_m \mid l] \propto \sum_{a_1 \ldots a_m} \prod_{i=1}^{m} q(a_i \mid i, l, m),$$

where the a_i are alignment variables inferred over a paired corpus and q(j|i, l, m) can be interpreted as the probability of alignment variable a_i (for the token with index i in a program) taking value j (where j is an index into the corresponding sentence), conditioned on the lengths l and m of the program and natural language sentence (?).
These alignments are inferred by approximately inverting the generative model in (?) to maximize the likelihood of the observed paired sentences and programs. One implementation detail: the alignment algorithm operates over pairs of strings. For convenience we infer alignments between sentences and linearized token sequences of the program tree (which can be done with complete recoverability of the original program tree (?)). This is another inductive assumption; after preliminary experimentation we find that our implementation yields strong empirical results regardless.
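To illustrate how such token-token probabilities can be estimated from a handful of (description, linearized program) pairs, the sketch below runs a simplified IBM Model 1-style EM; the paper uses the richer Model 4 via an external toolkit, so this is an illustrative approximation only, and the toy corpus is an assumption.

```python
# Simplified IBM Model 1-style EM for estimating token-token probabilities P_T[w|l]
# from a small corpus of (sentence tokens, linearized program tokens) pairs,
# marginalizing over latent alignments. Illustrative sketch, not the paper's Model 4.
from collections import defaultdict

def estimate_translation_probs(pairs, n_iters=10):
    vocab_l = {l for _, prog in pairs for l in prog}
    t = defaultdict(lambda: 1.0 / max(len(vocab_l), 1))   # uniform init of P_T[w|l]
    for _ in range(n_iters):
        count = defaultdict(float)
        total = defaultdict(float)
        for sent, prog in pairs:
            for w in sent:
                z = sum(t[(w, l)] for l in prog)           # normalizer over alignments
                for l in prog:
                    delta = t[(w, l)] / z                   # expected alignment count
                    count[(w, l)] += delta
                    total[l] += delta
        t = defaultdict(lambda: 1e-9,
                        {(w, l): count[(w, l)] / total[l] for (w, l) in count})
    return t

pairs = [(["small", "square"], ["ngon", "4"]), (["small", "triangle"], ["ngon", "3"])]
t = estimate_translation_probs(pairs)
print(t[("triangle", "3")], t[("small", "ngon")])
```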
The IBM translation model is a noisy-channel generative model that requires an additional language model p(d) to generate language (??). We use an efficient parallelized implementation for inferring the translation model parameters from (?), which also includes a basic language model inference algorithm; the language model is inferred over the full corpus of training task sentences (as a trigram model, which we again find simple but effective for our very small data setting). Specific model hyperparameters for all experiments are available in the released code repo (in the experiment runtime commands).
Mutual exclusivity: Section 5.1 of the main paper also describes how the joint model can be modified to include language-specific priors, such as a simple implementation of the well-known mutual exclusivity prior documented in the cognitive language-learning literature (??) and given a Bayesian formulation in (?). We provide an implementation to demonstrate that the joint model can be easily extended: specifically, a simple mutual exclusivity assumption can be added into the joint model by updating the compositional translation model to include additional distributions t_ME(d_new | l), where d_new are words that only appear in unsolved training tasks and

$$t_{ME}(d_{new} \mid l) \propto \alpha\, P[l \mid L, \theta_L]^{-1};$$

that is, new words are assumed to correspond to primitives in inverse proportion to their current usage under the learned program prior. As we show in the next section, incorporating this prior at the level of the joint model can be used to approximate mutual exclusivity assumptions in the learned search heuristic, encouraging exploration in the presence of new words.
Practically, we calculate the mutual exclusivity prior in our concrete implementation by leveraging the alignments upon which our token-token translation probabilities are defined. Specifically, we add pseudo-alignments between each d_new and each l in proportion to α P[l|L, θ_L]^{-1}; when the token-token translation probabilities marginalize over the latent alignments and these pseudo-alignments, the resulting translation probabilities encode the mutual exclusivity prior.
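A minimal sketch of this pseudo-alignment construction is shown below; the value of α, the per-word normalization, and the count representation are illustrative choices rather than the exact released implementation.

```python
# Sketch of the mutual-exclusivity heuristic described above: words that only occur
# in unsolved tasks receive pseudo-alignment weight to each primitive in inverse
# proportion to that primitive's probability under the current program prior.
def mutual_exclusivity_pseudo_counts(new_words, theta_L, alpha=0.1):
    pseudo = {}
    for w in new_words:
        weights = {l: alpha / theta_L[l] for l in theta_L}   # proportional to alpha * P[l]^-1
        z = sum(weights.values())
        pseudo[w] = {l: v / z for l, v in weights.items()}   # normalized pseudo-weights
    return pseudo

theta_L = {"move": 0.6, "ngon": 0.3, "spiral_step": 0.1}      # assumed prior weights
print(mutual_exclusivity_pseudo_counts({"staircase"}, theta_L))
# the rarely-used primitive ("spiral_step") receives the most mass for the unseen word
```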
S5.2 Integrating the joint model into amortized conditional search
The amortized conditional inference model Q(ρ|t) (Sec. 4.2) extends straightforwardly in LAPS to condition on language Q(ρ|d, t) (Sec. 5.2). Importantly, the training procedure in Sec. 4.2 (training the neural model on samples from the prior) also extends to the language-enriched condition (training the neural model on samples from the joint prior, which include generated language annotations.)
Figure 2. Architecture of the language-conditioned neural model Q(ρ|d, t): the model takes as input task examples t, encoded using a domain-specific encoder E(t), and task descriptions d, encoded using a language encoder E_D(d) (implemented as a GRU); the concatenated encodings feed an MLP and activation layer and output a tensor Q, which parameterizes a distribution over program bigrams in the final DSL and defines a conditional distribution from which to enumerate programs during search.

In our experiments we implement the concrete neural model Q(ρ|d, t) by extending modularly on the original model in (?) (and in the supplemental S4.2) for direct comparison (Figure 2). Our full architecture therefore has three modular components that additionally condition on language:
1. A natural language task description encoder E_D(d). This receives the task description d as input. We implement this as an RNN model using a bidirectional GRU (?) with 64 hidden units; we embed natural language symbols as 64-dimensional vectors, and randomly initialize and backpropagate through the embedding during training. We tokenize the sentences in d on whitespace and concatenate each sentence, delimited by special start- and end-of-sentence tokens. At test time, we replace any OOV tokens with a special UNK token.
2. A domain-specific task encoder E(t), following S4.2.
3. A bigram transition model over program primitives, following S4.2. To condition jointly on E_D(d) and E(t) we simply concatenate these two embeddings and update the first layer of the MLP to take the 128-dimensional concatenated embeddings as input (see the sketch after this list).
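A minimal PyTorch-style sketch of this conditioning is shown below; the class name, layer sizes, and the interface of the domain-specific encoder are assumptions rather than the released architecture.

```python
# Minimal sketch of the conditioning described above: a GRU description encoder
# E_D(d) concatenated with a domain-specific task encoder E(t), feeding an MLP that
# outputs the (|L|+1) x (|L|+2) x A bigram tensor. Sizes and names are assumptions.
import torch
import torch.nn as nn

class LanguageConditionedQ(nn.Module):
    def __init__(self, vocab_size, n_primitives, max_arity, task_encoder, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lang_gru = nn.GRU(hidden, hidden // 2, bidirectional=True, batch_first=True)
        self.task_encoder = task_encoder                     # domain-specific E(t)
        out_dim = (n_primitives + 1) * (n_primitives + 2) * max_arity
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, out_dim))
        self.shape = (n_primitives + 1, n_primitives + 2, max_arity)

    def forward(self, desc_tokens, task):
        emb = self.embed(desc_tokens)                        # (B, T, hidden)
        _, h = self.lang_gru(emb)                            # h: (2, B, hidden // 2)
        lang = torch.cat([h[0], h[1]], dim=-1)               # E_D(d): (B, hidden)
        joint = torch.cat([lang, self.task_encoder(task)], dim=-1)   # (B, 2 * hidden)
        return self.mlp(joint).view(-1, *self.shape)         # bigram transition tensor
```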
S5.3 Abstraction learning as joint model compression
Finally, the abstraction learning model in (?) can also be generalized to condition on language, by extending the optimal library inference algorithm with respect to the program prior to an optimal library inference algorithm with respect to the joint model over language and programs (Eq. 6 and 7, main text.)
In our concrete implementation with respect to the DreamCoder algorithm, this means extending the description-length compression objective (originally defined over the program library and training task programs) to include the translation model definition. The main paper defines a description-length prior over the compositional translation model (Eq. 10). Optimizing this tractably requires redefining the abstraction algorithm in (?) (which refactors λ-calculus programs via lambda-abstraction; see S4.3 for a summary) to also jointly re-estimate the description length of the translation model T(d_t|ρ, L') using the refactored programs under the new candidate library L'.
We implement an efficient approximation that can be calculated with respect to the classical statistical translation model described in S5.1 (?). In particular, we leverage the alignment-based definition (which uses latent correspondences inferred between program tokens and sentence tokens in paired programs and descriptions) to approximate −H(P_T[w|l]) = −log(P_T[w|l]), the entropy of the token-token translation probabilities.
Specifically, as the IBM model defines the conditional token-token probabilities

$$P_T[w \mid l] \propto \sum_{a_1 \ldots a_m} P[w, a_1 \ldots a_m \mid l]$$
marginalized over alignments, where (slightly abusing notation) for any given paired program and sentence description we will have estimated a set of alignments a_{w_j, l_k...l_n} between the j-th token in the description and one or more tokens l_k ... l_n in the paired program. We therefore define the description length of each token-token translation as the sum of the description lengths of the alignments which express it under a library L:

$$\big|P_T[d, a_1 \ldots a_m \mid l, L]\big| \propto \sum_{a_1 \ldots a_m} |a_i|_L,$$

and the description lengths under a refactored library L' containing new abstractions compress according to

$$|a_{w_j, l_k \ldots l_n}|_{L'} < |a_{w_j, l_k \ldots l_n}|_{L} \iff l' \text{ contains only primitives from } l_k \ldots l_n \text{ as subcomponents},$$

where we say that a primitive l ∈ L is a subcomponent of a refactored abstraction l' ∈ L' if the abstraction can be β-reduced such that l appears in it. That is, a refactored alignment a : w_j → {l_k ... l_n} is compressed only when a new abstraction l' encapsulates a strict subset of the constituent program primitives already aligned to the word in the original alignment. This allows us to re-approximate the description length of the new translation model with respect to a semantically-equivalent program refactoring without inducing P_T[w|l] from scratch (which would require retraining the full translation model over the sentences and refactored programs).
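For intuition, the sketch below implements a greedy version of this compression check over a single alignment; the description-length accounting is a simplification of the objective above, and all names and data structures are assumptions.

```python
# Illustrative sketch of the compression check described above: a refactored
# alignment w -> {l_k ... l_n} is counted as compressed under a candidate
# abstraction only if the abstraction's subcomponents are a subset of the
# primitives already aligned to that word.
def alignment_description_length(aligned_primitives, abstractions):
    """Greedy DL estimate: each abstraction covering aligned primitives counts as one symbol."""
    remaining = set(aligned_primitives)
    length = 0
    for name, subcomponents in abstractions.items():
        if subcomponents <= remaining:             # abstraction built only from aligned primitives
            remaining -= subcomponents
            length += 1                             # one symbol for the abstraction
    return length + len(remaining)                  # plus one symbol per uncovered primitive

old = alignment_description_length({"for", "move", "rotate"}, {})
new = alignment_description_length({"for", "move", "rotate"},
                                    {"f_polygon": {"for", "move", "rotate"}})
print(old, new)   # the candidate abstraction compresses this alignment (3 -> 1)
```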
S6. Experiments
This section describes additional details on each of the domains (string editing, compositional graphics, and scene understanding) in Section 6 of the main paper (see Figure 2, main text, for examples from all three domains, shown along with the synthetic and human language annotations).
We also provide additional details on the model and baseline hyperparameters available for each domain. All datasets generated for these experiments (including human language annotations) are released and links to static repositories are provided in the code release. We also release a complete set of commands to exactly replicate all model experiments.
All experiments were conducted on a high-powered computing cluster using a fixed training budget of wall-clock search time per task for all models and baselines in a given domain (determined via hyperparameter search using the baseline model per domain, and reported on a per-domain basis below). The experiments on the string editing and graphics domains used models trained using 48 CPUs for search (using the original parallel enumerative search implemented in the released code for the DreamCoder model in (?)); the experiments on the scene reasoning tasks used 24 CPUs (as preliminary experiments revealed that these experiments required shorter search time for our main model, and we wished to reduce the carbon footprint of the remaining experiments after our first two domains). For all experiments we train the neural models for 1×10^4 gradient steps. For experiments with language-guided compression, we use an upper bound of 5 new abstractions introduced per iteration. For mutual exclusivity experiments, we set α_ME = 0.1. For all experiments, during program-only compression (see (?) for a discussion of program-only compression hyperparameters) we use the hyperparameters from (?) for parsimony with earlier work: a structure penalty of 1.5 and pseudocounts = 30.
S6.1 Domains
As discussed in the main paper, each domain consists of a dataset of tasks; a set of procedurally generated synthetic language annotations; and a set of human language annotations provided by Mechanical Turk workers. We also describe the base primitives L_0 with which all models (including baselines and ablations) were initialized for each domain.
S6.1.1 STRING EDITING
Tasks: structured string transformation problems taken from a publicly released dataset in (?) (n=1000 train; n=500 test). Tasks consist of input dictionary strings transformed using randomly sampled regular expression transducers (n=30 examples per task). Transducers were sampled according to abstract templates defined in (?) and required identifying matched sequences of characters and adding letters before them, removing sequences, replacing them with new sequences, or doubling the sequence each time it appeared (see Figure 2A, main text).
Language data: The human language dataset for this domain was previously collected by (?). We defined a synthetic grammar of high-level templates over the ground truth regular expression transducers (corresponding to the original templates used to generate the tasks). The synthetic templates were defined based on language from the original human annotations, and in most cases closely match the true human-provided annotations (which were generally quite structured), though with significantly less variation (the original language contained multiple human descriptions per task; we generate a single synthetic description for each one).
The synthetic dataset has a vocabulary size of n=44 for both train and test. We use the human annotations in the original dataset when evaluating on human data, which have a vocabulary of n=727 (train) and n=622 (test). We generate a synthetic dataset on this domain partly because of inaccuracies noted in (?). The released code contains the complete generation procedure for these synthetic annotations. See Figure 2A for representative tasks with examples, synthetic language, and human descriptions.
Initial program primitives:
We initialize all models with a set L_0 of LISP-like primitives that operate over substring sequences to both construct regular expression match sequences and manipulate strings, augmented with three text manipulation-specific primitives intended for executing constructed regular expression sequences; t is a polymorphic type variable using standard Hindley-Milner polymorphism typing (?). The execution engine does include a regex-matching model; however, the synthesis model is naive to this execution engine and simply searches for manipulations over the input strings and the regexes as data arrays.

L_0 contains 14 substring manipulation primitives, given below with type information. We also give a semantic gloss for primitives that are not standard LISP primitives.
• if (bool → t → t → t)
• cons (t → list(t) → list(t))
• car (list(t) → t)
• cdr (list(t) → list(t))
• map ((t0 → t1) → list(t0) → list(t1))
• tail (list(t) → t)
• append (t → list(t) → list(t))
Appends element to end of list.
• revcdr (list(t) → list(t)) Takes all except the last element of the list.
• match (substr → substr → bool) Returns true if the first argument, when executed as a regular expression, matches the second argument.
• regexsplit (substr → fullstr → list(substr)) Attempts to execute the first argument as a regular expression, and splits the second argument into a list of substrings, using the regular expression match as a delimiter (and includes the matched sequences in the returned list.)
• flatten (list(substr) → fullstr) Flattens a list of substrings back into a string.
• rconcat (substr → substr → substr) Concatenates two substrings.
• rnot (substr → substr) Takes a substring argument s and returns the substring literal [^s]
• ror (substr → substr → substr) Takes substring literals a and b and returns the substring literal ((a)|(b))
We also include 26 character constants of type substr and constants dot (regular expression wildcard character) and empty (empty string).
Domain hyperparameters
We largely follow prior work (?) to set algorithm training parameters; the earlier (?) uses a 720s enumerative search budget for solving both text editing and general list manipulation tasks. We use the same 720s enumerative budget here.
The encoder E(t) follows the domain-specific encoder used for text and list editing problems in (?), a 2-layer GRU with 64 hidden units. The model is trained for a fixed gradient step budget (10,000 gradient steps) and we sample equally at random between supervision on the solved training tasks (and their solution programs in the current DSL) and samples from the joint generative model. As with (?), when generating tasks from the generative model, we use randomly sampled inputs (on which we execute generated programs to produce an output).
S6.1.2 COMPOSITIONAL GRAPHICS
Tasks: inverse graphics problems (n=200 train; n=111 test) where each synthesis problem is specified by an image and solved by synthesizing a program in LOGO Turtle graphics (?). The domain is inspired by the graphics domain in (?) but intentionally re-designed to be much more challenging (ground-truth programs are much longer on average in the base programming language) and explicitly compositional: the training and testing tasks contain simple shape tasks defined by compositional parameters for a set of basic shapes (a small triangle, a medium square; a small semicircle); complex shape tasks that require inferring more challenging (and longer) parameterized shapes (a greek spiral with eight turns); and compositional tasks defined by geometric rules and relations over the simple shapes (a seven sided snowflake with a short line and a small triangle as arms; a small triangle connected by a big space from a small circle) (See Figure 2C).
Simple parameterized shapes are either polygons (triangle, square, [n] gon), curves (semicircle, circle) or lines. Simple shapes are parameterized by one of three sizes (small or short; medium; and big). When generating synthetic language descriptions, pluralized objects are tokenized with separate tokens for the noun lemma and a token for the plural suffix (e.g. square s).
Complex parameterized shapes require constructing more complex images out of basic lines, and are intended to evaluate performance on tasks that pose a greater search challenge in the initial DSL, and whose structure is not directly cued by compositional relationships over easier components. Further, the complex shapes can be solved using abstractions (e.g. for repeatedly rotating a pen at right angles) that are not directly cued by shared lexical names -we evaluate the algorithm's ability to learn and use abstractions that correspond to useful sublexical structures shared across multiple lexemes. We define four template families for complex shapes: spirals, staircases, zigzags, and stars. Compositional graphics tasks invoke compositional relationships over the simple parameterized shapes. We define templates for generating 6 families of compositional tasks: nested, next to, separated by, connected by, in a row, and snowflakes.
Language data: We gather human language annotations by asking Mechanical Turk workers to write an image description for the rendered graphics images that specify each task. Each worker labeled 20 training and 10 testing images after viewing a disjoint, randomly sampled set of 15 example images paired with their synthetic language captions. (Workers were asked to write a short, clear description that a person or robot could use to recreate the picture, and told that the examples were paired with automatically generated captions as an example of the kinds of descriptions you could write for this picture.) We control for description quality by requiring workers to complete a reference task on their own descriptions: after writing their initial annotations, workers were required to correctly match each annotation to the target image (from amidst a set of 12 distractors drawn heuristically from similar images on the full task dataset, and other images they themselves had described), and only annotations correctly matched to the target image were retained (workers were given a chance to redescribe pictures they failed to match to their own captions.) We preprocess the human dataset minimally to standardize number terms (e.g. we use the same token type for both 3 and three) and to split plurals into a lemma and suffix, as in the synthetic dataset. The final dataset has a vocabulary size of n=562 for both train and test.
As with the string editing domain, we define a synthetic dataset using parameterized templates based on systematic language reused in the human annotations (see Figure 2A for a comparison between human annotations and synthetic language); as with that domain, we choose a synthetic dataset to ensure systematic re-use of high level terms for repeated compositional objects (such as the "n-gon" or "snowflake" terminology.)
We then generate graphics tasks by defining parameterized templates over ground truth programs in L_0, and a corresponding generator for synthesizing natural language descriptions based on each ground truth program. It is important to note that the templates are defined at an extremely high level and were written with respect to low-level programs in a simple graphics language (many of which were derived by generalizing compositionally over complex structures in (?), such as the 'snowflake' images).
Initial program primitives: For comparison with prior work, our initial library on this domain (and the base language used to generate the ground truth graphics programs) is an implementation of the LOGO Graphics DSL used in (?), which consists of four typed, imperative primitives modeled within the λ-calculus with a state monad S:

• move: distance → angle → S → S
• pen-up: (S → S) → S → S
• for: int → (S → S) → S → S
• get/set: (S → S) → S → S

as well as four arithmetic operators (+, -, *, /), integer constants (1-9), unit distances and angles (1 meter and 2π radians), and special values ∞ and ε. Figure 3 (main text) shows examples of the graphics tasks, synthetic descriptions, human descriptions, and sample programs in the ground truth initial DSL.

Domain hyperparameters: We largely follow prior work (?) to set algorithm training parameters. Consistent with the graphics program experiments in (?), we train all models, including baselines and ablations, using an enumerative search budget of 1800s per task (both when using pure enumerative search from the DSL prior, and neurally-guided search conditioned on the task examples and language descriptions); the results in Table 1 compare the relative advantage of our model given this fixed search time. We train all models on 48 CPUs during parallel enumerative search, and run the algorithm for a maximum of 27 iterations (see learning curves). As we run multiple random seed replications of models in this domain, we tuned the iteration limit based on performance on the first replication, allowing models to train while performance continued to increase. To conserve computational resources, we later stopped several of our own model replications before 27 iterations, as they had reached near-ceiling performance; as we report the best held-out test score across all 27 iterations for any one model, this early stopping only gives a conservative estimate of performance for these models. We randomly reorder the training set of tasks once before the first loop, then iterate through batches of n=40 tasks at each iteration; learning curves show results from evaluating on held-out tasks every n=3 iterations.
The encoder E(t) follows the domain-specific encoder used for the original graphics domain in (?) for a more direct comparison: we use a 6-layer CNN, where each layer consists of a 64x64 2D convolutional sublayer with kernel size = 3, a RELU activation sublayer, and a max-pooling sublayer with kernel size = 2. The model is trained for a fixed gradient step budget (10,000 gradient steps) and we sample equally at random between supervision on the solved training tasks (and their solution programs in the current DSL) and samples from the joint generative model.
S6.1.3 SCENE REASONING
Tasks: inductive scene reasoning tasks (n= 212 train; n=115 test) where each synthesis problem is specified by a structured input scene, and outputs can be a number (how many red rubber things are there?), a boolean value (are there more blue things than green things?), or another scene (what if all of the red things turned blue?). This domain is modeled on CLEVR (?) but designed to support non-linguistic, inductive synthesis in the programming-by-example paradigm: each task is specified with n=7 paired input output examples. See Figure 2B, main text for example tasks showcasing the original and extended templates, synthetic language annotations, and human language annotations.
The dataset includes questions randomly generated from the following subset of the original CLEVR question templates (see (?) for additional details on the task generation process and question templates; we also release our own augmented question generation code and the full dataset):
• zero hop: questions that require counting or answering an attribute query about a subset of objects in the scene. (e.g. How many small cylinders are there?; What material is the purple thing?).
• one hop: questions similar to the zero hop tasks, but that require reasoning over an additional relational query (e.g. What number of things are right of the small gray thing?).
• single or: questions that additionally introduce a disjunction between sets of objects (e.g. How many objects are either large metal spheres or large rubber things?).
• compare integer: questions that additionally introduce a ≥ or ≤ operator between counts of sets of objects (e.g. Is the number of large rubber cubes less than the number of large green rubber things?).
• same relate: questions that additionally require reasoning about other objects with the same attribute as a specified object. (e.g. How many other things are there of the same size as the cyan thing?).
We choose these templates as a representative subset of the style of the full CLEVR dataset that requires the full language of high-level primitives in (?) to solve. We omit some longer questions in the same format (e.g. two hop) as our intention is to compare synthesis baselines, rather than to achieve SOTA performance on CLEVR: this would likely only increase the computing resources needed to compare the various methods, and we already found a significant differential between our model and the baselines on the shorter questions.
We also add new question templates generated in the style of the original CLEVR tasks, but designed to model other common AI tasks (such as generating new scenes based on existing ones) and to require new abstractions (that were not expressible in the original restricted symbolic language used to generate scenes in (?)):
• localization: questions for object localization. These return an output scene consisting of a localized set of objects based on a set of query attributes (e.g. Find the gray rubber thing.).
• remove: questions that either return an output scene with a subset of the objects removed, or that query about latent scenes where a subset of objects has been removed (e.g. What if you removed all of the gray metal things?; If you removed the green cubes, how many cubes would be left?).
• transform: questions that either return an output scene where a subset of the objects has been transformed to set new attributes, or that query about latent scenes where a subset of objects has been modified this way.
(e.g. What if all the blue metal things became rubber things?; If all of the large yellow rubber things became gray spheres, how many gray spheres would there be?).
We treat these as program synthesis tasks: the input scenes are specified as symbolic scene graphs consisting of an array of structured objects, each defined as a dictionary of its attributes, and programs are designed to manipulate these structured arrays (this data structure is the original format in which scenes themselves are generated in (?); the images displayed in Figure 3, main text, are rendered using the original image rendering pipeline). Our intention is not to build a visual reasoning architecture: rather, we are interested in learning structured manipulations of scenes. We see work in inverse graphics (such as (?)), which outputs a structured scene graph based on pixel images as the first step in a symbolic processing and reasoning pipeline, as analogous; we are interested in the structured manipulation of these scene representations.
Language data: Synthetic language annotations are generated based on the original high-level templates in (?), as well as additional templates we define for the extended questions in the same style. We gather human language annotations by asking Mechanical Turk workers to write an instruction or question describing the set of inductive examples. However, due to the difficulty of solving certain tasks in a limited time frame based on the inductive examples alone (such as the questions about disjunctions over scenes), we show Mechanical Turk workers the synthetic descriptions for this domain and ask them to write a semantically similar description that changes more than one word in the original caption, and that would be "more natural for a human to understand". This paraphrasing paradigm is similar to that used in (?), though we find that in comparison to other domains it generates less diverse language data. We remove all punctuation, tokenize on spaces, and use an additional domain heuristic to stem all plurals (e.g. cubes).
Initial program primitives:
We initialize all models with a set L_0 of LISP-like primitives. These are similar to the initial list manipulation primitives used in the string editing domain: as both domains can be treated as manipulating structured arrays, we are interested in learning differentiated, domain-specific abstractions from a very similar base language. L_0 also includes primitives for querying attributes of objects in the domain (these are typed getters that simply query the object's dictionary of attributes) and several domain-specific functions necessary for manipulating these attributes. We deliberately use a much more basic programming language than the high-level, domain-specific language hand-designed in (?); our goal is to learn the necessary abstractions.
We give a semantic gloss for primitives that are not standard LISP primitives.
• if (bool → t → t → t)
• cons (object → list(object) → list(object))
• car (list(object) → object)
• map ((t 0 → t 1 ) → list(t 0 ) → list(t 1 ))
• fold ((list(t) → list(t)) → (t → list(t) → list(t)) → list(t))
• len (list(t) → int)
• > (list(t) → bool)
• < (list(t) → bool)
• set union (list(t) → list(t) → list(t))
• set intersect (list(t) → list(t) → list(t))
• set difference (list(t) → list(t) → list(t))
• relate (object → relation → list(t)) Returns an array of objects that satisfy a spatial relation with respect to an input object.
We also include equality comparators for each of the attribute types (e.g. eq color?), getters for each attribute, and setters for each attribute. We also include integer constants 0-9 for counting and constants for the attributes (blue, red, big, small, rubber, metal) based on the original object and spatial relation constants (?). Domain hyperparameters: We run a coarse hyperparameter search based on the baseline model to set the domain hyperparameters. We train all models, including baselines and ablations, using an enumerative search budget of 1000s per task and run the models for a maximum of 5 iterations. We run multiple random seed replications reordering the training set, in the same way as in the compositional graphics domain. The results in Table 1 also compare a curriculum ordering of the training set based on the number of tokens in the synthetic language captions (split on spaces).
The encoder E(t) is a variant of the RNN-based domain-specific encoder used for text and list editing problems in (?) (as well as in the string editing domain). The model is trained for a fixed gradient step budget (10,000 gradient steps) and we sample equally at random between supervision on the solved training tasks (and their solution programs in the current DSL) and samples from the joint generative model. As with (?), when generating tasks from the generative model, we use randomly sampled inputs (on which we execute generated programs to produce an output). We encode the symbolic scene data structures with the RNN by encoding a flattened version of the scene graph. The scene graph is originally stored as a dictionary of attributes; when flattened, we indicate the dictionary structure using special tokens to denote the keys and the start and end of any array delimiters (the original scene graph is fully reconstructable from the flattened version).
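As an illustration of this flattening (the exact token scheme below is an assumption, not the released format), a scene graph can be serialized so that its structure remains recoverable by the RNN encoder:

```python
# Sketch of the scene-graph flattening described above: each object's attribute
# dictionary is serialized with explicit key and bracket tokens so that the
# original structure can be reconstructed from the token sequence.
def flatten_scene(scene):
    tokens = ["<scene>"]
    for obj in scene:
        tokens.append("<obj>")
        for key in sorted(obj):                # fixed key order keeps the encoding deterministic
            tokens += [f"<{key}>", str(obj[key])]
        tokens.append("</obj>")
    tokens.append("</scene>")
    return tokens

scene = [{"color": "gray", "material": "rubber", "shape": "cube", "size": "small"}]
print(flatten_scene(scene))
```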
S6.2 Results and Additional Qualitative Results

In this section, we discuss additional qualitative results from an in-depth exploration of the graphics domain that were omitted from the main paper for space, but that provide additional insight into the behavior of the learned model in the hardest learning domain (based on the differential between baseline and LAPS-augmented performance).
Learned abstractions and synthesized programs. Figure 4 shows sample abstractions in the final libraries L_f for the best performing models in the graphics domain as a concrete exemplar of the abstractions that are learned and how they are used, along with sample tasks solved with these abstractions. The figures are shown as dependency graphs to indicate how progressively more complex abstractions build on abstractions learned at prior iterations; we also show selected probabilities from the translation model (depicted are examples from the top-3 primitive translations for a given word; some primitives are not high probability translations for any word).

Joint generative model samples. Figure S3 (supplement) shows samples from the joint generative model on the graphics domain (programs from the library which are executed to produce the task example image, and translated to produce language annotations) at early and later stages of training, indicating that the joint model itself improves as learning progresses, which in turn allows better training of the conditional inference model and better language-guided abstraction.
Figure 1 (main text). Our model, Language for Abstraction and Program Search (LAPS), integrates natural language into base learned synthesis algorithms formulated as hierarchical Bayesian inference, jointly learning a library of program abstractions and a neural search heuristic for synthesis; LAPS leverages the compositional generativity of programs and of language, e.g. refactoring (for ∞ (move pen (* unit line 3) (/ 2π 6))) to expose a candidate abstraction like λx.(for ∞ (move pen (* unit line 3) (/ 2π x))).

Figure 2 (main text). Example tasks from all three synthesis domains shown with synthetic and sample human language annotations, and example program abstractions learned with language. Inductive synthesis domains are shown with a random subset (n=3) of the paired input/output examples; human language annotations are also randomly sampled (all domains were annotated by multiple people for a broader range of language). (D) Representative initial program primitives and library abstractions learned with LAPS for the graphics domain, shown with example tasks solved with synthesized programs containing the learned abstractions and high-probability natural language learned from the joint model.

Figure 3 (main text). Learning curves comparing baselines and LAPS models in Table 1, showing % heldout tasks solved on the graphics domain over random training task orderings. (Mean results in Table 1 show average test-time performance from the trained model replications.)
Goodman, N. D. and Frank, M. C. Pragmatic language interpretation as probabilistic inference. Trends in Cognitive Sciences, 20(11):818-829, 2016.

Goodman, N. D., Tenenbaum, J. B., and Gerstenberg, T. Concepts in a probabilistic language of thought. Technical report, Center for Brains, Minds and Machines (CBMM), 2014.

Goyal, P., Niekum, S., and Mooney, R. J. PixL2R: Guiding reinforcement learning using natural language by mapping pixels to rewards. arXiv preprint arXiv:2007.15543, 2020.

Grice, P. Studies in the Way of Words. Harvard University Press, 1989.

Gulwani, S., Hernández-Orallo, J., Kitzelmann, E., Muggleton, S. H., Schmid, U., and Zorn, B. Inductive programming meets the real world. Communications of the ACM, 58(11):90-99, 2015.

Gulwani, S., Polozov, O., Singh, R., et al. Program synthesis. Foundations and Trends in Programming Languages, 4(1-2):1-119, 2017.

Hawkins, R. X., Goodman, N. D., and Goldstone, R. L. The emergence of social norms and conventions. Trends in Cognitive Sciences, 23(2):158-169, 2019.

Jia, R. and Liang, P. Data recombination for neural semantic parsing. arXiv preprint arXiv:1606.03622, 2016.

Johnson, J., Hariharan, B., Van Der Maaten, L., Fei-Fei, L., Lawrence Zitnick, C., and Girshick, R. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2901-2910, 2017a.

Johnson, J., Hariharan, B., Van Der Maaten, L., Hoffman, J., Fei-Fei, L., Lawrence Zitnick, C., and Girshick, R. Inferring and executing programs for visual reasoning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2989-2998, 2017b.

Johnson, M. PCFG models of linguistic tree representations. Computational Linguistics, 24(4):613-632, 1998.

Joshi, A. K. and Schabes, Y. Tree-adjoining grammars. In Handbook of Formal Languages, pp. 69-123. Springer, 1997.

Kim, Y., Dyer, C., and Rush, A. M. Compound probabilistic context-free grammars for grammar induction. arXiv preprint arXiv:1906.10225, 2019.

Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., and Gershman, S. J. Building machines that learn and think like people. Behavioral and Brain Sciences, 40, 2017.

Lake, B. M., Linzen, T., and Baroni, M. Human few-shot learning of compositional instructions. arXiv preprint arXiv:1901.04587, 2019.

Lau, T. A. and Weld, D. S. Programming by demonstration: An inductive learning formulation. In Proceedings of the 4th International Conference on Intelligent User Interfaces, pp. 145-152, 1998.

Lázaro-Gredilla, M., Lin, D., Guntupalli, J. S., and George, D. Beyond imitation: Zero-shot task transfer on robots by learning concepts as cognitive programs. Science Robotics, 4(26), 2019.

Liang, P. Learning executable semantic parsers for natural language understanding. Communications of the ACM, 59(9):68-76, 2016.

Liang, P., Jordan, M. I., and Klein, D. Learning programs: A hierarchical Bayesian approach. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 639-646, 2010.
Figure 2 content (main text): example tasks from the three domains with synthetic and human language annotations. A. String Editing (input/output string transformation examples with synthetic and human descriptions); B. Scene Reasoning (original CLEVR templates and extended scene manipulation and counterfactual tasks, shown with sample I/O examples of n=7 and random human descriptions of n=2); C. Compositional Graphics (simple shapes, complex objects, and compositional objects and relations, shown with random human descriptions of n=3).
Baselines and ablations compared in Table 1 include a multimodal model (no generative translation model) that trains a neural model directly on solved training tasks (similar to neural synthesis models like Deep-Coder (Devlin et al., 2017), but augmented to condition on language), and ablated LAPS variants (Table 1, LAPS rows) to evaluate the additive contributions of the individual learning components. We compare all models using a matched search budget per task and number of training iterations overall, determined using a hyperparameter search with the baseline. The supplement contains full details (and code) to replicate all experiments, and additional qualitative results.
Table 1 .
1% held-out test-tasks solved. To compare robustness, we run random seed replications in the graphics domain for the synthetic language dataset. Best reports the best model across replications; Mean averages across replications.Language
Model
Strings (ntest = 500)
Graphics (ntest = 111)
Scenes (ntest = 115)
% Solved
% Solved (Best) % Solved (Mean) % Solved (Curric.) % Solved (Mean.)
Synth train/test
DreamCoder (no language)
33.4
49.55
42. 64
67.80
73.9
Synth train/test
Multimodal (no generative translation model)
46.00
26.12
23.20
76.50
49.5
Synth train/test
LAPS in neural search
52.20
92.79
52.93
95.6
88.1
Synth train/test
LAPS + mutual exclusivity
57.00
86.49
80.18
96.5
82.3
Synth train/test
LAPS + ME + language-program compression
54.60
98.19
81.98
95.6
95.9
Synth train/human test
LAPS + ME + language-program compression
54.60
89.20
-
97.4
-
Human train/human test
LAPS + ME + language-program compression
48.60
58.55
-
95.6
-
No language at test
No language on train/test
Original DSL; Enumerative
0.06
0.00
-
27.8
-
No language on train/test
DreamCoder (best library): Enumerative
27.2
41.44
-
53.6
-
No lang at test
LAPS (best library): Enumerative
33.2
62.16
-
93.04
-
No lang at test
LAPS (best library): example-only neural synthesis
52.4
91.0
-
95.6
-
[Figure: learning curves of % tasks solved (0-100%) over learning iterations (0-27) for DreamCoder (no language), Multimodal (no generative), LAPS in neural search, LAPS + mutual exclusivity, and LAPS + ME + lang. compression.]
Gal, Y. and Blunsom, P. A systematic Bayesian treatment of the IBM alignment models. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 969-977, 2013.
Gandhi, K. and Lake, B. M. Mutual exclusivity as a challenge for deep neural networks. arXiv preprint arXiv:1906.10197, 2019.
Ganin, Y., Kulkarni, T., Babuschkin, I., Eslami, S. A., and Vinyals, O. Synthesizing programs for images using reinforced adversarial learning. In International Conference on Machine Learning, pp. 1666-1675. PMLR, 2018.
In addition to initial and learned functions, Ellis et al. (2021) define L to also include any initial literals and a rule for generating variables, such that programs can be completely generated as productions from the PCFG. We use the same formulation.
Figure 4. Abstractions and programs learned for the graphics domain. Sample abstractions (right) learned from a minimal starting DSL (left) for solving progressively more complex graphics program synthesis tasks with language annotations, shown with translation probabilities. Our iterative algorithm learns alignment-based translation probabilities between natural language words and program primitives to guide program search and abstraction (depicted are examples from the top-3 primitive translations for a given word; some primitives are not high-probability translations for any word).
Abelson, H. and DiSessa, A. A. Turtle geometry: The computer as a medium for exploring mathematics. MIT Press, 1986.
Abolafia, D. A., Norouzi, M., Shen, J., Zhao, R., and Le, Q. V. Neural program synthesis with priority queue training. arXiv preprint arXiv:1801.03526, 2018.
Andreas, J., Klein, D., and Levine, S. Learning with latent language. arXiv preprint arXiv:1711.00482, 2017.
Appel, A. W., Beringer, L., Chlipala, A., Pierce, B. C., Shao, Z., Weirich, S., and Zdancewic, S. Position paper: the science of deep specification. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 375(2104):20160331, 2017.
Artzi, Y. and Zettlemoyer, L. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1:49-62, 2013.
Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Balog, M., Gaunt, A. L., Brockschmidt, M., Nowozin, S., and Tarlow, D. DeepCoder: Learning to write programs. arXiv preprint arXiv:1611.01989, 2016.
Brown, P. F., Della Pietra, S. A., Della Pietra, V. J., and Mercer, R. L. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311, 1993.
Chen, X., Liu, C., and Song, D. Tree-to-tree neural networks for program translation. arXiv preprint arXiv:1802.03691, 2018.
Cropper, A. and Muggleton, S. H. Learning efficient logical robot strategies involving composable objects. AAAI Press/International Joint Conferences on Artificial Intelligence, 2015.
Dechter, E., Malmaud, J., Adams, R. P., and Tenenbaum, J. B. Bootstrap learning via modular concept discovery. In Twenty-Third International Joint Conference on Artificial Intelligence, 2013.
Delaware, B., Pit-Claudel, C., Gross, J., and Chlipala, A. Fiat: Deductive synthesis of abstract data types in a proof assistant. ACM SIGPLAN Notices, 50(1):689-700, 2015.
Desai, A., Gulwani, S., Hingorani, V., Jain, N., Karkare, A., Marron, M., and Roy, S. Program synthesis using natural language. In Proceedings of the 38th International Conference on Software Engineering, pp. 345-356, 2016.
Devlin, J., Uesato, J., Bhupatiraju, S., Singh, R., Mohamed, A.-r., and Kohli, P. RobustFill: Neural program learning under noisy I/O. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pp. 990-998. JMLR.org, 2017.
Dumancić, S. and Cropper, A. Inventing abstractions by refactoring knowledge.
Ellis, K., Ritchie, D., Solar-Lezama, A., and Tenenbaum, J. B. Learning to infer graphics programs from hand-drawn images. arXiv preprint arXiv:1707.09627, 2017.
Ellis, K., Morales, L., Sablé-Meyer, M., Solar-Lezama, A., and Tenenbaum, J. Learning libraries of subroutines for neurally-guided Bayesian program induction. In Advances in Neural Information Processing Systems, pp. 7805-7815, 2018.
Ellis, K., Nye, M., Pu, Y., Sosa, F., Tenenbaum, J., and Solar-Lezama, A. Write, execute, assess: Program synthesis with a REPL. arXiv preprint arXiv:1906.04604, 2019.
Ellis, K., Wong, C., Nye, M., Sablé-Meyer, M., Cary, L., Morales, L., Hewitt, L., Solar-Lezama, A., and Tenenbaum, J. DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning. arXiv preprint, 2020.
Ellis, K., Wong, C., Nye, M., Sablé-Meyer, M., Cary, L., Morales, L., Hewitt, L., Solar-Lezama, A., and Tenenbaum, J. DreamCoder: Bootstrapping inductive program synthesis with wake-sleep library learning. PLDI 2021, 2021.
Fikes, R. E. and Nilsson, N. J. STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2(3-4):189-208, 1971.
Fodor, J. A. The language of thought, volume 5. Harvard University Press, 1975.
Frank, M. C., Goodman, N. D., and Tenenbaum, J. B. Using speakers' referential intentions to model early cross-situational word learning. Psychological Science, 20(5):578-585, 2009.
Frome, A., Corrado, G. S., Shlens, J., Bengio, S., Dean, J., Ranzato, M., and Mikolov, T. DeViSE: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems, pp. 2121-2129, 2013.
Markman, E. M. and Wachtel, G. F. Children's use of mutual exclusivity to constrain the meanings of words. Cognitive Psychology, 20(2):121-157, 1988.
Mu, J., Liang, P., and Goodman, N. Shaping visual representations with language for few-shot classification. arXiv preprint arXiv:1911.02683, 2019.
Nye, M., Hewitt, L., Tenenbaum, J., and Solar-Lezama, A. Learning to infer program sketches. arXiv preprint arXiv:1902.06349, 2019.
Parisotto, E., Mohamed, A.-r., Singh, R., Li, L., Zhou, D., and Kohli, P. Neuro-symbolic program synthesis. arXiv preprint arXiv:1611.01855, 2016.
Polosukhin, I. and Skidanov, A. Neural program search: Solving data processing tasks from description and examples. 2018.
Polozov, O. and Gulwani, S. FlashMeta: a framework for inductive program synthesis. In Proceedings of the 2015 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, pp. 107-126, 2015.
Rule, J. S. The child as hacker: building more human-like models of learning. PhD thesis, Massachusetts Institute of Technology, 2020.
Shin, E. C., Allamanis, M., Brockschmidt, M., and Polozov, A. Program synthesis and semantic parsing with learned code idioms. In Advances in Neural Information Processing Systems, pp. 10824-10834, 2019.
Si, X., Yang, Y., Dai, H., Naik, M., and Song, L. Learning a meta-solver for syntax-guided program synthesis. In International Conference on Learning Representations, 2019.
Silver, T., Allen, K. R., Lew, A. K., Kaelbling, L. P., and Tenenbaum, J. Few-shot Bayesian imitation learning with logical program policies. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 10251-10258, 2020.
Smith, K., Brighton, H., and Kirby, S. Complex systems in language evolution: the cultural emergence of compositional structure. Advances in Complex Systems, 6(04):537-558, 2003.
Srivastava, S., Labutov, I., and Mitchell, T. Joint concept learning and semantic parsing from natural language explanations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 1527-1536, 2017.
Tian, L. Y., Ellis, K., Kryven, M., and Tenenbaum, J. B. Learning abstract structure for drawing by efficient motor program induction. arXiv preprint arXiv:2008.03519, 2020.
Wiseman, S., Shieber, S. M., and Rush, A. M. Learning neural templates for text generation. arXiv preprint arXiv:1808.10122, 2018.
Wong, Y. W. and Mooney, R. Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pp. 960-967, 2007.
Ye, X., Chen, Q., Dillig, I., and Durrett, G. Benchmarking multimodal regex synthesis with complex structures. arXiv preprint arXiv:2005.00663, 2020a.
Ye, X., Chen, Q., Dillig, I., and Durrett, G. Optimal neural program synthesis from multimodal specifications. arXiv preprint arXiv:2010.01678, 2020b.
Yi, K., Wu, J., Gan, C., Torralba, A., Kohli, P., and Tenenbaum, J. Neural-symbolic VQA: Disentangling reasoning from vision and language understanding. In Advances in Neural Information Processing Systems, pp. 1031-1042, 2018.
Zhang, Y., Pasupat, P., and Liang, P. Macro grammars and holistic triggering for efficient semantic parsing. arXiv preprint arXiv:1707.07806, 2017.
| [] |
[
"UBARv2: Towards Mitigating Exposure Bias in Task-Oriented Dialogs",
"UBARv2: Towards Mitigating Exposure Bias in Task-Oriented Dialogs"
] | [
"Yunyi Yang \nSun Yat-sen University\n\n",
"Hong Ding \nSun Yat-sen University\n\n",
"Qingyi Liu \nSun Yat-sen University\n\n",
"Xiaojun Quan \nSun Yat-sen University\n\n"
] | [
"Sun Yat-sen University\n",
"Sun Yat-sen University\n",
"Sun Yat-sen University\n",
"Sun Yat-sen University\n"
] | [] | This paper studies the exposure bias problem in task-oriented dialog systems, where the model's generated content over multiple turns drives the dialog context away from the ground-truth distribution at training time, introducing error propagation and damaging the robustness of the TOD system. To bridge the gap between training and inference for multiturn task-oriented dialogs, we propose sessionlevel sampling which explicitly exposes the model to sampled generated content of dialog context during training. Additionally, we employ a dropout-based consistency regularization with the masking strategy R-Mask to further improve the robustness and performance of the model. The proposed UBARv2 achieves state-of-the-art performance on the standardized evaluation benchmark MultiWOZ and extensive experiments show the effectiveness of the proposed methods. | 10.48550/arxiv.2209.07239 | [
"https://export.arxiv.org/pdf/2209.07239v1.pdf"
] | 252,280,306 | 2209.07239 | 9b674bcfac64b964ed80d9ee8c85253ba3ecf29a |
UBARv2: Towards Mitigating Exposure Bias in Task-Oriented Dialogs
Yunyi Yang
Sun Yat-sen University
Hong Ding
Sun Yat-sen University
Qingyi Liu
Sun Yat-sen University
Xiaojun Quan
Sun Yat-sen University
UBARv2: Towards Mitigating Exposure Bias in Task-Oriented Dialogs
This paper studies the exposure bias problem in task-oriented dialog systems, where the model's generated content over multiple turns drives the dialog context away from the ground-truth distribution at training time, introducing error propagation and damaging the robustness of the TOD system. To bridge the gap between training and inference for multiturn task-oriented dialogs, we propose sessionlevel sampling which explicitly exposes the model to sampled generated content of dialog context during training. Additionally, we employ a dropout-based consistency regularization with the masking strategy R-Mask to further improve the robustness and performance of the model. The proposed UBARv2 achieves state-of-the-art performance on the standardized evaluation benchmark MultiWOZ and extensive experiments show the effectiveness of the proposed methods.
Introduction
Task-oriented dialog (TOD) systems assist users with various tasks via natural language conversations. Traditional task-oriented dialog systems follow a pipeline approach which consists of several consecutive modules. First, a dialog state tracker (DST) estimates the belief state from the user utterance. The belief state is then used to query a task-related database (DB), e.g., for the number of entities that match the user's goal. Subsequently, a dialog policy learning module is applied to determine the next system act, followed by a natural language generation (NLG) module that converts the system action to a natural language response.
Recently, task-oriented dialog systems have achieved promising results by leveraging pretrained language models (Radford et al., 2018) for end-to-end modeling in a unified way (Ham et al., 2020;Hosseini-Asl et al., 2020;Peng et al., 2020;Yang et al., 2021). These works cast task-oriented dialogs as a unified language generation task and fine-tune models with the language modeling objective. Particularly, UBAR (Yang et al., 2021) models task-oriented dialogs on a dialog session level, which is trained on the sequence of the entire dialog session composed of user utterance, belief state, database result, system act, and system response of every dialog turn. During inference, the dialog context uses the generated content rather than the ground-truth annotations. The successive works MTTOD (Lee, 2021), PPTOD (Su et al., 2021) and GALAXY (He et al., 2021) all follow such session-level modeling as the fundamental design when developing their methods. They achieve increasingly competitive performances via multi-task learning and large-scale in-domain pre-training.
Despite the effectiveness of session-level modeling, bringing in generated content at inference time inevitably introduces a gap between training and inference, since the distribution of the ground-truth annotations at training time is different from the distribution of the model predictions at inference time. If a mistake has occurred in the dialog context, there can be error propagation which causes the model generation to continue to deviate from the optimal distribution. This problem is often referred to as exposure bias for auto-regressive models. In the case of TOD systems, the exposure bias problem can take place across multiple modules over multiple dialog turns. For example, a TOD system is asking about the food style of the requested restaurant, but the user replies with the price range. Though the model might be able to update the belief state correctly, it could be confused when generating the system action and response of the next turn, given that the training data is coherent and consistent while the model has not seen such off-the-mark answers during training. What's more, being exposed to an unfamiliar situation where the dialog context contains low-quality and erroneous generated content is detrimental to the model's performance and robustness.
In an attempt to mitigate the exposure bias problem exhibited in task-oriented dialog systems, we follow the session-level modeling of UBAR and propose a learning framework, UBARv2, that explicitly exposes the model to heterogeneous data at training time. Specifically, we explore sampling strategies for constructing the session-level training sequence and perform mixed training over both the distribution of the annotations and the distribution of the model predictions. In the initial stage of training, the ground-truth sequence is learned to help the model converge quickly, and then the content generated by the model is sampled at the turn level with a certain probability for mixed training. We further employ a dropout-based consistency regularization with a masking strategy named R-Mask, which carries out the forward pass twice for the partially masked session-level sequence and optimizes the bidirectional KL divergence loss between the two distributions, helping to bridge the gap between training and inference.
We conduct experiments on the MultiWOZ dataset (Budzianowski et al., 2018) and use the standardized evaluation of Context-to-Response generation (Nekvinda and Dušek, 2021) to compare UBARv2 with UBAR and other competitive baselines. UBARv2 greatly improves its predecessor UBAR and outperforms other state-of-the-art baselines in the end-to-end modeling setting. We perform a thorough analysis to verify the effectiveness of the proposed method. In summary, our main contributions are as follows:
• To the best of our knowledge, we are the first to study the exposure bias problem in task-oriented dialogs.

• We propose two effective strategies for bridging the gap between training and inference of task-oriented dialog systems.

• Experiments show that UBARv2 achieves state-of-the-art performance on the standardized MultiWOZ benchmark.
Methodology
In this section, we introduce the session-level modeling as the building block of UBARv2 and describe the proposed session-level scheduled sampling strategy and consistency regularization method R-Mask. Figure 1 is an overview of UBARv2.
Session-Level Modeling
Session-level modeling is first introduced by Yang et al. (2021) and adopted by numerous successive methods (Lee, 2021; Su et al., 2021; He et al., 2021). Two key factors of session-level modeling contribute to the effectiveness of a task-oriented dialog system: the incorporation of intermediate information such as the belief state and system action into the dialog context, and the use of all generated content in the dialog context. As illustrated in Figure 1 (a), given a dialog session composed of multiple turns, session-level modeling operates the process of a task-oriented dialog session as follows. In the first turn $\tau = 0$, the user provides the user utterance $U_0$; based on $U_0$, the model generates a belief state $\hat{B}_0$. The belief state is used to query a database to retrieve the database search result $\hat{D}_0$, i.e., the number of matched entities that satisfy the constraints imposed by the belief state. Based on $\{U_0, \hat{B}_0, \hat{D}_0\}$, the model generates the system action $\hat{A}_0$ and system response $\hat{R}_0$, completing the interaction of the first turn. As the dialog proceeds to turn $\tau$, the model generates $\hat{B}_\tau$, $\hat{A}_\tau$ and $\hat{R}_\tau$ based on the context of user utterances and all previously generated outputs
$\{U_0, \hat{B}_0, \hat{D}_0, \hat{A}_0, \hat{R}_0, \ldots, U_{\tau-1}, \hat{B}_{\tau-1}, \hat{D}_{\tau-1}, \hat{A}_{\tau-1}, \hat{R}_{\tau-1}, U_\tau\}$, eventually concluding the entire dialog session.
The model can be trained with the language modeling objective (Bengio et al., 2003) for GPT-2-based architectures. The idea of session-level modeling also applies to Seq2Seq architectures (Su et al., 2021; Lee, 2021) and unified language models (He et al., 2021).
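To make the session-level sequence format concrete, a minimal sketch of how one training sequence could be assembled from the per-turn components is shown below. The special-token names (`<sos_u>`, `<eos_u>`, ...) and dictionary keys are illustrative assumptions rather than the exact markers used by UBAR-style implementations.

```python
# A minimal sketch of building one session-level training sequence.
def wrap(span: str, tag: str) -> str:
    return f"<sos_{tag}> {span} <eos_{tag}>"

def build_session_sequence(turns):
    """turns: list of dicts with keys 'user', 'belief', 'db', 'act', 'resp'."""
    pieces = []
    for t in turns:
        pieces += [
            wrap(t["user"], "u"),    # user utterance U_t
            wrap(t["belief"], "b"),  # belief state B_t
            wrap(t["db"], "d"),      # database result D_t
            wrap(t["act"], "a"),     # system action A_t
            wrap(t["resp"], "r"),    # delexicalized response R_t
        ]
    return " ".join(pieces)

example_turn = {
    "user": "i need a cheap hotel in the north",
    "belief": "[hotel] price cheap area north",
    "db": "[db_2]",
    "act": "[hotel] [request] stars",
    "resp": "how many stars would you like?",
}
print(build_session_sequence([example_turn]))
```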
Session-Level Sampling
To bridge the gap between training and inference, we can draw inspiration from the domain of neural machine translation, where scheduled sampling is employed such that the input to the decoder at time step t is chosen randomly between the ground-truth word and the model's prediction (Bengio et al., 2015;Zhang et al., 2019).
Instead of simply considering the word-level exposure bias of autoregressive generation by sampling context words, this work focuses on addressing the discrepancy across multiple turns in a dialog session. Therefore, we propose to sample content in a turn-wise and module-wise manner and construct session-level sequences mixed with ground-truth and generated modular spans for training. Specifically, as shown in Figure 1 (b), in every turn we can decide with a certain probability whether to sample a generated modular span, such as the belief state, database result, system action or system response. Take the belief state for example, which is the sampling target chosen for our method. As the dialog proceeds to turn $\tau$, UBARv2 takes the dialog context $\{U_0, B_0, D_0, A_0, R_0, \ldots, U_{\tau-1}, B_{\tau-1}, D_{\tau-1}, A_{\tau-1}, R_{\tau-1}, U_\tau\}$ and generates $\hat{B}_\tau$. We choose with probability $\epsilon$ to sample the generated belief state span $\hat{B}_\tau$ and with probability $(1 - \epsilon)$ to sample the ground-truth span $B_\tau$.
Performing sampling with probability $\epsilon$ every turn results in a full session-level training sequence of M turns: $\{U_0, \hat{B}_0, D_0, A_0, R_0, \ldots, U_M, \hat{B}_M, D_M, A_M, R_M\}$, where $\hat{B}_\tau$ is the generated content.
At the early stage (Stage 1) of learning, the model is trained with ground-truth sequences so that UBARv2 can effectively learn task-oriented dialogs. At the late stage (Stage 2), the model employs mixed training with sampling rate $\epsilon$ and exposes itself to the inference setting, learning to deal with inconsistent and incoherent dialogs. The objective is to minimize the negative log-likelihood of the session-level sampled sequence $\hat{x} = \{\hat{x}_0, \hat{x}_1, \ldots, \hat{x}_T\}$:
$$\hat{\mathcal{L}}_{\mathrm{NLL}} = -\sum_{t=1}^{T} \log P_\theta(\hat{x}_t \mid \hat{x}_{<t}) \qquad (1)$$
It is important to note that the model is trained on the sampled sequence, instead of always being trained on the ground-truth tokens conditioned on the last sampled word as in previous methods (Bengio et al., 2015; Zhang et al., 2019). Moreover, we fix the sampling rate $\epsilon$ at Stage 2 instead of using a scheduled or decaying rate, for simplicity, and focus more on the strategy of which component of the dialog context to sample.
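A minimal sketch of Stage 2 session-level sampling under the "sampling only the belief state" strategy is shown below. The `model.generate_belief` hook is a hypothetical stand-in for decoding with the fine-tuned LM; in the real system the database result would additionally be re-queried from the sampled belief state.

```python
import random

def build_mixed_sequence(turns, model, eps=0.01):
    mixed = []                                  # mixed session-level context so far
    for t in turns:
        prefix = " ".join(mixed + [t["user"]])  # what the model would see at inference
        # With probability eps, replace the annotated belief state with the
        # model's own prediction conditioned on the mixed context.
        if random.random() < eps:
            belief = model.generate_belief(prefix)
        else:
            belief = t["belief"]
        mixed += [t["user"], belief, t["db"], t["act"], t["resp"]]
    return " ".join(mixed)

# The resulting sequence is then tokenized and trained on with the usual
# negative log-likelihood objective of Eq. (1).
```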
R-Mask
Inspired by R-Drop (Wu et al., 2021), which attempts to make models with dropout (Srivastava et al., 2014) more consistent between training and inference, we explore dropout-based consistency training to help mitigate the exposure bias problem in task-oriented dialogs. Other than explicitly making the model learn from its sampled generated content, we hope such consistency regularization can expose the model to more non-ground-truth data, eventually reducing the gap.

There are two scenarios for applying consistency regularization to the generation task of TOD systems: Stage 1 training and Stage 2 training. At Stage 1, the early stage of training on the ground-truth sequences $X = \{x_0, x_1, \ldots, x_T\}$, the model goes through the forward pass twice and acquires two distinct distributions of the same sequence from the randomness in the model. The objective is to minimize the bidirectional KL divergence between the two distributions:
$$\mathcal{L}_{\mathrm{KL}} = \sum_{t=1}^{T} \frac{1}{2}\Big[D_{\mathrm{KL}}\big(P_{\theta_1}(x_t \mid x_{<t}) \,\big\|\, P_{\theta_2}(x_t \mid x_{<t})\big) + D_{\mathrm{KL}}\big(P_{\theta_2}(x_t \mid x_{<t}) \,\big\|\, P_{\theta_1}(x_t \mid x_{<t})\big)\Big] \qquad (2)$$
In essence, by adding a KL-divergence regularization term, R-Drop increases the robustness to dropout and forces the model output to be consistent under different dropouts.
Consistency training can introduce both model-level regularization and data-level regularization, the latter involving modification of the input data. Therefore, at Stage 2, the late training stage, we apply an additional masking strategy, R-Mask, to the sampled sequence to obtain different distributions. As shown in Figure 1 (c), we randomly replace certain elements in the sampled sequence with the special token "[MASK]", creating more variation between the two sequences used for the regularization term:
$$\hat{\mathcal{L}}_{\mathrm{KL}} = \sum_{t=1}^{T} \frac{1}{2}\Big[D_{\mathrm{KL}}\big(P_{\theta_1}(\hat{x}_t \mid \hat{x}_{<t}) \,\big\|\, P_{\theta_2}(\hat{x}_t \mid \hat{x}_{<t})\big) + D_{\mathrm{KL}}\big(P_{\theta_2}(\hat{x}_t \mid \hat{x}_{<t}) \,\big\|\, P_{\theta_1}(\hat{x}_t \mid \hat{x}_{<t})\big)\Big] \qquad (3)$$
It is important to maintain the KL divergence throughout training at both stages so that the model can be trained properly. If we applied this consistency training only midway, the KL divergence loss would be too large and would severely hinder the language modeling optimization. The training loss is a combination of the language modeling loss and the KL divergence loss, with hyper-parameter α as the regularization weight:
$$\mathcal{L}_{\mathrm{Stage1}} = \mathcal{L}_{\mathrm{NLL}} + \alpha\,\mathcal{L}_{\mathrm{KL}}, \qquad \mathcal{L}_{\mathrm{Stage2}} = \hat{\mathcal{L}}_{\mathrm{NLL}} + \alpha\,\hat{\mathcal{L}}_{\mathrm{KL}} \qquad (4)$$
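A minimal PyTorch sketch of this combined objective is given below. It assumes an autoregressive LM whose logits are already aligned with the label positions (e.g., a HuggingFace-style causal LM output), and the `mask_fn` hook for R-Mask is an illustrative assumption rather than the repository's actual interface.

```python
import torch.nn.functional as F

def stage_loss(model, input_ids, labels, mask_fn=None, alpha=0.01):
    # R-Mask (Stage 2): draw two independently masked copies of the input;
    # R-Drop (Stage 1): reuse the same input twice, relying on dropout noise.
    ids1 = mask_fn(input_ids) if mask_fn is not None else input_ids
    ids2 = mask_fn(input_ids) if mask_fn is not None else input_ids
    logits1 = model(ids1).logits      # two forward passes -> two dropout samples
    logits2 = model(ids2).logits
    vocab = logits1.size(-1)
    # Language modeling loss, averaged over the two passes.
    nll = 0.5 * (F.cross_entropy(logits1.view(-1, vocab), labels.view(-1))
                 + F.cross_entropy(logits2.view(-1, vocab), labels.view(-1)))
    # Bidirectional KL between the two per-token output distributions.
    logp1 = F.log_softmax(logits1, dim=-1)
    logp2 = F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (F.kl_div(logp1, logp2, reduction="batchmean", log_target=True)
                + F.kl_div(logp2, logp1, reduction="batchmean", log_target=True))
    return nll + alpha * kl
```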
3 Experiments
Dataset and Evaluation Metrics
MultiWOZ (Budzianowski et al., 2018) is a large-scale multi-domain task-oriented dialog dataset. We follow the automatic evaluation metrics to evaluate task completion and response quality: Inform measures whether the system provides an appropriate entity, Success measures whether the system answers all the requested attributes, and BLEU (Papineni et al., 2002) is used to measure the fluency of the generated responses (Budzianowski et al., 2018). The BLEU score is calculated with references obtained from the MultiWOZ 2.2 span annotations (Nekvinda and Dušek, 2021). A combined score, (Inform + Success) × 0.5 + BLEU, is also reported as an overall quality measure, as suggested in Mehri et al. (2019).
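For quick reference, the combined score can be computed as follows (a trivial sketch of the formula above).

```python
def combined_score(inform: float, success: float, bleu: float) -> float:
    return (inform + success) * 0.5 + bleu

print(combined_score(87.5, 77.6, 19.0))  # 101.55, reported as 101.6 for UBARv2
```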
Implementation Details
We initialize UBARv2 with DistilGPT2 (Sanh et al., 2019) and develop our method with HuggingFace's Transformers (Wolf et al., 2019). Following previous work, the dataset is preprocessed using domain-adaptive delexicalization. We reimplement UBAR (Yang et al., 2021) as UBARv1 and develop the two proposed methods, session-level sampling (SS) and R-Drop/R-Mask, on top of it. Typically, UBARv1 and UBARv1+R-Drop are trained at Stage 1 for 60 to 75 epochs. UBARv1+SS is trained on top of UBARv1 at Stage 2 for 5 epochs. UBARv2, the final model, is trained with the R-Mask strategy at Stage 2 on top of UBARv1+R-Drop. UBARv2 uses the strategy of sampling the belief state every turn for session-level sampling and masking the ground-truth belief state for R-Mask. We select the model with the best performance on the validation set and evaluate it on the test set to obtain the final results. The results in Section 4 are mainly from the validation set. The batch size is 8, the initial learning rate for AdamW is 1.5e-4, the sampling rate $\epsilon$ for SS is 0.01, the regularization weight α is 0.01, and the masking rate for R-Mask is 0.02. Code and models are included in the supplement and will be released.³
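For convenience, the hyper-parameters listed above can be summarized in a single configuration sketch; the field names are illustrative and do not correspond to the repository's actual argument names.

```python
ubarv2_config = {
    "base_model": "distilgpt2",
    "batch_size": 8,
    "optimizer": "AdamW",
    "learning_rate": 1.5e-4,      # initial learning rate
    "stage1_epochs": (60, 75),    # typical range for Stage 1 training
    "stage2_epochs": 5,           # mixed training on top of Stage 1
    "sampling_rate_eps": 0.01,    # session-level sampling rate
    "kl_weight_alpha": 0.01,      # consistency regularization weight
    "mask_rate": 0.02,            # R-Mask masking rate
}
```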
Baselines
We compare UBARv2 with strong baselines on the benchmark, including DAMD (Zhang et al., 2020), MinTL (Lin et al., 2020), AuGPT (Kulhánek et al., 2021), SOLOIST (Peng et al., 2020), UBAR (Yang et al., 2021), PPTOD (Su et al., 2021), BORT (Sun et al., 2022), MTTOD (Lee, 2021), and GALAXY (He et al., 2021). UBARv2 is evaluated and compared in the end-to-end modeling setting of MultiWOZ, where the system has to generate the belief state based on the context, query the database with that belief state, and then generate the act and response.
We also report the results of policy optimization, which requires the model to generate the system action and response based on the ground-truth belief state, in Appendix 7.1.
Overall Results
As shown in Table 1, the proposed UBARv2 achieves state-of-the-art performance in terms of inform rate, success rate, and combined score, surpassing the previous models MTTOD and GALAXY and raising the combined score by 1.4 points, which indicates that attempting to mitigate exposure bias in task-oriented dialogs can effectively improve the task completion ability of TOD systems. Note that UBARv2 does not require pre-training on supplementary data like SOLOIST, PPTOD, and GALAXY. The results in the second group show the variations of UBARv2, which serves as an ablation study. For starters, UBARv1+SS achieves higher inform and success rates and lifts the combined score by 1.2 over UBARv1, which demonstrates the effectiveness of session-level sampling. Introducing R-Drop to the training process brings a significant performance boost: UBARv1+R-Drop jumps the combined score from 93.8 to 100.3, which shows the effectiveness of the dropout-based consistency regularization. Combining R-Mask and SS at Stage 2, UBARv2 shows that the two proposed methods are complementary to each other and can further push the state-of-the-art performance.
To examine the domain transfer ability of UBARv2 generalizing to unseen domains, we perform zero-shot and few-shot experiments in Appendix 7.2.
Analysis and Discussion
In this section, we provide a detailed discussion of the sampling strategy and the context used in constructing a mixed training sequence. We investigate how the sampling rate $\epsilon$ and the regularization weight α affect model performance. We also discuss the R-Mask strategy and provide a case study to show how UBARv2 improves task completion and mitigates exposure bias.
Sampling Strategy
Sampling task-oriented dialogs and constructing mixed training sequences require a more fine-grained sampling strategy that considers the dependencies between TOD components such as the belief state and system action. Additionally, we need to consider the attributes of the dialog context on which the sampling is conditioned. First, we list five sampling strategies based on which components to sample in the current turn when constructing a sampled sequence:
• Sampling only the belief state: The annotated belief state will be replaced by the generated one in the corresponding position.
• Sampling only the system action: The annotated system action and response will be replaced.
• Sampling at most one: First determine whether to sample the belief state, and if so, use the annotated action. Otherwise, sample the action and response.
• System action follows the belief state: Sample the belief state, action and response.
• Random Sampling: The sampling of the action is independent of the one of the belief state.
Then, we divide the dialog context into the context of the previous turns and the context of the current turn, and consider whether they are (1) mixed, with some elements sampled, or (2) ground-truth, with all elements from the dataset.
As shown in Figure 2, the dashed baseline is the validation score of UBARv1 + R-Drop and the histogram shows the score of UBARv1 + R-Drop after 5 epochs of mixed training with session-level sampling. For the sampling strategy, it can be seen that "Sampling only the belief state" and "System action follows the belief state" are more effective than the others. The effect of "Sampling only the belief state" is generally better than that of "Sampling only the system action", which indicates that sampling the belief state is more meaningful than sampling the action. For the context attributes, using the mixed context is always better than using the ground-truth one, generating content that is more fluent and more relevant to the previous context, which aligns with the intuition of session-level modeling. Note that when the sampling strategy is fixed as "Sampling only the belief state", the score is the same for "Mixed cur" and "GT cur". This is because only the ground-truth user utterance is available when generating the belief state. Therefore, the sampling strategy for UBARv2 is "Sampling only the belief state" with "Mixed context".

Sampling Rate

Figure 3 shows the effect of different sampling rates $\epsilon$ in mixed training. With UBARv1 + R-Drop as the baseline, we explore $\epsilon$ ranging from 0% to 5%. When the sampling rate $\epsilon$ = 1%, the combined score reaches its highest, exceeding the baseline. An $\epsilon$ of 2.5% or 5% can also lead to improvements. The sampling rate can hurt the system's performance if not appropriate: a rate that is either too small or too large can lead to a decrease in the score compared to the baseline. Note that $\epsilon$ = 1% may seem small, but we believe that exposing the model to a small amount of such data can make a difference, helping to mitigate the exposure bias. We provide more detailed results regarding the sampling rate in Appendix 7.3.
Regularization Weight
We discuss the effect of the weight α in the KL-divergence regularization term of either R-Drop or R-Mask during the training of UBARv2. Here, we use UBARv1+SS as the baseline and add the KL-divergence regularization term to compare the scores of each metric with different coefficient weights.

As shown in Figure 4, UBARv2 achieves the best results when α is 0.01. We observe that all the different KL-divergence weights α except for 5e-5 provide performance gains. This is because the KL-divergence regularization term exists throughout the course of training, allowing the model to adaptively adjust itself. This also reflects the generalization ability of the R-Drop method, which can lead to a relatively stable boost. We provide more detailed results regarding the regularization weight in Appendix 7.4.
R-Mask Strategy

The strategies for R-Mask tie closely with the strategies for session-level sampling, as we have already identified the belief state as the sampling target. R-Mask also requires a thorough discussion of how to construct the two sequences for the KL-divergence term. Specifically, based on UBARv1 + R-Drop, we add a regularization term with R-Mask to UBARv2 at Stage 2 of mixed training. We investigate the impact of different R-Mask strategies: (1) For the mask target, it can be either the sampled generated belief state or the ground-truth belief state.
(2) For the mask position, there are two options: the two sequences for the regularization term are masked at the same positions, or the two sequences are masked at different positions with the same masking rate. We search for suitable masking rates for the different strategies. As shown in Table 2, the best strategy is to mask the ground-truth belief state at different positions for the two sequences at Stage 2, which offers more diversity to the two sequences and thus improves the model's generalization ability. We provide results regarding the masking rate in Appendix 7.5.
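A small sketch of the two mask-position options ("Same" vs. "Diff") is given below; the token ids, helper names, and [MASK] id are illustrative assumptions, not the actual implementation.

```python
import random

MASK_ID = 50257  # hypothetical id of the added [MASK] token

def mask_copy(token_ids, belief_positions, rate=0.02):
    """Return a copy with a random subset of belief-state tokens set to [MASK]."""
    out = list(token_ids)
    for i in belief_positions:
        if random.random() < rate:
            out[i] = MASK_ID
    return out

def make_masked_pair(token_ids, belief_positions, same_positions=False, rate=0.02):
    first = mask_copy(token_ids, belief_positions, rate)
    # "Same": the second copy reuses the first copy's mask pattern;
    # "Diff": a fresh pattern is drawn, giving the two sequences more diversity.
    second = list(first) if same_positions else mask_copy(token_ids, belief_positions, rate)
    return first, second
```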
Case Study
In this section, we present further discussions and empirical analyses of the effectiveness of the proposed method for mitigating the exposure bias in dialogs through case study. As UBARv2 achieves a decent improvement on the combined score over UBARv1, it understandably has more correct cases than UBARv1. By looking at the cases in which UBARv1 predicted incorrectly but UBARv2 predicted correctly, we find that, in the majority of cases, UBARv1 just incorrectly predicts information like the belief state, while UBARv2 can get it right in the first place. Therefore, we are more concerned about whether UBARv2 can make the dialog context stay more consistent and coherent, and whether it can really bridge the gap between distributions in training and inference.
As shown in Table 3, in the first turn, according to the ground truth, the user should be informed of the name and address of the hotel, but both models choose to ask for the hotel star rating to narrow down the choices. In the second turn, UBARv2 does a better job than UBARv1 at finding the missing hotel name in the context and providing it to the user in time. This case shows that UBARv1 still suffers from not being able to supplement entity names, while UBARv2 can supplement the entity name appropriately, which reflects that UBARv2 can adaptively supplement information and make amends in response to the current user utterance in order to stay consistent and coherent throughout the entire session, and does so better than UBARv1. However, it can be noted that the user also needs the address of the hotel, but UBARv2 does not supplement the address information. This indicates that UBARv2 can be further improved. We also want to stress that current automatic evaluation metrics and static human evaluation are not adequate to quantitatively measure whether the exposure bias problem has been mitigated or not. There is a call for more sophisticated and less labor-intensive evaluation. We provide two more case studies in Appendix 7.6.
Related Work
The architectures for end-to-end modeling of task-oriented systems can be coarsely divided into multi-decoder methods (Tseng et al., 2021; Jeon and Lee, 2022; Ramachandran et al., 2021) and pre-trained language models (Hosseini-Asl et al., 2020; Peng et al., 2020; Kulhánek et al., 2021; Lin et al., 2020; Yang et al., 2021; Su et al., 2021; Lee, 2021; Sun et al., 2022; He et al., 2021). In terms of how to model the dialog context, session-level modeling has become popular with recent works (Yang et al., 2021; Su et al., 2021; Lee, 2021; He et al., 2021). Pre-training on relevant dialog corpora and multi-task learning are also employed to improve task completion. SOLOIST (Peng et al., 2020) is further pre-trained on a large dialog corpus with a multi-task objective. GALAXY (He et al., 2021) is further pre-trained via semi-supervised learning which makes use of unlabeled dialog samples. PPTOD (Su et al., 2021) proposes a multi-task pre-training strategy for dialogs with prompts. MTTOD (Lee, 2021) trains a T5-based model with an auxiliary task.
The exposure bias problem is previously discussed and studied in the training process of neural machine translation (Bengio et al., 2015;Ranzato et al., 2015;Shen et al., 2015;Wiseman and Rush, 2016;Zhang et al., 2019). Contrastive learning is also used to reduce the exposure bias problem by learning in the representation space (Lee et al., 2020;Liu and Liu, 2021;Pan et al., 2021). Wu et al. (2018) and Wang and Sennrich (2020) shared some helpful insights on the relationship between exposure bias and error propagation. For TOD systems, exposure bias and error propagation exist in the multi-turn nature of dialogs. Some works have addressed the error propagation problem through data augmentation to increase the robustness of the systems Li et al., 2021;Sun et al., 2022). UBARv2 is the first work that designs methods for mitigating the exposure bias problem in task-oriented dialogs.
Conclusion
This work tries to mitigate the exposure bias problem in task-oriented dialog systems by proposing mixed training with session-level sampling and the consistency regularization strategy R-Mask. UBARv2 achieves state-of-the-art performance on the end-to-end modeling task of the MultiWOZ Evaluation, raising the combined score by over 1 point. By actively bridging the gap between training and inference, the model can stay more consistent and coherent with the generated context. We believe that the exposure bias problem exhibited in multi-turn dialogs is an interesting topic worth studying, and hope that UBARv2 can inspire future work to explore more methods to bridge the gap between training and inference for dialog systems.

End-to-end neural pipeline for goal-oriented dialogue systems using GPT-2. In ACL 2020: 58th Annual Meeting of the Association for Computational Linguistics, pages 583-592.
Wanwei He, Yinpei Dai, Yinhe Zheng, Yuchuan Wu, Zheng Cao, Dermot Liu, Peng Jiang, Min Yang, Fei Huang, Luo Si, et al. 2021. Galaxy: A generative pre-trained model for task-oriented dialog with semi-supervised learning and explicit policy injection. arXiv preprint arXiv:2111.14592.
Domain Transfer
To examine the transfer ability of UBARv2 in generalizing to unseen domains, we run zero-shot and few-shot experiments in the end-to-end modeling setting by excluding one domain out of the five domains available in the validation and test sets, and training UBARv2 on the other four domains. Table 5 shows the results.
Sampling Rate
As shown in Figure 6, with UBARv1 + R-Drop as the baseline, the model completes the tasks better when the sampling rate is appropriate. When $\epsilon$ is 0.01, it can maintain the fluency of responses.
Effect on the Regularization weight
As shown in Figure 7, among the results of all evaluation metrics corresponding to different regularization weights ranging from 0 to 0.05, the model achieves the highest score when α = 0.01. In order to find a better weight, we further explore it in a fine-grained setting from 0.005 to 0.05 in Figure 8, which shows that 0.01 is appropriate.
Masking Rate
The masking rate for "Diff GT" and "Same GT", ranging from 0 to 0.08, is plotted in Figure 5.
More Case Study
Table 6 shows a case where the user requests a recommendation for a modern European restaurant downtown. In the third turn, the user should be given an explicit restaurant entity name according to the ground truth, while UBARv1 and UBARv2 both choose to ask for the price of the restaurant to narrow down the choices. However, UBARv1 does not notice that the context misses the necessary entity name and simply provides the user with information such as an address, phone number, and price range in the fourth turn; on the contrary, UBARv2 finds the logical inconsistency in the context and provides the key entity name in the fourth turn. From this case, we can see that UBAR using generated content as context does not completely avoid the problem of the missing entity name, and UBARv1 still has the error of not being able to supplement entity names. Instead, UBARv2 can supplement the entity name appropriately, which reflects the fact that UBARv2 can also adaptively supplement and make amends in response to the current user utterance in order to stay consistent and coherent throughout the entire session, and does so better than UBARv1. It is worth mentioning that at first we believed that the success of UBARv1 using all generated content came from the inconsistency between training and testing, i.e., the context that the model sees is not the ground truth but generated by the model itself, and we were therefore concerned that removing exposure bias might cause UBARv2 to lose this helpful inconsistency, meaning that mixed learning might cause the model to no longer learn to generate key entity words. Fortunately, this case eliminates our concerns and illustrates that the method used by UBARv2 to mitigate exposure bias in the dialog still retains and even improves the ability of the model to stay consistent with the entire session. It is difficult to determine whether the exposure bias is effectively mitigated and whether the difference between the distributions in training and inference is bridged. Even work on machine translation and automatic summarization motivated by addressing exposure bias has typically judged whether exposure bias is mitigated just based on the improvement in BLEU or ROUGE score. By this criterion, the improvement of UBARv2 is sufficient to show that the motivation of the proposed method for mitigating exposure bias in dialogs is reasonable, but we still want to find a case showing that UBARv2 can effectively mitigate exposure bias in the dialog.
As shown in Table 7, UBARv1 using the original context can generate the key entity name in the current turn, but it can not respond correctly when using the generated context, which indicates that UBARv1 suffers from exposure bias in the dialog, i.e., UBARv1 has accumulated errors due to the generated context. However, again based on the generated context, UBARv2 still generates key entity names in the current turn, effectively mitigating the exposure bias.
Figure 1: An overview of UBARv2, with session-level modeling, session-level sampling and R-Mask.

Figure 3: The combined score with different sampling rates.

Figure 4: Regularization weight α from 0 to 0.05.
Table 1: Main results on MultiWOZ Evaluation, end-to-end modeling.

Model | Inform | Success | BLEU | Comb
DAMD (Zhang et al., 2020) | 57.9 | 47.6 | 16.4 | 84.8
AuGPT (Kulhánek et al., 2021) | 76.6 | 60.5 | 16.8 | 85.4
MinTL (Lin et al., 2020) | 73.7 | 65.4 | 19.4 | 89.0
SOLOIST (Peng et al., 2020) | 82.3 | 72.4 | 13.6 | 90.9
UBAR (Yang et al., 2021) | 83.7 | 70.3 | 17.6 | 94.4
PPTOD (Su et al., 2021) | 83.1 | 72.7 | 18.2 | 96.1
BORT (Sun et al., 2022) | 85.5 | 77.4 | 17.9 | 99.4
MTTOD (Lee, 2021) | 85.9 | 76.5 | 19.0 | 100.2
GALAXY (He et al., 2021) | 85.4 | 75.7 | 19.6 | 100.2
UBARv1 | 82.1 | 69.7 | 17.9 | 93.8
UBARv1+SS | 83.9 | 71.0 | 17.6 | 95.0
UBARv1+R-Drop | 86.8 | 76.8 | 18.5 | 100.3
UBARv2 | 87.5 | 77.6 | 19.0 | 101.6
Table 2: The combined scores with different R-Mask strategies. Gen and GT denote the generated and ground-truth belief state respectively; Same means masking the same positions for the two sequences, and Diff means masking different positions.

Table 3: Case study: delexicalized responses generated by UBARv1 and UBARv2 for two consecutive dialog turns in dialog session PMUL0006 from MultiWOZ 2.0.
Baolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. 2020. SOLOIST: Few-shot task-oriented dialog with a single pretrained auto-regressive model. arXiv preprint arXiv:2005.05298.
Yichi Zhang, Zhijian Ou, Huixin Wang, and Junlan Feng. 2020. A probabilistic end-to-end task-oriented dialog model with latent belief states towards semi-supervised learning. arXiv preprint arXiv:2009.08115.
Yichi Zhang, Zhijian Ou, and Zhou Yu. 2020. Task-oriented dialog systems that consider multiple appropriate responses under the same context. AAAI 2020: The Thirty-Fourth AAAI Conference on Artificial Intelligence, 34(5):9604-9611.
Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue. arXiv preprint arXiv:2005.00796.
Hyunmin Jeon and Gary Geunbae Lee. 2022. DORA: Towards policy optimization for task-oriented dialogue system with efficient context. Computer Speech & Language, 72:101310.
Jonáš Kulhánek, Vojtěch Hudeček, Tomáš Nekvinda, and Ondřej Dušek. 2021. AuGPT: Auxiliary tasks and data augmentation for end-to-end dialogue with pre-trained language models. arXiv preprint arXiv:2102.05126.
Hung Le, Doyen Sahoo, Chenghao Liu, Nancy Chen, and Steven C. H. Hoi. 2020. UniConv: A unified conversational neural architecture for multi-domain task-oriented dialogues. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1860-1877.
Seanie Lee, Dong Bok Lee, and Sung Ju Hwang. 2020. Contrastive learning with adversarial perturbations for conditional text generation. arXiv preprint arXiv:2012.07280.
Yohan Lee. 2021. Improving end-to-end task-oriented dialog system with a simple auxiliary task. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1296-1303.
Yunhao Li, Yunyi Yang, Xiaojun Quan, and Jianxing Yu. 2021. Retrieve & memorize: Dialog policy learning with multi-action memory. arXiv preprint arXiv:2106.02317.
Zhaojiang Lin, Andrea Madotto, Genta Indra Winata, and Pascale Fung. 2020. MinTL: Minimalist transfer learning for task-oriented dialogue systems. arXiv preprint arXiv:2009.12005.
Yixin Liu and Pengfei Liu. 2021. SimCLS: A simple framework for contrastive learning of abstractive summarization. arXiv preprint arXiv:2106.01890.
Nurul Lubis, Christian Geishauser, Michael Heck, Hsien-Chin Lin, Marco Moresi, Carel van Niekerk, and Milica Gasic. 2020. LAVA: Latent action spaces via variational auto-encoding for dialogue policy optimization. In Proceedings of the 28th International Conference on Computational Linguistics, pages 465-479.
Shikib Mehri, Tejas Srinivasan, and Maxine Eskenazi. 2019. Structured fusion networks for dialog. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 165-177.
Tomáš Nekvinda and Ondřej Dušek. 2021. Shades of BLEU, flavours of success: The case of MultiWOZ. arXiv preprint arXiv:2106.05555.
Xiao Pan, Mingxuan Wang, Liwei Wu, and Lei Li. 2021. Contrastive learning for many-to-many multilingual neural machine translation. arXiv preprint arXiv:2105.09501.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
7 Appendix
7.1 Results on Policy Optimization
Table 4 shows the results of UBARv2 in the policy optimization setting. Notice that UBARv2 does not achieve much improvement over UBARv1. This is because session-level sampling and R-Mask both target the belief state, so using the ground-truth belief state may render their advantages obsolete.
Table 4: Policy optimization results on MultiWOZ Evaluation using ground-truth dialog states to generate responses.

Model | Inform | Success | BLEU | Comb
UniConv (Le et al., 2020) | 66.7 | 58.7 | 18.1 | 80.8
SFN (Mehri et al., 2019) | 93.4 | 82.3 | 14.1 | 101.9
HDSA (Chen et al., 2019) | 87.9 | 79.4 | 20.7 | 104.4
LAVA (Lubis et al., 2020) | 95.9 | 93.5 | 10.8 | 105.5
HDNO (Wang et al., 2020) | 93.3 | 83.4 | 17.8 | 106.1
MarCo (Wang et al., 2020) | 94.5 | 87.2 | 17.3 | 108.1
GALAXY (He et al., 2021) | 92.8 | 83.5 | 19.9 | 108.1
UBARv1 | 85.8 | 78.3 | 19.4 | 101.5
UBARv2 | 86.4 | 79.7 | 19.8 | 102.9
Table 5: Results of domain transfer. The first row is the base model of UBARv2 trained on the four domains and evaluated in-domain. The second row is the results of the base model fine-tuned with 100 new-domain examples, evaluated on the four domains. The last three rows are evaluations on the new domains with the zero-shot or few-shot base model (BM), or UBARv2 trained on full data, respectively.

Evaluation on 4 Domains | Except Hotel | Except Train | Except Attraction | Except Restaurant | Except Taxi
Base Model trained in-domain | 100.79 | 93.76 | 96.02 | 97.04 | 99.26
Few-shot BM on new domain | 89.15 | 68.83 | 86.60 | 80.47 | 78.09
UBARv2 on all domains | 106.81 | 99.38 | 100.08 | 100.58 | 101.75

Evaluation on New Domain | Hotel | Train | Attraction | Restaurant | Taxi
Zero-shot BM | 25.64 | 54.07 | 27.10 | 20.60 | 55.79
Few-shot BM on new domain | 59.74 | 84.13 | 87.39 | 77.71 | 90.98
UBARv2 on all domains | 92.04 | 102.27 | 102.04 | 101.21 |
1 https://github.com/budzianowski/multiwoz
2 https://github.com/Tomiinek/MultiWOZ_Evaluation
3 https://github.com/dingdingtom/UBARv2
GT Resp.: for how many people? | UBARv1: how many people will be staying?
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. Advances in Neural Information Processing Systems, 28.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3(6):1137-1155.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Govardana Sachithanandam Ramachandran, Kazuma Hashimoto, and Caiming Xiong. 2021. Causal-aware safe policy improvement for task-oriented dialogue. arXiv preprint arXiv:2103.06370.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2015. Minimum risk training for neural machine translation. arXiv preprint arXiv:1512.02433.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.
Yixuan Su, Lei Shu, Elman Mansimov, Arshit Gupta, Deng Cai, Yi-An Lai, and Yi Zhang. 2021. Multi-task pre-training for plug-and-play task-oriented dialogue system. arXiv preprint arXiv:2109.14739.
Haipeng Sun, Junwei Bao, Youzheng Wu, and Xiaodong He. 2022. BORT: Back and denoising reconstruction for end-to-end task-oriented dialog. arXiv preprint arXiv:2205.02471.
Bo-Hsiang Tseng, Yinpei Dai, Florian Kreyssig, and Bill Byrne. 2021. Transferable dialogue systems and user simulators. arXiv preprint arXiv:2107.11904.
Chaojun Wang and Rico Sennrich. 2020. On exposure bias, hallucination and domain shift in neural machine translation. arXiv preprint arXiv:2005.03642.
Jianhong Wang, Yuan Zhang, Tae-Kyun Kim, and Yunjie Gu. 2020. Modelling hierarchical structure between dialogue policy and natural language generator with option framework for task-oriented dialogue system. arXiv preprint arXiv:2006.06814.
Kai Wang, Junfeng Tian, Rui Wang, Xiaojun Quan, and Jianxing Yu. 2020. Multi-domain dialogue acts and response co-generation. In ACL 2020: 58th Annual Meeting of the Association for Computational Linguistics, pages 7125-7134.
Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search optimization. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1296-1306, Austin, Texas. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, Tie-Yan Liu, et al. 2021. R-Drop: regularized dropout for neural networks. Advances in Neural Information Processing Systems, 34.
Lijun Wu, Xu Tan, Di He, Fei Tian, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2018. Beyond error propagation in neural machine translation: Characteristics of language also matter. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3602-3611, Brussels, Belgium. Association for Computational Linguistics.
Yunyi Yang, Yunhao Li, and Xiaojun Quan. 2021. UBAR: Towards fully end-to-end task-oriented dialog system with GPT-2. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14230-14238.
Xiaoxue Zang, Abhinav Rastogi, Srinivas Sunkara, Raghav Gupta, Jianguo Zhang, and Jindong Chen. 2020. MultiWOZ 2.2: A dialogue dataset with additional annotation corrections and state tracking baselines. arXiv preprint arXiv:2007.12720.
Wen Zhang, Yang Feng, Fandong Meng, Di You, and Qun Liu. 2019. Bridging the gap between training and inference for neural machine translation. arXiv preprint arXiv:1906.02448.
| [
"https://github.com/budzianowski/",
"https://github.com/Tomiinek/MultiWOZ_",
"https://github.com/dingdingtom/UBARv2"
] |
[
"Guiding Visual Question Generation",
"Guiding Visual Question Generation"
] | [
"Nihir Vedd n.vedd19@imperial.ac.uk \nImperial College London\n\n",
"Zixu Wang zixu.wang@imperial.ac.uk \nImperial College London\n\n",
"Marek Rei marek.rei@imperial.ac.uk \nImperial College London\n\n",
"Yishu Miao y.miao20@imperial.ac.uk \nImperial College London\n\n",
"Lucia Specia l.specia@imperial.ac.uk \nImperial College London\n\n"
] | [
"Imperial College London\n",
"Imperial College London\n",
"Imperial College London\n",
"Imperial College London\n",
"Imperial College London\n"
] | [
"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies"
] | In traditional Visual Question Generation (VQG), most images have multiple concepts (e.g. objects and categories) for which a question could be generated, but models are trained to mimic an arbitrary choice of concept as given in their training data. This makes training difficult and also poses issues for evaluationmultiple valid questions exist for most images but only one or a few are captured by the human references. We present Guiding Visual Question Generation -a variant of VQG which conditions the question generator on categorical information based on expectations on the type of question and the objects it should explore. We propose two variant families: (i) an explicitly guided model that enables an actor (human or automated) to select which objects and categories to generate a question for; and (ii) 2 types of implicitly guided models that learn which objects and categories to condition on, based on discrete variables. The proposed models are evaluated on an answer-category augmented VQA dataset and our quantitative results show a substantial improvement over the current state of the art (over 9 BLEU-4 increase). Human evaluation validates that guidance helps the generation of questions that are grammatically coherent and relevant to the given image and objects. | 10.18653/v1/2022.naacl-main.118 | [
"https://www.aclanthology.org/2022.naacl-main.118.pdf"
] | 239,009,900 | 2110.08226 | 96488812bd748c9f15079b3b5f33a6141350634f |
Guiding Visual Question Generation
July 10-15, 2022
Nihir Vedd n.vedd19@imperial.ac.uk
Imperial College London
Zixu Wang zixu.wang@imperial.ac.uk
Imperial College London
Marek Rei marek.rei@imperial.ac.uk
Imperial College London
Yishu Miao y.miao20@imperial.ac.uk
Imperial College London
Lucia Specia l.specia@imperial.ac.uk
Imperial College London
Guiding Visual Question Generation
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesJuly 10-15, 2022/nihirv/guiding-vqg
In traditional Visual Question Generation (VQG), most images have multiple concepts (e.g. objects and categories) for which a question could be generated, but models are trained to mimic an arbitrary choice of concept as given in their training data. This makes training difficult and also poses issues for evaluationmultiple valid questions exist for most images but only one or a few are captured by the human references. We present Guiding Visual Question Generation -a variant of VQG which conditions the question generator on categorical information based on expectations on the type of question and the objects it should explore. We propose two variant families: (i) an explicitly guided model that enables an actor (human or automated) to select which objects and categories to generate a question for; and (ii) 2 types of implicitly guided models that learn which objects and categories to condition on, based on discrete variables. The proposed models are evaluated on an answer-category augmented VQA dataset and our quantitative results show a substantial improvement over the current state of the art (over 9 BLEU-4 increase). Human evaluation validates that guidance helps the generation of questions that are grammatically coherent and relevant to the given image and objects.
Introduction
In the last few years, the AI research community has witnessed a surge in multimodal tasks such as Visual Question Answering (VQA) (Antol et al., 2015;Anderson et al., 2018), Multimodal Machine Translation (Specia et al., 2016;Elliott et al., 2017;Barrault et al., 2018;Caglayan et al., 2019), and Image Captioning (IC) (Vinyals et al., 2015;Karpathy and Fei-Fei, 2015;Xu et al., 2015). Visual Question Generation (VQG) (Krishna et al., 2019), a multimodal task which aims to generate a question given an image, remains relatively under-researched despite the popularity of its textual counterpart. Throughout the sparse literature in this domain, different approaches have augmented and/or incorporated extra information as input. For example, Pan et al. (2019) emphasised that providing the ground truth answer to a target question is beneficial in generating a non-generic question. Krishna et al. (2019) pointed out that requiring an answer to generate questions violates a realistic scenario. Instead, they proposed a latent variable model using answer categories to help generate the corresponding questions. Recently, Scialom et al. (2020) incorporated a pretrained language model with object features and image captions for question generation.
In this work, we explore VQG from the perspective of 'guiding' a question generator. Guiding has shown success in image captioning (Zheng et al. (2018) and Ng et al. (2020)), and in this VQG work we introduce the notion of 'guiding' as conditioning a generator on inputs that match specific chosen properties from the target. We use the answer category and objects/concepts based on an image and target question as inputs to our decoder. We propose our explicit guiding approach to achieve this goal. We additionally investigate an implicit guiding approach which attempts to remove the dependency on an external actor (see more below).
The explicit variant (Section 3.1) is modelled around the notion that an actor can select a subset of detected objects in an image for conditioning the generative process. Depending on the application, this selection could be done by a human, an algorithm, or chosen randomly. For example, imagine either an open-conversation chat-bot or a language learning app. In the chat-bot case, a human may show the bot a picture of something. The bot may use randomly sampled concepts from the image (e.g. an object-detected tree) to ask the human a question about them. In the language learning case, the human may wish to select certain concepts they want the generated question to reflect. For example, they might select a subset of animal-related objects from the whole set of detected objects in order to generate questions for teaching the animal-related vocabulary in a language learning setting. Alongside the objects, the actor may also provide, or randomly sample, an answer category for the question generator.
The implicit variant (Section 3.2), on the other hand, is motivated by removing the dependency on the aforementioned actor. We provide two methodologies for our proposed implicit variant. The first uses a Gumbel-Softmax (Jang et al., 2016) to provide a discrete sample of object labels that can be used for generating a question. The second method employs a model with two discrete latent variables that learn an internally-predicted category and a set of objects relevant for the generated question, optimised with cross-entropy and variational inference (Kingma and Welling, 2014;Miao et al., 2016).
Human evaluation shows that our models can generate realistic and relevant questions, with our explicit model almost fooling humans when asked to determine which, out of two questions, is the generated question. Our experiments and results are presented in Section 5.
To summarise, our main contributions are: 1) The first work to explore guiding using object labels in Visual Question Generation; 2) A novel generative Transformer-based set-to-sequence approach for Visual Question Generation; 3) The first work to explore discrete variable models in Visual Question Generation; and 4) A substantial increase in quantitative metrics -our explicit model improves the current state of the art setups by over 9 BLEU-4 and 110 CIDEr.
2 Related Work

2.1 Visual Question Generation

The first paper in the field of VQG employed an RNN-based encoder-decoder framework alongside model-generated captions to generate questions. Since then, only a handful of papers have investigated VQG. Fan et al. (2018) demonstrated the successful use of a GAN in VQG systems, allowing for non-deterministic and diverse outputs. Jain et al. (2017) proposed a model using a VAE instead of a GAN; however, their improved results require the use of a target answer during inference. To overcome this unrealistic requirement, Krishna et al. (2019) augmented the VQA (Antol et al., 2015) dataset with answer categories, and proposed a model which does not require an answer during inference. Because their architecture uses information from the target as input (i.e. an answer category), their work falls under our definition of guided generation. More recently, Scialom et al. (2020) investigate the cross-modal performance of pre-trained language models by fine-tuning a BERT (Devlin et al., 2018) model on model-based object features and ground-truth image captions. Other work, such as Patro et al. (2018), Patro et al. (2020), and Uppal et al. (2020), either does not include BLEU scores higher than BLEU-1, which is not very informative, or addresses variants of the VQG task. In the latter case the models fail to beat previous SoTA on BLEU-4 for standard VQG. Recently, Xu et al. (2021) and Xie et al. (2021) achieved SoTA in VQG using graph convolutional networks. However, both works follow an unrealistic setup by conditioning their model on raw answers during training and inference - a dependency we attempt to remove.
Discrete (Latent) Variable Models
Discrete variable models are ideal for tasks which require controllable generation (Hu et al., 2017) or 'hard' indexing of a vector (Graves et al., 2016). Existing literature provides several methods to achieve discretization. NLP GAN literature (such as Seq-GAN and MaskGAN (Fedus et al., 2018)) commonly uses REINFORCE (Williams, 1992) to overcome differentiability issues with discrete outputs. Other discretization methodologies can be found in Variational Auto Encoder (VAE) literature (Kingma and Welling, 2014). Some older methodologies are NVIL (Mnih and Gregor, 2014) and VIMCO (Mnih and Rezende, 2016). However, VAE literature also introduced Concrete (Maddison et al., 2016), Gumbel-Softmax (Jang et al., 2016) and Vector Quantization (Oord et al., 2017) as discretization strategies (technically speaking, Concrete and Gumbel-Softmax are strongly peaked continuous distributions).
In this work, we use a Gumbel-Softmax approach to sample a distribution over objects. At inference time, given a set of object tokens, learning this 'hard' distribution allows the model to internally sample a subset of objects that produce the most informative question. Our variational model additionally learns a generative and variational distribution that allow the model to implicitly learn which objects are relevant to a question and answer pair whilst incorporating non-determinism for diverse outputs.

Figure 1 (b): Architecture of our implicit model. Similar to the explicit model, first an object detection model is used to extract object labels and object features. Object labels are sent to a non-linear MLP, after which a Gumbel-Softmax is applied to obtain the discrete vector 'Scores'. The Scores are then used to mask the object labels and predict a category. The masked object labels and predicted category are then sent to the text encoder. The outputs are fused with the image encoder outputs and sent to the decoder.

Figure 1 (c): Architecture of our variational implicit model. After the object detection model extracts the object labels and object features, they are sent to the variational and generative encoders. The variational encoder is used at train time only, and also receives the question and answer pair. Depending on whether we are training or in inference, we obtain a discrete vector z from the respective distribution. z is then used to mask the object labels. This variant then follows the same methodology as its non-variational counterpart. For this sub-figure only, the dashed lines indicate training.
Methodology
We introduce the shared concepts of our explicit and implicit model variants, before diving into the variant-specific methodologies (Section 3.1 & 3.2). For both variants, we keep the VQG problem grounded to a realistic scenario. That is, during inference, we can only provide the model with an image, and data that can either be generated by a model (e.g. object features or image captions) and/or trivially provided by an actor (i.e. answer category and a selected subset of the detected objects). However, during training, we are able to use any available information, such as images, captions, objects, answer categories, answers and target questions, employing latent variable models to minimise divergences between feature representations of data accessible at train time but not inference time. This framework is inspired by Krishna et al. (2019). In Appendix A, we discuss the differences of input during training, testing and inference.
Formally, the VQG problem is as follows: Given an image ĩ ∈ Ĩ, where Ĩ denotes a set of images, decode a question q. In the guided variant, for each ĩ, we also have access to textual utterances, such as ground truth answer categories and answers. The utterances could also be extracted by an automated model, such as image captions (Li et al., 2020), or object labels and features (Anderson et al., 2018). In our work, answer categories take on 1 out of 16 categorical variables to indicate the type of question asked. For example, "how many people are in this picture?" would have a category of "count" (see Krishna et al. (2019) for more details).
Text Encoder. For encoding the text, we use BERT (Devlin et al., 2018) as a pre-trained language model (PLM). Thus, for a tokenised textual input S̃ of length T, we can extract a d-dimensional representation for s̃_t ∈ S̃:

X = PLM(S̃) ∈ R^{T×d}

Image Encoder. Given an image ĩ, we can extract object features, f ∈ R^{k_o×2048}, and their respective normalized bounding boxes, b ∈ R^{k_o×4}, with the 4 dimensions referring to horizontal and vertical positions of the feature bounding box. Following the seminal methodology of Anderson et al. (2018), k_o is usually 36. Subsequent to obtaining these features, we encode the image using a Transformer (Vaswani et al., 2017), replacing the default position embeddings with the spatial embeddings extracted from the bounding box features (Krasser and Stumpf, 2020; Cornia et al., 2019). Specifically, given f, b from image ĩ:

i = Transformer(f, b) ∈ R^{k_o×d}
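A minimal PyTorch-style sketch of the kind of region-feature Transformer encoder described above, where the usual position embeddings are replaced by a projection of the normalized bounding boxes. The projection layers, layer count, and head count are illustrative assumptions rather than the paper's exact configuration.

```python
import torch.nn as nn

class RegionImageEncoder(nn.Module):
    """Encode k_o Faster-RCNN region features; bounding boxes supply the positional signal."""
    def __init__(self, feat_dim=2048, box_dim=4, d_model=768, n_layers=4, n_heads=8):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, d_model)  # f: (k_o, 2048) -> (k_o, d)
        self.box_proj = nn.Linear(box_dim, d_model)    # b: (k_o, 4)    -> (k_o, d)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, f, b):
        # f: (batch, k_o, 2048) region features, b: (batch, k_o, 4) normalized boxes
        x = self.feat_proj(f) + self.box_proj(b)       # spatial embeddings replace position embeddings
        return self.encoder(x)                         # i: (batch, k_o, d)
```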
Text Decoder. We employ a pretrained Transformer decoder for our task (Wolf et al., 2020). Following standard sequence-to-sequence causal decoding practices, our decoder receives some encoder outputs, and auto-regressively samples the next token, for use in the next decoding timestep. Our encoder outputs are the concatenation (the ; operator) of our textual and vision modality representations: X = [S; i] ∈ R^{(T+k_o)×d}, and our decoder takes on the form: q̂ = Decoder(X), where q̂ is the predicted question.
In this work, we primarily focus on a set-to-sequence problem as opposed to a sequence-to-sequence problem. That is, our textual input is not a natural language sequence, rather an unordered set comprising of tokens from the answer category, the object labels, and the caption. How this set is obtained is discussed in the following section. Due to the set input format, we disable positional encoding on the PLM encoder (Text Encoder in Figure 1).
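As a hedged illustration, one simple way to make a HuggingFace BERT encoder insensitive to token order for such set inputs is to zero out its learned position embeddings; this is only one possible implementation and not necessarily the mechanism the authors used.

```python
import torch
from transformers import BertModel

bert = BertModel.from_pretrained("bert-base-uncased")
# Zero the learned position embeddings so every slot receives the same positional signal,
# making the encoder treat the input tokens as an unordered set.
with torch.no_grad():
    bert.embeddings.position_embeddings.weight.zero_()
```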
Explicit Guiding
As mentioned in Section 1, the explicit variant requires some actor in the loop. Thus, in a real world setting, this model will run in two steps. Firstly, we run object detection (OD) and image captioning (IC) over an image and return relevant guiding information to the actor. The actor may then select or randomly sample a subset of objects which are sent to the decoder to start its generation process. If the actor opts for a random sample strategy, no human is needed during the inference process (see Appendix A for examples).
To enable this setup, we create paired data based on the guided notion. At a high level, our approach creates this data in three steps: 1) obtain object labels; 2) obtain concepts via IC; 3) filter the combined set of concepts against the target QA pair. Formally,
objects = OD(ĩ) ∈ R^{k_o}
cap = CaptionModel(ĩ) ∈ R^{T_cap}
cap = rmStopWords(cap) ∈ R^{<T_cap}
candidate_concepts = set([objects; cap]) ∈ R^{T_cc}    (1)

Here, OD stands for an object detector model, rmStopWords is a function which removes the stop words from a list, and set is a function which creates a set from the concatenation (the ; operator) of the detected objects and obtained caption. cap stands for caption. The set is of size T_cc < k_o + T_cap. Using this obtained candidate_concepts set, we run our filtration process.
Once the set of candidate concepts has been constructed, we filter them to only retain concepts relevant to the target QA pair. After removing stop words and applying the set function to the words in the QA pair, we use Sentence-BERT (Reimers and Gurevych, 2019) to obtain embeddings for the candidate QA pair and candidate_concepts (Eq 1). We subsequently compute a cosine similarity matrix between the two embedding matrices, and then select the top k most similar concepts. The chosen k concepts, S̃, are always a strict subset of the candidate concepts that are retrieved using automated image captioning or object detection. This process emulates the selection of objects an actor would select in an inference setting when given a choice of possible concepts, and creates paired data for the guided VQG task. We now concatenate an answer category to S̃: S = PLM([S̃; category]) ∈ R^{T×d}.
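A small sketch of this filtering step, assuming the sentence-transformers library is available; the particular Sentence-BERT checkpoint, the helper name, and the max-over-QA-tokens aggregation are illustrative assumptions rather than the paper's exact choices.

```python
from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative checkpoint, not necessarily the paper's

def select_concepts(candidate_concepts, qa_tokens, k=2):
    """Keep the k candidate concepts most similar to any token of the target QA pair."""
    concept_emb = sbert.encode(candidate_concepts, convert_to_tensor=True)  # (T_cc, d)
    qa_emb = sbert.encode(qa_tokens, convert_to_tensor=True)                # (T_qa, d)
    sim = util.cos_sim(concept_emb, qa_emb)            # (T_cc, T_qa) cosine similarity matrix
    best_per_concept = sim.max(dim=1).values           # best match of each concept to the QA pair
    top = best_per_concept.topk(k).indices.tolist()
    return [candidate_concepts[i] for i in top]

# e.g. select_concepts(["person", "dog", "frisbee", "grass", "man", "throwing"],
#                      ["what", "labrador", "about", "catch", "frisbee"], k=2)
# would be expected to return something like ["frisbee", "dog"].
```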
With text encoding S, we run the model, optimizing the negative log likelihood between the predicted question and the ground truth. Note that the concatenation in the decoder below is along the sequence axis (resulting in a tensor ∈ R^{(T+k_o)×d}).

q̂ = Decoder([S; i])
L = CrossEntropy(q̂, q)    (2)
Implicit Guiding
We now introduce our experiments for the implicit variant for VQG. This variant differs from its explicit counterpart as it aims to generate questions using only images as the input, while internally learning to predict the relevant category and objects. Mathematically, the explicit variant models q̂ = p(w_t | i, S̃, category, w_0, ..., w_{t−1}; θ), where S̃ and category are obtained as described in Section 3.1. During inference, the implicit variant instead attempts to model q̂ = p(w_t | i, ẽ_obj, e_cat, w_0, ..., w_{t−1}; θ), where ẽ_obj, e_cat are not explicitly fed into the model. Rather, they are determined internally as defined in Equation 6.
Given an image, we apply the same object detection model as in the explicit variants to extract object labels, which are then encoded using an embed layer. Formally,

objects = OD(ĩ) ∈ R^{k_o}
e_obj = embed(objects) ∈ R^{k_o×d}    (3)
Since we would like the implicit model to learn relevant objects for an image internally, we project each object in e_obj to a real-valued score:

scores = MLP(e_obj) ∈ R^{k_o}    (4)
Subsequently, we apply a hard Gumbel-Softmax (Jang et al., 2017) to obtain predictions over selected objects. Because Gumbel-Softmax samples from a log-log-uniform distribution, stochasticity is now present in our sampled objects. To sample k objects, we tile/repeat scores k times before inputting it into the Gumbel-Softmax. z̃, our k-hot sampled objects vector, is then used to mask object embeddings for use in decoding:

z̃ = gumbel-softmax(scores, k) ∈ R^{k_o}
ẽ_obj = z̃ * e_obj ∈ R^{k_o×d}    (5)
where * denotes element-wise multiplication. Categories can also be a strong guiding factor, and instead of making the category an explicit input, we build a classifier to predict possible categories. In this variant, ẽ_obj is used as an input to both our text encoder and the MLP responsible for the category prediction:

S = PLM(ẽ_obj) ∈ R^{k_o×d}
p(ĉat | ẽ_obj) = softmax(MLP(ẽ_obj)) ∈ R^{k_cat}    (6)

Using the one-hot representation of the predicted category (i.e. e_cat = one-hot(p(ĉat | ẽ_obj))), we can concatenate our image, PLM representation of objects, and predicted category to feed into the decoder: q̂ = Decoder([i; S; e_cat]) ∈ R^{T_q}. However, during training, we teacher force against the 'gold' set of objects, S̃ (obtained using candidate_concepts in Equation 1). Training and optimization thus follow:

q̂ = Decoder([i; S̃; e_cat]) ∈ R^{T_q}
L = CrossEntropy(q̂, q) + CrossEntropy(p(ĉat | ẽ_obj), cat) + StartEnd(ẽ_obj, S̃)    (7)

where StartEnd is a BERT QA-head style loss (Devlin et al., 2018) that uses binary cross entropy for each k in ẽ_obj.
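A minimal PyTorch-style sketch of this implicit selection head: objects are scored, k straight-through Gumbel-Softmax samples are drawn from the tiled scores, their union forms an (approximately) k-hot mask over the object embeddings, and a small classifier predicts the category. Layer sizes, the clamped union of the k samples, and the mean-pooled category head are simplifying assumptions, not the paper's exact implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class ImplicitObjectSelector(nn.Module):
    def __init__(self, d_model=768, n_categories=16, k=2):
        super().__init__()
        self.score_mlp = nn.Sequential(nn.Linear(d_model, d_model), nn.Tanh(), nn.Linear(d_model, 1))
        self.cat_mlp = nn.Linear(d_model, n_categories)
        self.k = k

    def forward(self, e_obj):                             # e_obj: (batch, k_o, d)
        scores = self.score_mlp(e_obj).squeeze(-1)        # (batch, k_o) real-valued object scores
        tiled = scores.unsqueeze(1).repeat(1, self.k, 1)  # tile k times: (batch, k, k_o)
        samples = F.gumbel_softmax(tiled, tau=1.0, hard=True, dim=-1)  # k one-hot draws
        z = samples.sum(dim=1).clamp(max=1.0)             # approx. k-hot mask: (batch, k_o)
        e_obj_masked = z.unsqueeze(-1) * e_obj            # masked object embeddings
        cat_logits = self.cat_mlp(e_obj_masked.mean(dim=1))  # category prediction
        return e_obj_masked, cat_logits, z
```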
Variational Implicit.
Hypothesising that ground-truth QA pairs might provide information useful to selecting objects, we additionally attempt to extend our model to incorporate QA pairs to learn a latent variational distribution over the objects. However, since QA pairs can only be used during training to learn a variational distribution, we introduce another generative distribution that is only conditioned on the images and extracted objects. We borrow the idea from latent variable models to minimise Kullback-Leibler (KL) divergence between the variational distribution and generative distribution, where the variational distribution is used during training and the generative distribution is used in inference.
Continuing from Equation 3, we build two matrices, M_gen and M_var. The former is a concatenation of the image features and object embeddings, and the latter the concatenation between the encoded QA pair and M_gen. Depending on whether we are in a training or inference regime, the CLS token of the relevant matrix is used to sample a mask, z, which is subsequently applied on the aforementioned object embeddings:

M_gen = encode([e_obj; i]) ∈ R^{2k_o×d}
e_qa = embed([Q; A]) ∈ R^{T_qa×d}
M_var = encode([e_qa; M_gen]) ∈ R^{(2k_o+T_qa)×d}
q_φ(z | M_gen, M_var) = MLP([M_gen^CLS; M_var^CLS]) ∈ R^{k_o}
p_θ(z | M_gen) = MLP(M_gen) ∈ R^{k_o}
z̃ = gumbel-softmax(z, k) ∈ R^{k_o}
ẽ_obj = z̃ * e_obj ∈ R^{k_o×d}
where q_φ(z | M_gen, M_var) is the variational distribution, p_θ(z | M_gen) is the generative distribution, and MLP denotes a multilayer perceptron for learning the alignment between objects and QA pairs. encode is an attention-based function such as BERT (Devlin et al., 2018). From here, our methodology follows on from Equation 6. However, our loss now maximises the ELBO:

L = E[log p_θ(q | z, ĉat)] − D_KL[q_φ(z | M_gen^CLS, M_var^CLS) || p_θ(z | M_gen^CLS)] + log p(ĉat | M_var)
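For concreteness, a hedged sketch of how the negative of this objective could be assembled as a training loss. Treating the object selection as a single categorical distribution per example, and the exact tensor shapes and reductions, are simplifying assumptions.

```python
import torch.nn.functional as F

def variational_loss(dec_logits, question_ids, cat_logits, cat_target, q_logits, p_logits):
    """Reconstruction CE + KL(q_phi || p_theta) over object selection + category CE.
    q_logits / p_logits: (batch, k_o) logits of the variational and generative distributions."""
    recon = F.cross_entropy(dec_logits.transpose(1, 2), question_ids)  # -E[log p(q | z, cat)]
    q_log = F.log_softmax(q_logits, dim=-1)
    p_log = F.log_softmax(p_logits, dim=-1)
    kl = (q_log.exp() * (q_log - p_log)).sum(dim=-1).mean()            # D_KL[q_phi || p_theta]
    cat_ce = F.cross_entropy(cat_logits, cat_target)                   # -log p(cat | M_var)
    return recon + kl + cat_ce
```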
Experiments
Datasets
We use the VQA v2.0 dataset (Antol et al., 2015) (CC-BY 4.0; https://visualqa.org/), a large dataset consisting of all relevant information for the VQG task. We follow the official VQA partition, i.e. 443.8K questions from 82.8K images for training, and 214.4K questions from 40.5K images for validation. Following Krishna et al. (2019), we report the performance on the validation set as the annotated categories and answers for the VQA test set are not available.
We use answer categories from the annotations of Krishna et al. (2019). The top 500 answers in the VQA v2.0 dataset are annotated with a label from the set of 15 possible categories, which covers 82% of the VQA v2.0 dataset; the other answers are treated as an additional category. These annotated answer categories include objects (e.g. "mountain", "flower"), attributes (e.g. "cold", "old"), color, counting, etc.
We report BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), CIDEr (Vedantam et al., 2015), METEOR (Lavie and Agarwal, 2007), and MSJ (Montahaei et al., 2019) as evaluation metrics. The MSJ metric accounts for both the diversity of generated outputs, and the n-gram overlap with the ground truth utterances.
Comparative Approaches
We compare our models with four recently proposed VQG models: Information Maximising VQG (IMVQG; supervised with image and answer category) (Krishna et al., 2019), What BERT Sees (WBS; supervised with image and image caption) (Scialom et al., 2020), Deep Bayesian Network (DBN; supervised with image, scenes, image captions and tags/concepts) (Patro et al., 2020), and Category Consistent Cyclic VQG (C3VQG; supervised with image and answer category) (Uppal et al., 2020). We follow IMVQG's evaluation setup because they hold the current SoTA in VQG for realistic inference regimes. We omit Xu et al. (2021) and Xie et al. (2021) from our table of results because these models follow an unrealistic inference regime, requiring an explicit answer during training and inference. Our baseline is an image-only model, without other guiding information or latent variables.
Implementation Details
In Section 3 we described the shared aspects of our model variants. The reported scores in Section 5 use the same hyperparameters and model initialisation. A table of hyperparameters and training details can be found in Appendix B. BERT Base (Devlin et al., 2018) serves as our PLM encoder and, following Wolf et al. (2020); Scialom et al. (2020), we use a pre-trained BERT model for decoding too. Though typically not used for decoding, by concatenating the encoder inputs with a [MASK] token and feeding this to the decoder model, we are able to obtain an output (e.g. q̂_1). This decoded output is concatenated with the original input sequence, and once again fed to the decoder to sample the next token. Thus, we use the BERT model as a decoder in an auto-regressive fashion.
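The sketch below illustrates the mask-append-predict decoding loop described above, using a masked-language-model head purely for demonstration on text-only inputs; the actual model conditions on the fused text and image encoder outputs, and the checkpoint name, greedy decoding, and stopping criterion are illustrative assumptions.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tok = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")

@torch.no_grad()
def greedy_decode(prefix_ids, max_len=20):
    """Append a [MASK], predict the token at that slot, keep it, and repeat."""
    ids = prefix_ids                                      # (1, T) encoder-input token ids
    for _ in range(max_len):
        with_mask = torch.cat([ids, torch.tensor([[tok.mask_token_id]])], dim=1)
        logits = mlm(input_ids=with_mask).logits          # (1, seq_len, vocab)
        next_id = logits[0, -1].argmax().item()           # prediction at the [MASK] position
        if next_id == tok.sep_token_id:                   # stop once the model emits [SEP]
            break
        ids = torch.cat([ids, torch.tensor([[next_id]])], dim=1)
    return ids
```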
To encode the images based on the Faster-RCNN object features (Ren et al., 2015; Anderson et al., 2018), we use a standard Transformer (Vaswani et al., 2017) encoder. Empirically, we find k = 2 to be the best number of sampled objects.
Results
We present quantitative results in Table 1 and qualitative results in Figure 2. We evaluate the explicit, implicit and variational implicit models in a singlereference setup, as the chosen input concepts are meant to guide the model output towards one particular target reference.
Quantitative Results
Starting with the explicit variant, as seen in Table 1, we note that our image-only baseline model achieves a BLEU-4 score of 5.95. We test our model with different combinations of text features to identify which textual input is most influential to the reported metrics. We notice that the contribution of the category is the most important text input with respect to improving the score of the model, raising the BLEU-4 score by more than 11 points (image-category) over the aforementioned baseline.

Table 1: Single reference evaluation results. "*-guided" refers to the combination of category and objects. In the explicit variant only, objects refers to the subset of detected objects and caption keywords, filtered on the target QA pair. † indicates an unrealistic inference regime, using answers as input for question generation. ‡ WBS scores are from single reference evaluation based on the VQA1.0 pre-trained "Im. + Cap." model provided by the authors.
However, whilst the BLEU-4 for the image-object variant is 2.3 points lower, it outperforms the image-category variant by 3.9 points on the diversity-orientated metric MSJ-5 - indicating that the image-category variant creates more generic questions. As expected, the inclusion of both the category and objects (image-guided) outperforms either of the previously mentioned models, achieving a new state-of-the-art result of 24.4 BLEU-4. This combination also creates the most diverse questions, with an MSJ-5 of 57.3.
We also test our hypothesis that guiding produces questions that are relevant to the fed in concepts. This is tested with 'image-guided-random' variant. This variant is the same trained model as 'image-guided', but uses k = 2 random concepts from a respective image instead of using the ground truth question to generate concepts. Our results show that guiding is an extremely effective strategy to produce questions related to conceptual information, with a BLEU-4 score difference of over 20 points. We refer the reader to Section 5.3 for human evaluation which again validates this hypothesis, and Section 3.1 for an explanation of why guiding is valid for evaluating VQG models.
We evaluate the implicit models as follows. The implicit image-category variant does not predict any objects internally. It uses all image features and object embeddings alongside the category supervision signal as described in Equation 7. The implicit image-guided models use the 'gold' objects at inference (see Section 3.1). If these variants fit the 'gold' objects well, it indicates that their generative abilities are suitable for guiding/conditioning on predicted or random objects. The image-guided-pred variants are evaluated using internally predicted objects, and are the model variant that would be used in a real inference setting. Finally, the image-guided-random variants are fed random object labels at inference.
For implicit guiding to be a valid methodology, we need to validate two criteria: 1) Successfully conditioning the decoder on guiding information; 2) Better than random accuracy of object prediction/selection. Note that intuitively, the implicit model is expected to perform worse than the explicit model in terms of the language generation metrics. This is because of the inherently large entropy of the relevant answer category and the objects given an image. However, if the learned distributions over the categories and objects can capture the relevant concepts of different images, they may benefit the question generation when compared with image-only.
According to Table 1, by predicting just an answer category and no objects (image-category), the proposed implicit model beats the image-only baseline. The BLEU-4 score difference is less than 1 with the best performing WBS model (Scialom et al., 2020) -which also generates questions without explicit guided information.
As mentioned above, we can evaluate the implicit model either by feeding the 'gold' objects obtained as described in Section 3.1, or the internally predicted objects as described in Section 3.2. These form the variants image-guided and image-guided-pred respectively. For both the implicit and variational implicit models, image-guided is expected to perform the best. Results validate this, showing a performance of 14.2 and 12.6 BLEU-4 respectively. Importantly, the relatively high scores of these guided models (compared to the comparative approaches) show that these models can successfully be conditioned on guiding information.
We also notice that for both types of implicit models, image-guided-pred outperforms image-guided-random. Specifically for the non-variational implicit, we see a higher BLEU-4 score difference of 2.7. Interestingly, despite this BLEU-4 difference being higher than its variational counterpart, there is a trade-off for the diversity-orientated MSJ metric. This indicates that although generated questions are discretely 'closer' to the ground truth, similar phrasing is used between the generated questions. In fact, an acute case of this phenomenon occurs for the image-category variant, where the BLEU-4 score is higher than for image-guided-pred or image-guided-random. In this case, qualitative analysis shows us that the higher BLEU-4 score can be attributed to the generic nature of the generated question. Failure cases of automatic evaluation metrics in NLP are discussed further in Caglayan et al. (2020).
To satisfy the 'better than random accuracy of object prediction/selection' criterion previously outlined, we measure the overlap of the k predicted objects vs the k 'gold' object labels. These 'gold' object labels are obtained similarly to the explicit variant (Section 3.1); however, the caption tokens are not fed to the filtering process. Random accuracy for selecting objects is 12.5%. Our overlap accuracy on implicit image-pred is 18.7% - outperforming random selection. Variational implicit image-pred failed to outperform random accuracy.

Table 2: Human evaluation results (and standard dev.)

                Baseline        Implicit        V-Implicit      Explicit
Experiment 1    34.3% ± 0.1     47.1% ± 0.12    36.7% ± 0.08    44.9% ± 0.08
Experiment 2    95.9% ± 0.03    76.6% ± 0.16    89% ± 0.09      93.5% ± 0.06
Experiment 3    -               -               -               77.6% ± 0.09
Experiment 4    -               -               -               74.1%/40.0% ± 0.07/0.18
Qualitative Results
Qualitative results are shown in Figure 2 and Appendix D. Figure 2 depicts how outputs from different model variants compare to ground truth questions. Without any guiding information, the image-only variant is able to decode semantic information from the image; however, this leads to generic questions. The implicit variant, for which we also report the predicted category and objects, mostly generates on-topic and relevant questions. Focusing on the explicit variant, we witness high-quality, interesting, and on-topic questions. Appendix D depicts how well our explicit image-guided variant handles a random selection of detected objects given the image. This experiment intends to gauge the robustness of the model to detected objects which may fall on the low tail of the human question-generating data distribution. To clarify, humans are likely to ask commonsense questions which generally focus on obvious objects in the image. By selecting objects at random for the question to be generated on, the model has to deal with object permutations not seen during training, and categories that are invalid for an image.
Analysing the outputs, when viable categories and objects that are expected to fall in a commonsense distribution are sampled, the model can generate high quality questions. Interestingly, we observe that when the sampled objects are not commonsense (e.g. "ears arms" for the baby and bear picture), the model falls back to using the object features instead of the guiding information. This phenomenon is also witnessed when the sampled category does not make sense for the image (e.g. category 'animal' in image 531086). Despite the category mismatch, the model successfully uses the object information to decode a question.
Human Evaluation
We ask seven humans across four experiments to evaluate the generative capabilities of our models. Experiment 1 is a visual Turing test: given an image, a model generated question and a ground truth question, we ask a human to determine which question they believe is model generated. Experiment 2 attempts to discern the linguistic and grammatical capabilities of our model by asking a human to make a binary choice about whether the generated question seems natural. Experiment 3 shows a human an image alongside a model generated question (explicit variant). Then, we ask the human to make a choice about whether the generated question is relevant to the image (i.e. could an annotator have feasibly asked this question during data collection). Finally, experiment 4 judges whether objects are relevant to a generated question. The experiment is set up with true-pairs and adversarial-pairs. True-pairs are samples where the shown objects are the ones used to generate the question. Adversarial-pairs show a different set of objects than those which generated the question. If more true-pairs are marked correct (i.e. if at least one of the objects is relevant to the generated question) than the adversarial-pairs, then our model successfully generates questions on guiding information.
Whilst not yet at the ideal 50%, the explicit approach provides a promising step towards beating the visual Turing Test. Experiment 2 evaluates the grammatical plausibility of the generated questions. In general, all models perform extremely well in this experiment, with the baseline variant generating grammatically correct sentences 96% of the time. This is expected, as the baseline typically falls back to decoding easy/generic questions. Experiment 3, is evaluated on our best performing model (explicit image-guided). Here, 78% of the generated questions are marked as relevant/on-topic given an image. Finally, experiment 4's results show true-pairs marked as correct vs adversarial-pairs (incorrectly) marked as correct. Since the former is larger than the latter -72% vs 42%, the model can successfully use guiding/object information to create on-topic questions.
Conclusions
We presented a guided approach to visual question generation (VQG), which allows for the generation of questions that focus on specific chosen aspects of the input image. We introduced three variants for this task, the explicit, implicit, and variational implicit. The former generates questions based on an explicit answer category and a set of concepts from the image. In contrast, the latter two discretely predict these concepts internally, receiving only the image as input. The explicit model achieves SoTA results when evaluated against comparable models. Qualitative evaluation and human-based experiments demonstrate that both variants produce realistic and grammatically valid questions.
1648
A Training, testing and inference
Here, using an example, we clarify the inputs to our explicit model (Section 3.1) in the training, testing and inference setups. Firstly, we create a set of candidate_concepts (see eq. 1) from the caption and objects: [person, dog, frisbee, grass, man, throwing] (∈ R 6 ). These words are individually embedded. Secondly, we concatenate and embed the set of question and answer tokens (∈ R 7 ).
Then, we construct a matrix which gives us cosine similarity scores for each candidate_concepts token to a QA token (∈ R 6×7 ). We choose k = 2 tokens from the candidate_concepts which are most similar to the words from the QA. Here, "dog" and "frisbee" are likely chosen. Our input to the model is then <i, "object", "dog", "frisbee">.
Notice that it is possible for these words to be in the QA pair (e.g. "frisbee"). Importantly, these words have not been fed from the QA pair -they have been fed in from model-obtained concepts ({Object} and {Caption}). Philosophically similar, Krishna et al. (2019) constructed inputs based on target information for use in training and benchmarking. Testing. Imagine a data labeler creating questions based on an image. They would look at the image, and decide on the concepts to create the question for. Our testing methodology follows this intuition using the strategy outlined above: the k = 2 selected objects from candidate_concepts is a programmatic attempt for selecting concepts which could generate the target question. Note that there can be many questions generated for a subset of concepts (e.g. 'is the dog about to catch the frisbee?', 'what is the flying object near the dog?' etc.). As outlined above, we are not taking concepts from the target. Rather we use information from the target to emulate the concepts an actor would think of to generate the target question. Because there can be different concepts questions are based on for one image (see ground-truth questions in Appendix D), our strategy allows us to generate questions which might be similar to a singular target question. This leads to an evaluation which fairly uses information a human has access to to generate a question. Inference. However, in the real world, there is no 'ground-truth' question. In this case, we simply feed image features, and actor selected concepts to our question generator model. The selection process of the actor may be random -in which case a human agent does not need to be involved in the question generation process. The k ≤ 2 selected concepts here are a subset of candidate_concepts, which are fully generated from models. Empirically, for both variants, we find k = 2 to be the best number of sampled objects. All experiments are run with early stopping (patience 10; training iterations capped at 35000) on the BLEU-4 metric. Scores reported (in Section 5) are from the highest performing checkpoint. We use the Py-Torch library and train our model on a V100 GPU (1.5 hours per epoch). Our models use the heavier Transformers than previous SoTA we compare to. For example, (Krishna et al., 2019) use ResNet and RNNs for their image encoder and question generator (∼18M parameters). Our models have between 200-300M parameters. To validate that our results are not purely attributable to model size, we train a truncated version of image-category and image-guided (explicit only). We truncate our models by using only the first and last layers of our BERT based encoders and decoders (∼36M parameters). Our closest model to theirs is the (truncated) explicit image-category, which achieves a BLEU-4 of 16.2 as seen in Table 4 -an improvement of 1.7 BLEU-4 over IMVQG's t-path. Even if we attribute 100% of this score improvement to the pre-trained nature of the BERT models we use, our methodology still introduces a 5.9 BLEU-4 increase over the image-category combination (truncated imageguided achieves a BLEU-4 of 22.1).
B Hyperparameters and training details
C Impact of model size on results
Model
D More Qualitative Examples.
Examples can be seen in Figure 2 (next page). When examined, we see that the generated question accurately uses the guiding category when the category is valid for the given image. For example, 531086/1 has animal as the sampled category. Because no animal is present in the image, this category isn't valid for the image. The generated question then correctly relies on the object labels and visual modality to generate a valid question given the image. Similarly for 490505/2.
There are some cases where a sampled object/concept is not valid given an image. For example, at least one of the objects in 22929/1, 41276/1, 531086/2, 281711/1, 490505/1 is not valid. In this case the model usually relies on the other available guiding information, prioritising the category information (e.g. 531086/2). In rare cases, the model has failure cases where some of the valid sampled objects may not be used in the generated question (e.g. 293705/2 and 490505/2).
The concept extractor utilises a pre-trained image captioning model and object detector model. This may lead to an accumulation of downstream errors, especially if the data fed into the pre-trained models are from a significantly different data generating distribution than those used to train the model. In this erroneous case, the model will likely fallback to rely on the image modality and category information to produce a generic question (e.g. 22929/1, 22929/2, 531085/1, 293705/2).
E Responsible NLP Research

E.1 Limitations
Our approach claims to achieve SoTA in Visual Question Generation. However, we are only able to train and test our model on one dataset because it is the only existing dataset which contains answer categories. It is possible that our work may be suitable for use in a zero-shot setting, but we have not evaluated or tested our model in this setup.
E.2 Risks
Our model could be used to generate novel questions for use in Visual Question Answering. This may have a knock-on effect which leads to training more VQA models, thus having a negative impact on the environment.
Our model could be used in downstream tasks such as language learning. There may be incorrectness in the generated questions, which has a knock-on effect on a user using this model (e.g. the user may gain a wrong understanding of a concept because of a question the model has generated).
Figure 1: Architecture of the explicit model (a) and implicit model (b).

Figure 2: Qualitative examples. The ground truth is the target question for the baseline, implicit and explicit. The explicit variant uses image-guided, whereas the implicit uses the non-variational image-pred.
Training:
• Ground truth question: What is the labrador about to catch?
• Image: i ∈ R^{k_o×d}
• {Caption}: A man throwing a frisbee to a dog
• {Objects}: person, dog, frisbee, grass
• Answer: Frisbee
• Category: Object
N.B. {Caption} and {Objects} are both model generated, requiring only an image as input. These inputs are thus available at inference time.
Table 3: Hyperparameters for our model variants.

Table 4: Truncated models single reference evaluation results.
Acknowledgments

Figure 3: Qualitative outputs from explicit variant being fed random guiding information. Failure cases are also shown.
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV).
Loïc Barrault, Fethi Bougares, Lucia Specia, Chiraag Lala, Desmond Elliott, and Stella Frank. 2018. Findings of the third shared task on multimodal machine translation. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 304-323, Belgium, Brussels. Association for Computational Linguistics.
Ozan Caglayan, Pranava Madhyastha, and Lucia Specia. 2020. Curious case of language generation evaluation metrics: A cautionary tale. Pages 2322-2328.
Ozan Caglayan, Pranava Madhyastha, Lucia Specia, and Loïc Barrault. 2019. Probing the need for visual context in multimodal machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4159-4170, Minneapolis, Minnesota. Association for Computational Linguistics.
Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, and Rita Cucchiara. 2019. Meshed-memory transformer for image captioning. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 10575-10584.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding.
Desmond Elliott, Stella Frank, Loïc Barrault, Fethi Bougares, and Lucia Specia. 2017. Findings of the second shared task on multimodal machine translation and multilingual image description. In Proceedings of the Second Conference on Machine Translation, pages 215-233, Copenhagen, Denmark. Association for Computational Linguistics.
Zhihao Fan, Zhongyu Wei, Siyuan Wang, Yang Liu, and Xuanjing Huang. 2018. A reinforcement learning framework for natural question generation using bi-discriminators. Technical report.
William Fedus, Ian Goodfellow, and Andrew M. Dai. 2018. MaskGAN: Better text generation via filling in the ______. In 6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings.
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. 2016. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471-476.
Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. Technical report.
Unnat Jain, Ziyu Zhang, and Alexander Schwing. 2017. Creativity: Generating diverse questions using variational autoencoders. In Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017-January:5415-5424.
Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with Gumbel-Softmax. 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings.
Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with Gumbel-Softmax.
Andrej Karpathy and Li Fei-Fei. 2015. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational bayes. In 2nd International Conference on Learning Representations, ICLR 2014 - Conference Track Proceedings.
fairseq-image-captioning. Stumpf Krasser, Krasser and Stumpf. 2020. fairseq-image-captioning. https://github.com/krasserm/ fairseq-image-captioning.
Information Maximizing Visual Question Generation. Ranjay Krishna, Michael Bernstein, Li Fei-Fei, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. the IEEE Computer Society Conference on Computer Vision and Pattern RecognitionRanjay Krishna, Michael Bernstein, and Li Fei-Fei. 2019. Information Maximizing Visual Question Gen- eration. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recog- nition, 2019-June:2008-2018.
METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. Alon Lavie, Abhaya Agarwal, Proceedings of the Second Workshop on Statistical Machine Translation. the Second Workshop on Statistical Machine TranslationPrague, Czech RepublicAssociation for Computational LinguisticsAlon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceed- ings of the Second Workshop on Statistical Machine Translation, pages 228-231, Prague, Czech Republic. Association for Computational Linguistics.
Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks. Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, Jianfeng Gao, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12375 LNCS. Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. 2020. Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks. Lecture Notes in Com- puter Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinfor- matics), 12375 LNCS:121-137.
Visual question generation as dual task of visual question answering. Yikang Li, Nan Duan, Bolei Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang, Ming Zhou, CVPRYikang Li, Nan Duan, Bolei Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang, and Ming Zhou. 2018. Vi- sual question generation as dual task of visual ques- tion answering. CVPR.
ROUGE: A package for automatic evaluation of summaries. Chin-Yew Lin, Text Summarization Branches Out. Barcelona, SpainAssociation for Computational LinguisticsChin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. Chris J Maddison, Andriy Mnih, Yee Whye Teh, 10.48550/arxiv.1611.007125th International Conference on Learning Representations, ICLR 2017 -Conference Track Proceedings. Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2016. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. 5th In- ternational Conference on Learning Representations, ICLR 2017 -Conference Track Proceedings.
Neural variational inference for text processing. Yishu Miao, Lei Yu, Phil Blunsom, PMLRProceedings of The 33rd International Conference on Machine Learning. The 33rd International Conference on Machine LearningNew York, New York, USA48Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neu- ral variational inference for text processing. In Pro- ceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Ma- chine Learning Research, pages 1727-1736, New York, New York, USA. PMLR.
Neural Variational Inference and Learning in Belief Networks. Andriy Mnih, Karol Gregor, 10.48550/arxiv.1402.003031st International Conference on Machine Learning. 5Andriy Mnih and Karol Gregor. 2014. Neural Varia- tional Inference and Learning in Belief Networks. 31st International Conference on Machine Learning, ICML 2014, 5:3800-3809.
Variational inference for Monte Carlo objectives. Andriy Mnih, Danilo J Rezende, 10.48550/arxiv.1602.0672533rd International Conference on Machine Learning, ICML 2016. 5Andriy Mnih and Danilo J. Rezende. 2016. Variational inference for Monte Carlo objectives. 33rd Interna- tional Conference on Machine Learning, ICML 2016, 5:3237-3248.
Jointly measuring diversity and quality in text generation models. Ehsan Montahaei, Danial Alihosseini, Mahdieh Soleymani Baghshah, Ehsan Montahaei, Danial Alihosseini, and Mahdieh So- leymani Baghshah. 2019. Jointly measuring diversity and quality in text generation models.
Understanding Guided Image Captioning Performance across Domains. G Edwin, Bo Ng, Piyush Pang, Radu Sharma, Google Soricut, Research, Edwin G Ng, Bo Pang, Piyush Sharma, Radu Soricut, and Google Research. 2020. Understanding Guided Image Captioning Performance across Domains.
Aaron Van Den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural Discrete Representation Learning. Advances in Neural Information Processing Systems, 2017-December. Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural Discrete Representation Learning. Advances in Neural Information Process- ing Systems, 2017-December:6307-6316.
Liangming Pan, Wenqiang Lei, Tat-Seng Chua, and Min-Yen Kan. 2019. Recent Advances in Neural Question Generation. Liangming Pan, Wenqiang Lei, Tat-Seng Chua, and Min-Yen Kan. 2019. Recent Advances in Neural Question Generation.
Bleu: a method for automatic evaluation of machine translation. Kishore Papineni, Salim Roukos, Todd Ward, Wei-Jing Zhu, 10.3115/1073083.1073135Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. the 40th Annual Meeting of the Association for Computational LinguisticsPhiladelphia, Pennsylvania, USAAssociation for Computational LinguisticsKishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Multimodal differential network for visual question generation. N Badri, Sandeep Patro, Vinod K Kumar, Vinay P Kurmi, Namboodiri, 10.18653/v1/d18-1434Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsBadri N. Patro, Sandeep Kumar, Vinod K. Kurmi, and Vinay P. Namboodiri. 2018. Multimodal differential network for visual question generation. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018, pages 4002-4012. Association for Computational Linguis- tics.
Deep Bayesian Network for Visual Question Generation. N Badri, Vinod K Patro, Sandeep Kurmi, Vinay P Kumar, Namboodiri, Proceedings -2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020. -2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020Badri N. Patro, Vinod K. Kurmi, Sandeep Kumar, and Vinay P. Namboodiri. 2020. Deep Bayesian Net- work for Visual Question Generation. Proceedings -2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020, pages 1555-1565.
Nils Reimers, Iryna Gurevych, Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. EMNLP-IJCNLP 2019 -2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference. Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence Embeddings using Siamese BERT- Networks. EMNLP-IJCNLP 2019 -2019 Confer- ence on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference, pages 3982-3992.
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Kaiming Shaoqing Ren, Ross He, Jian Girshick, Sun, Technical reportShaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster R-CNN: Towards Real-Time Ob- ject Detection with Region Proposal Networks. Tech- nical report.
Thomas Scialom, Patrick Bordes, Paul-Alexis Dray, Jacopo Staiano, and Patrick Gallinari. 2020. What BERT Sees: Cross-Modal Transfer for Visual Question Generation. Thomas Scialom, Patrick Bordes, Paul-Alexis Dray, Ja- copo Staiano, and Patrick Gallinari. 2020. What BERT Sees: Cross-Modal Transfer for Visual Ques- tion Generation.
A shared task on multimodal machine translation and crosslingual image description. Lucia Specia, Stella Frank, Khalil Sima'an, Desmond Elliott, 10.18653/v1/W16-2346Proceedings of the First Conference on Machine Translation. the First Conference on Machine TranslationBerlin, GermanyAssociation for Computational Linguistics2Shared Task PapersLucia Specia, Stella Frank, Khalil Sima'an, and Desmond Elliott. 2016. A shared task on multimodal machine translation and crosslingual image descrip- tion. In Proceedings of the First Conference on Ma- chine Translation: Volume 2, Shared Task Papers, pages 543-553, Berlin, Germany. Association for Computational Linguistics.
C3VQG: Category Consistent Cyclic Visual Question Generation. Shagun Uppal, Anish Madan, Sarthak Bhagat, Yi Yu, Rajiv Ratn Shah, arXivShagun Uppal, Anish Madan, Sarthak Bhagat, Yi Yu, and Rajiv Ratn Shah. 2020. C3VQG: Category Con- sistent Cyclic Visual Question Generation. arXiv.
. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, Illia Polosukhin, Attention Is All You Need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need.
Cider: Consensus-based image description evaluation. C L Ramakrishna Vedantam, Devi Zitnick, Parikh, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Ramakrishna Vedantam, C. L. Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4566- 4575.
Show and tell: A neural image caption generator. Oriol Vinyals, A Toshev, S Bengio, D Erhan, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Oriol Vinyals, A. Toshev, S. Bengio, and D. Erhan. 2015. Show and tell: A neural image caption generator. 2015 IEEE Conference on Computer Vision and Pat- tern Recognition (CVPR), pages 3156-3164.
Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Ronald J Williams, 10.1007/bf00992696Machine Learning. 8Ronald J. Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256.
Transformers: State-of-the-art natural language processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Clara Patrick Von Platen, Yacine Ma, Julien Jernite, Canwen Plu, Teven Le Xu, Sylvain Scao, Mariama Gugger, Quentin Drame, Alexander M Lhoest, Rush, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. the 2020 Conference on Empirical Methods in Natural Language Processing: System DemonstrationsOnlineAssociation for Computational LinguisticsThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transform- ers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Multiple Objects-Aware Visual Question Generation. Jiayuan Xie, Yi Cai, Qingbao Huang, Tao Wang, 10.1145/3474085.3476969MM 2021 -Proceedings of the 29th ACM International Conference on Multimedia. Jiayuan Xie, Yi Cai, Qingbao Huang, and Tao Wang. 2021. Multiple Objects-Aware Visual Question Gen- eration. MM 2021 -Proceedings of the 29th ACM International Conference on Multimedia, pages 4546- 4554.
Show, attend and tell: Neural image caption generation with visual attention. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, Yoshua Bengio, PMLRProceedings of the 32nd International Conference on Machine Learning. the 32nd International Conference on Machine LearningLille, France37Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2048-2057, Lille, France. PMLR.
Radial graph convolutional network for visual question generation. Xing Xu, Tan Wang, Yang Yang, Alan Hanjalic, Heng Tao Shen, 10.1109/TNNLS.2020.2986029IEEE Transactions on Neural Networks and Learning Systems. 324Xing Xu, Tan Wang, Yang Yang, Alan Hanjalic, and Heng Tao Shen. 2021. Radial graph convolutional network for visual question generation. IEEE Trans- actions on Neural Networks and Learning Systems, 32(4):1654-1667.
SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. Lantao Yu, Weinan Zhang, Jun Wang, Yong Yu, 10.48550/arxiv.1609.0547331st AAAI Conference on Artificial Intelligence, AAAI 2017. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2016. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. 31st AAAI Conference on Artificial Intelligence, AAAI 2017, pages 2852-2858.
Automatic Generation of Grounded Visual Questions. Shijie Zhang, Lizhen Qu, Shaodi You, Zhenglu Yang, Jiawan Zhang, IJCAI International Joint Conference on Artificial Intelligence. Shijie Zhang, Lizhen Qu, Shaodi You, Zhenglu Yang, and Jiawan Zhang. 2016. Automatic Generation of Grounded Visual Questions. IJCAI International Joint Conference on Artificial Intelligence, pages 4235-4243.
Intention Oriented Image Captions with Guiding Objects. Yue Zheng, Yali Li, Shengjin Wang, 10.1109/CVPR.2019.00859Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. the IEEE Computer Society Conference on Computer Vision and Pattern RecognitionYue Zheng, Yali Li, and Shengjin Wang. 2018. Inten- tion Oriented Image Captions with Guiding Objects. Proceedings of the IEEE Computer Society Confer- ence on Computer Vision and Pattern Recognition, 2019-June:8387-8396.
| [
"https://github.com/krasserm/"
] |
[
"End-to-End Text Classification via Image-based Embedding using Character-level Networks",
"End-to-End Text Classification via Image-based Embedding using Character-level Networks"
] | [
"Shunsuke Kitada shunsuke.kitada.8y@stu. \nHitoshi IYATOMI Major in Applied Informatics\nGraduate School of Science and Engineering Hosei University Tokyo\nJapan\n",
"Ryunosuke Kotani ryunosuke.kotani.58@ \nHitoshi IYATOMI Major in Applied Informatics\nGraduate School of Science and Engineering Hosei University Tokyo\nJapan\n"
] | [
"Hitoshi IYATOMI Major in Applied Informatics\nGraduate School of Science and Engineering Hosei University Tokyo\nJapan",
"Hitoshi IYATOMI Major in Applied Informatics\nGraduate School of Science and Engineering Hosei University Tokyo\nJapan"
] | [] | For analysing and/or understanding languages having no word boundaries based on morphological analysis such as Japanese, Chinese, and Thai, it is desirable to perform appropriate word segmentation before word embeddings. But it is inherently difficult in these languages. In recent years, various language models based on deep learning have made remarkable progress, and some of these methodologies utilizing characterlevel features have successfully avoided such a difficult problem. However, when a model is fed character-level features of the above languages, it often causes overfitting due to a large number of character types. In this paper, we propose a CE-CLCNN, character-level convolutional neural networks using a character encoder to tackle these problems. The proposed CE-CLCNN is an end-to-end learning model and has an image-based character encoder, i.e. the CE-CLCNN handles each character in the target document as an image. Through various experiments, we found and confirmed that our CE-CLCNN captured closely embedded features for visually and semantically similar characters and achieves state-of-the-art results on several open document classification tasks. In this paper we report the performance of our CE-CLCNN with the Wikipedia title estimation task and analyse the internal behaviour. | 10.1109/aipr.2018.8707407 | [
"https://arxiv.org/pdf/1810.03595v2.pdf"
] | 52,936,759 | 1810.03595 | 44b8e6b5404b31c6804b6a61ffb17f164a14a22c |
End-to-End Text Classification via Image-based Embedding using Character-level Networks
10 Oct 2018
Shunsuke Kitada shunsuke.kitada.8y@stu.
Hitoshi IYATOMI Major in Applied Informatics
Graduate School of Science and Engineering Hosei University Tokyo
Japan
Ryunosuke Kotani ryunosuke.kotani.58@
Hitoshi IYATOMI Major in Applied Informatics
Graduate School of Science and Engineering Hosei University Tokyo
Japan
End-to-End Text Classification via Image-based Embedding using Character-level Networks
10 Oct 2018. Index Terms: text classification, image-based character embedding, convolutional neural networks
For analysing and/or understanding languages having no word boundaries based on morphological analysis such as Japanese, Chinese, and Thai, it is desirable to perform appropriate word segmentation before word embeddings. But it is inherently difficult in these languages. In recent years, various language models based on deep learning have made remarkable progress, and some of these methodologies utilizing characterlevel features have successfully avoided such a difficult problem. However, when a model is fed character-level features of the above languages, it often causes overfitting due to a large number of character types. In this paper, we propose a CE-CLCNN, character-level convolutional neural networks using a character encoder to tackle these problems. The proposed CE-CLCNN is an end-to-end learning model and has an image-based character encoder, i.e. the CE-CLCNN handles each character in the target document as an image. Through various experiments, we found and confirmed that our CE-CLCNN captured closely embedded features for visually and semantically similar characters and achieves state-of-the-art results on several open document classification tasks. In this paper we report the performance of our CE-CLCNN with the Wikipedia title estimation task and analyse the internal behaviour.
I. INTRODUCTION
Overfitting is one of the most essential problems in machine learning. Various regularization methods have been proposed in order to improve generalization performance, especially in deep networks. Data augmentation is the most common way to improve the generalization performance of a system by increasing the training dataset in a pseudo manner. In natural language processing (NLP) tasks, various data augmentation methods have also been proposed, such as synonym lists [1], grammar induction [2], task-specific heuristic rules [3], and contextual augmentation [4]. However, these methods basically require appropriate word segmentation and semantic analysis of the context in advance, which are inherently difficult in Asian languages, especially Japanese, Chinese or Thai. In recent years, various language models based on deep learning have made remarkable progress, and some of these methodologies utilizing character-level features have successfully avoided such problems [1], [5].

From the model selection point of view, recurrent neural networks (RNN) have been widely applied in NLP tasks; however, they have a significant problem in learning long text sequences. The recent introduction of long short-term memory (LSTM) [6] and gated recurrent units (GRU) [7] alleviated this issue, and they are commonly used in the NLP field. However, they still have drawbacks, such as difficulty in parallelization. Character-level convolutional neural networks (CLCNN) [1], i.e. one-dimensional convolutional neural networks (CNN), also accept long text sequences and, in addition, their training speed is generally faster than that of LSTM and GRU thanks to their structure and the ease of parallelization [8], [9]. However, there are still problems remaining when dealing with the aforementioned languages. When a model is fed character-level features (e.g. one-hot vectors or other common embeddings) of the languages above, it often causes overfitting due to the large number of unique characters. For example, Japanese and Chinese have over 2,000 types of characters in common use. We need to tackle this problem as well.

Fortunately, a not insignificant number of Kanji and Han characters used in Japanese and Chinese are ideograms, which means that their character shapes represent their meanings. Therefore, capturing the shape features of characters in the document is meaningful for better understanding the contents. Based on this hypothesis, several studies have been proposed recently. Shimada et al. [10] proposed an epoch-making scheme called image-based character embedding, in which they treat each character in the target document as an image. Their model learns a low-dimensional character embedding with a convolutional auto-encoder (CAE) [11], and then the relationship between the sequence of embeddings and the document category is trained with the following CLCNN. They also proposed a simple and very effective data augmentation technique called wildcard training. Wildcard training randomly drops out [12] arbitrary elements in the embedded domain at the time of training of the CLCNN. This data augmentation method greatly improved system generalization performance without requiring morphological analysis. They confirmed that wildcard training improves the document classification accuracy by about 10% in their evaluation, using open and private datasets. Lately, Zhang et al. [13] also proposed a similar semantic dropout for word representations and reported its effectiveness.
On the other hand, since their model learns the CAE and the CLCNN separately, it cannot fully exploit the merits of image-based character embedding; further improvement in performance can be expected. Liu et al. [14] proposed an end-to-end document classification model that learns character embeddings using a CNN-based character encoder and classifies documents using a GRU on Chinese, Japanese, and Korean documents. Unlike Shimada's model [10], their model is not trained to preserve the shape features of characters explicitly, but they demonstrated that characters with similar shape features are embedded close to each other in the character representation. Su et al. [15] proposed glyph-enhanced word embedding (GWE) to focus on the shape of Kanji. The basic strategy is the same as in [10]: GWE extracts the shape information of the character and uses it for training the word representation. They also performed image-based character embedding on a Chinese document with a CAE and showed that characters with similar shape features are represented by similar character representations. In these studies [14], [15], image-based character embedding showed promising performance, while there is still room for improvement from the viewpoint of introducing data augmentation in which the model inputs take advantage of the features of the character image. Against this background, in this paper we propose a new "character encoder character-level convolutional neural network" (CE-CLCNN) model. The proposed CE-CLCNN is an end-to-end learning model and has an image-based character encoder. Due to this architecture, our CE-CLCNN has the following desirable features:
1) It is freed from intractable morphological analysis.
2) It learns and obtains character embeddings associated with character appearance.
3) It is capable of suitable data augmentation methods both in the image space and in the embedded feature space.
By introducing these two essentially different types of data augmentation, the robustness of the model is enhanced and the performance on the document classification task is significantly improved.
II. CE-CLCNN
The outline of the proposed CE-CLCNN is shown in Fig. 1. CE-CLCNN is made up of two different CNN consolidations. The first CNN acts as a character encoder (CE) that learns character representations from character images, and the second CNN, CLCNN, performs document classification. The parameters of these two consecutive networks are optimized by the backpropagation with the cross entropy error function as the objective function.
A. Character encoder by CNN
Firstly, each character of the target document is converted to an image of 36 × 36 pixels. The CE embeds (i.e. encodes) each character image into a feature vector of d CE dimensions. Table I shows the architecture of the CE used in this instance.
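To make the encoder concrete, the following is a minimal PyTorch sketch that follows the layer configuration of Table I; the class name, the choice of PyTorch, and the use of unpadded (valid) convolutions are assumptions made here for illustration rather than details given in the paper.

```python
import torch
import torch.nn as nn

class CharacterEncoder(nn.Module):
    """Embed a 36x36 character image into a d_CE-dimensional vector (cf. Table I)."""

    def __init__(self, d_ce: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3), nn.ReLU(),   # 36x36 -> 34x34
            nn.MaxPool2d(2),                              # -> 17x17
            nn.Conv2d(32, 32, kernel_size=3), nn.ReLU(),  # -> 15x15
            nn.MaxPool2d(2),                              # -> 7x7
            nn.Conv2d(32, 32, kernel_size=3), nn.ReLU(),  # -> 5x5, i.e. 32*5*5 = 800 units
        )
        self.fc = nn.Sequential(
            nn.Linear(800, d_ce), nn.ReLU(),
            nn.Linear(d_ce, d_ce), nn.ReLU(),
        )

    def forward(self, char_images: torch.Tensor) -> torch.Tensor:
        # char_images: (batch * C, 1, 36, 36) -- one grayscale image per character
        h = self.features(char_images)
        return self.fc(h.flatten(start_dim=1))
```

With these unpadded 3 × 3 convolutions, the flattened feature size comes out to exactly the 800 units expected by layer 6 in Table I.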
B. Document classifier by CLCNN
The character representations of d CE bits per character encoded by the CE are reshaped back to the batch size B with a character string length of C. Then the representations are input to the CLCNN. Note that, with reference to prior work [16], we use convolutions with stride s rather than the pooling operations that are widely used in natural language processing. Table II shows the architecture of the CLCNN used in this instance.
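A corresponding sketch of the document classifier, again in PyTorch and with the layer sizes of Table II, might look as follows; the use of nn.LazyLinear to infer the flattened size (5120 = 512 channels × 10 remaining positions for the input length used in the paper) is an assumption for convenience.

```python
import torch
import torch.nn as nn

class CLCNN(nn.Module):
    """1-D convolutional document classifier over CE character embeddings (cf. Table II)."""

    def __init__(self, num_classes: int, d_ce: int = 128):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv1d(d_ce, 512, kernel_size=3, stride=3), nn.ReLU(),  # strided conv instead of pooling
            nn.Conv1d(512, 512, kernel_size=3, stride=3), nn.ReLU(),
            nn.Conv1d(512, 512, kernel_size=3), nn.ReLU(),
            nn.Conv1d(512, 512, kernel_size=3),
        )
        self.classifier = nn.Sequential(
            nn.LazyLinear(1024),           # 5120 -> 1024 for the sequence length used in the paper
            nn.Linear(1024, num_classes),
        )

    def forward(self, embedded_chars: torch.Tensor) -> torch.Tensor:
        # embedded_chars: (B, d_ce, sequence_length) -- CE outputs stacked per document
        h = self.convs(embedded_chars)
        return self.classifier(h.flatten(start_dim=1))
```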
C. Data augmentation on input space and feature space
Convolutional neural networks are known to require a large amount of diverse training data. Our CE-CLCNN model has the capability to perform data augmentation both in the input space and in the feature space thanks to its end-to-end structure.
In the input space, we apply random erasing data augmentation (RE) [17] to the character image that will be fed to the CE. Each character image is randomly masked with noise on a rectangular area, and thus a part of the character is occluded, as shown in Fig. 2. In the embedded feature space, we apply wildcard training (WT) [10], which randomly drops out part of the embedded representation (i.e. some elements of the encoded vector) with ratio γ_w.
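The two augmentations can be sketched as follows (NumPy; the parameter defaults follow Table III and γ_w = 0.1, while the noise fill and the exact rectangle sampling are assumptions, since the paper refers to [17] and [10] for those details):

```python
import numpy as np

def random_erase(char_img, p=0.3, area_min=0.02, area_max=0.4,
                 aspect_min=0.3, aspect_max=2.0, rng=np.random):
    """Random erasing [17] on one character image: occlude a random rectangle with noise."""
    if rng.rand() > p:
        return char_img
    h, w = char_img.shape
    area = h * w * rng.uniform(area_min, area_max)
    aspect = rng.uniform(aspect_min, aspect_max)
    eh = min(h, max(1, int(round(np.sqrt(area * aspect)))))
    ew = min(w, max(1, int(round(np.sqrt(area / aspect)))))
    top, left = rng.randint(0, h - eh + 1), rng.randint(0, w - ew + 1)
    out = char_img.copy()
    out[top:top + eh, left:left + ew] = rng.rand(eh, ew)  # fill the rectangle with noise
    return out

def wildcard_training(embeddings, gamma_w=0.1, rng=np.random):
    """Wildcard training [10]: randomly zero out elements of the embedded representation."""
    mask = rng.rand(*embeddings.shape) >= gamma_w
    return embeddings * mask
```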
III. EXPERIMENTS
A. Implementation
The number of embedding dimensions and the chunk size of characters were set to d CE = 128 and C = 10, respectively. Table III summarizes the parameters used in random erasing data augmentation. The ratio in wildcard training was set to γ_w = 0.1. In the CLCNN, the batch size of the embedded characters during training was B = 256, and Adam [18] was used for parameter optimization.
B. Category estimation of Wikipedia titles
In this paper, we evaluate our proposed CE-CLCNN using an open dataset for category estimation of Wikipedia titles.
The Wikipedia title dataset [14] contains article titles acquired from Wikipedia and the related topic class labels. The dataset includes 12 classes: Geography, Sports, Arts, Military, Economics, Transportation, Health Science, Education, Food Culture, Religion and Belief, Agriculture, and Electronics. In this experiment, we used the Japanese data subset (206,313 titles in total). For training of the model, we split the dataset into training and testing sets with an 8:2 ratio. Zero padding was performed for titles with less than 10 characters so that the input sentence would be 10 characters or more. Table IV shows the results. To the best of our knowledge, the proposed CE-CLCNN showed state-of-the-art performance on this dataset. Shimada's method [10] showed about 4% better performance than the later-proposed method of Liu et al. [14], thanks to their WT with its highly effective generalization. The proposed CE-CLCNN with RE and WT showed even better performance by about 4%. According to Table IV, the performance of the native CE-CLCNN (i.e. without RE and WT) was equivalent to Shimada's CLCNN + WT. Since the performance improvement of CE-CLCNN by the introduction of WT was limited, we can speculate that CE-CLCNN already has sufficient model versatility in the embedded space. On the other hand, the effect of introducing RE was clear, with a 3-3.5% gain.
Furthermore, we extracted the character representations of Chinese characters using the learned CE and then projected the representations onto a 2-dimensional space using t-SNE [19]. A part of the visualization result is shown in Fig. 3. We can see that characters with the same components are clustered. Note that Su et al. [15] explicitly learned to preserve character shape features with a CAE, whereas our CE-CLCNN does not explicitly learn character representations that preserve the shape features of characters. In CE-CLCNN, since the document classification loss backpropagates to the CE, which learns the character representation, we found that clusters of semantically similar characters also lie close to each other in the representation space. For example, it can be seen that the character cluster having the "舟" component, representing "boat", and the character cluster having the "魚" component, representing "fish", are closely related.

Fig. 3. Example of character representations in the feature embedding domain obtained by our CE-CLCNN. The 128-dimensional feature space was mapped into 2 dimensions with t-SNE [19] for visualization purposes. We can see that similarly shaped and semantically close characters are located near one another.
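A sketch of this analysis step, assuming the learned character representations are available as a matrix of shape (number of characters, 128), could use scikit-learn's t-SNE; the perplexity and the plotting details are assumptions.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_character_embeddings(embeddings, characters, out_path="char_tsne.png"):
    """Project 128-dim character representations to 2-D with t-SNE [19] and draw each character."""
    coords = TSNE(n_components=2, perplexity=30.0, init="pca").fit_transform(embeddings)
    fig, ax = plt.subplots(figsize=(10, 10))
    for (x, y), ch in zip(coords, characters):
        ax.text(x, y, ch, fontsize=8)
    ax.set_xlim(coords[:, 0].min(), coords[:, 0].max())
    ax.set_ylim(coords[:, 1].min(), coords[:, 1].max())
    fig.savefig(out_path, dpi=200)
```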
IV. CONCLUSION
In this paper, we proposed the new and promising text analysis model CE-CLCNN to solve several conventional problems for languages such as Japanese and Chinese. We confirmed not only its excellent document classification performance, but also its readability in terms of how the model works. In the near future, we would like to investigate our model further and apply it to other languages whose character shapes are related to their meaning.
Here, let k be the kernel size and o be the number of filters. In the training of the CE, C consecutive characters in the document are treated as a chunk. The convolution is performed in a depth-wise manner and each input character is embedded into a 128-dimensional vector. Thus, the input dimension of the CE is 36 × 36 × C and its output is 1 × 128 × C. The CE is trained with batch size B.

Fig. 1. Schematics of our CE-CLCNN model. The CE-CLCNN is made up of a character encoder (CE) component and a document classification component. These two parts are basically composed of a CNN and a CLCNN, respectively, and are directly concatenated.

TABLE I
ARCHITECTURE OF CHARACTER ENCODER (CE)

Layer #   CE configuration
1         Conv(k=(3, 3), o=32) → ReLU
2         Maxpool(k=(2, 2))
3         Conv(k=(3, 3), o=32) → ReLU
4         Maxpool(k=(2, 2))
5         Conv(k=(3, 3), o=32) → ReLU
6         Linear(800, 128) → ReLU
7         Linear(128, 128) → ReLU
TABLE II
ARCHITECTURE OF CHARACTER-LEVEL CONVOLUTIONAL NEURAL NETWORK (CLCNN)

Layer #   CLCNN configuration
1         Conv(k=(1, 3), o=512, s=3) → ReLU
2         Conv(k=(1, 3), o=512, s=3) → ReLU
3         Conv(k=(1, 3), o=512) → ReLU
4         Conv(k=(1, 3), o=512)
5         Linear(5120, 1024)
6         Linear(1024, # classes)
Fig. 2. Example of data augmentation in the image domain (random erasing data augmentation [17]). Note that this is one example of an implementation in this experiment and the augmentation is not limited to this method.
TABLE III
PARAMETERS OF RANDOM ERASING DATA AUGMENTATION

Parameter                 Scale
Erasing probability p     0.3
Max area ratio s_l        0.4
Min area ratio s_h        0.02
Max aspect ratio r_1      2.0
Min aspect ratio r_2      0.3
TABLE IV
RESULTS OF CATEGORY ESTIMATION OF WIKIPEDIA TITLES

Method                              Accuracy [%]
(Proposed) RE + CE-CLCNN + WT       58.4
(Proposed) RE + CE-CLCNN            58.0
(Proposed) CE-CLCNN + WT            55.3
(Proposed) CE-CLCNN                 54.4
CLCNN + WT † [10]                   54.7
CLCNN † [10]                        36.2
VISUAL model ‡ [14]                 47.8
LOOKUP model ‡ [14]                 49.1
Ensemble (VISUAL + LOOKUP) ‡ [14]   50.3

† Not published
‡ Referring to Liu et al. [14]

C. Analysis of Character Encoder

Table V shows examples of similar characters in the CE-encoded feature embedding domain, obtained with the 5-nearest-neighbour method. Many of the neighbouring characters of a query character share shape features such as radicals (i.e. character components). Therefore, it was confirmed that the character encoder learned to capture the shape features of characters.
VFIVE CHARACTERS NEAR THE CHARACTER REPRESENTATION FOR THE QUERY CHARACTERQuery character
Neighbouring character
Euclidean distance
with query character
鮫
鰭
370.1
駮
403.7
鮪
405.2
鰐
409.4
鰤
409.6
痛
癨
317.2
癜
388.3
瘻
398.3
痕
398.9
痴
399.2
披
彼
452.8
擅
491.5
擔
520.5
擒
533.8
捗
536.8
Character-level convolutional networks for text classification. X Zhang, J Zhao, Y Lecun, Advances in neural information processing systems. X. Zhang, J. Zhao, and Y. LeCun, "Character-level convolutional net- works for text classification," Advances in neural information processing systems, pp. 649-657, 2015.
Data recombination for neural semantic parsing. R Jia, P Liang, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational Linguistics1R. Jia and P. Liang, "Data recombination for neural semantic parsing," Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics, vol. 1, pp. 12-22, 2016.
Data augmentation for morphological reinflection. M Silfverberg, A Wiemerslage, L Liu, L J Mao, Proceedings of the CoNLL SIG-MORPHON 2017 Shared Task: Universal Morphological Reinflection. the CoNLL SIG-MORPHON 2017 Shared Task: Universal Morphological ReinflectionM. Silfverberg, A. Wiemerslage, L. Liu, and L. J. Mao, "Data augmen- tation for morphological reinflection," Proceedings of the CoNLL SIG- MORPHON 2017 Shared Task: Universal Morphological Reinflection, pp. 90-99, 2017.
Contextual augmentation: Data augmentation by words with paradigmatic relations. S Kobayashi, NAACL-HLT. S. Kobayashi, "Contextual augmentation: Data augmentation by words with paradigmatic relations," NAACL-HLT, 2018.
Character-aware neural language models. Y Kim, Y Jernite, D Sontag, A M Rush, Y. Kim, Y. Jernite, D. Sontag, and A. M. Rush, "Character-aware neural language models," 2016.
Long short-term memory. S Hochreiter, J Schmidhuber, Neural computation. 98S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural computation, vol. 9, no. 8, pp. 1735-1780, 1997.
Empirical evaluation of gated recurrent neural networks on sequence modeling. J Chung, C Gulcehre, K Cho, Y Bengio, arXiv:1412.3555J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," CoRR arXiv:1412.3555, 2014.
Language modeling with gated convolutional networks. Y N Dauphin, A Fan, M Auli, D Grangier, arXiv:1612.08083CoRR preprintY. N. Dauphin, A. Fan, M. Auli, and D. Grangier, "Language modeling with gated convolutional networks," CoRR preprint arXiv:1612.08083, 2016.
Quasi-recurrent neural networks. J Bradbury, S Merity, C Xiong, R Socher, arXiv:1611.01576CoRR preprintJ. Bradbury, S. Merity, C. Xiong, and R. Socher, "Quasi-recurrent neural networks," CoRR preprint arXiv:1611.01576, 2016.
Document classification through image-based character embedding and wildcard training. D Shimada, R Kotani, H Iyatomi, IEEE International Conference on Big Data. D. Shimada, R. Kotani, and H. Iyatomi, "Document classification through image-based character embedding and wildcard training," IEEE International Conference on Big Data, pp. 3922-3927, 2016.
Stacked convolutional auto-encoders for hierarchical feature extraction. J Masci, U Meier, D Cireşan, J Schmidhuber, Artificial Neural Networks and Machine Learning-ICANN 2011. J. Masci, U. Meier, D. Cireşan, and J. Schmidhuber, "Stacked convolu- tional auto-encoders for hierarchical feature extraction," Artificial Neural Networks and Machine Learning-ICANN 2011, pp. 52-59, 2011.
Improving neural networks by preventing co-adaptation of feature detectors. G E Hinton, N Srivastava, A Krizhevsky, I Sutskever, R R Salakhutdinov, arXiv:1207.0580G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, "Improving neural networks by preventing co-adaptation of feature detectors," CoRR arXiv:1207.0580, 2012.
Word embedding perturbation for sentence classification. D Zhang, Z Yang, arXiv:1804.08166CoRR preprintD. Zhang and Z. Yang, "Word embedding perturbation for sentence classification," CoRR preprint arXiv:1804.08166, 2018.
Learning character-level compositionality with visual features. F Liu, H Lu, C Lo, G Neubig, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational Linguistics1Long Papers)F. Liu, H. Lu, C. Lo, and G. Neubig, "Learning character-level com- positionality with visual features," Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), vol. 1, pp. 2059-2068, 2017.
Learning chinese word representations from glyphs of characters. T Su, H Lee, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingT. Su and H. Lee, "Learning chinese word representations from glyphs of characters," Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, pp. 264-273, 2017.
Deconvolutional paragraph representation learning. Y Zhang, D Shen, G Wang, Z Gan, R Henao, L Carin, Advances in Neural Information Processing Systems. Y. Zhang, D. Shen, G. Wang, Z. Gan, R. Henao, and L. Carin, "De- convolutional paragraph representation learning," Advances in Neural Information Processing Systems, pp. 4169-4179, 2017.
Random erasing data augmentation. Z Zhong, L Zheng, G Kang, S Li, Y Yang, arXiv:1708.04896Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang, "Random erasing data augmentation," CoRR arXiv:1708.04896, 2017.
Adam: A method for stochastic optimization. D Kingma, J Ba, arXiv:1412.6980D. Kingma and J. Ba, "Adam: A method for stochastic optimization," CoRR arXiv:1412.6980, 2014.
Visualizing data using t-sne. L V D Maaten, G Hinton, Journal of machine learning research. 9L. v. d. Maaten and G. Hinton, "Visualizing data using t-sne," Journal of machine learning research, vol. 9, no. Nov, pp. 2579-2605, 2008.
| [] |
[
"U T F II Language Segmentation Erklärung zur Masterarbeit",
"U T F II Language Segmentation Erklärung zur Masterarbeit"
] | [
"C D H ",
"David A "
] | [] | [] | Hiermit erkläre ich, dass ich die Masterarbeit selbstständig verfasst und keine anderen als die angegebenen ellen und Hilfsmiel benutzt und die aus fremden ellen direkt oder indirekt übernommenen Gedanken als solche kenntlich gemacht habe.Die Arbeit habe ich bisher keinem anderen Prüfungsamt in gleicher oder vergleichbarer Form vorgelegt. Sie wurde bisher nicht veröffentlicht.DatumUnterschri i Abstract Language segmentation consists in finding the boundaries where one language ends and another language begins in a text wrien in more than one language. is is important for all natural language processing tasks.e problem can be solved by training language models on language data. However, in the case of low-or no-resource languages, this is problematic. I therefore investigate whether unsupervised methods perform beer than supervised methods when it is difficult or impossible to train supervised approaches.A special focus is given to difficult texts, i.e. texts that are rather short (one sentence), containing abbreviations, low-resource languages and non-standard language.I compare three approaches: supervised n-gram language models, unsupervised clustering and weakly supervised n-gram language model induction. I devised the weakly supervised approach in order to deal with difficult text specifically. In order to test the approach, I compiled a small corpus of different text types, ranging from one-sentence texts to texts of about 300 words.e weakly supervised language model induction approach works well on short and difficult texts, outperforming the clustering algorithm and reaching scores in the vicinity of the supervised approach. e results look promising, but there is room for improvement and a more thorough investigation should be undertaken.ii | null | [
"https://arxiv.org/pdf/1510.01717v1.pdf"
] | 25,644,249 | 1510.01717 | 78a8284edf60e1da5f22b79e51a0758aef68f18b |
U T F II Language Segmentation Erklärung zur Masterarbeit
August 18, 2015
C D H
David A
U T F II Language Segmentation Erklärung zur Masterarbeit
August 18, 2015. Author: David A. Supervisors: Prof. Dr. Caroline S, Dr. Sven N
I hereby declare that I have written this Master's thesis independently and have used no sources or aids other than those indicated, and that I have marked as such all ideas taken directly or indirectly from other sources. I have not submitted this thesis to any other examination office in the same or a comparable form, and it has not been published before. Date, Signature.

Abstract

Language segmentation consists in finding the boundaries where one language ends and another language begins in a text written in more than one language. This is important for all natural language processing tasks. The problem can be solved by training language models on language data. However, in the case of low- or no-resource languages, this is problematic. I therefore investigate whether unsupervised methods perform better than supervised methods when it is difficult or impossible to train supervised approaches. A special focus is given to difficult texts, i.e. texts that are rather short (one sentence), containing abbreviations, low-resource languages and non-standard language. I compare three approaches: supervised n-gram language models, unsupervised clustering and weakly supervised n-gram language model induction. I devised the weakly supervised approach in order to deal with difficult text specifically. In order to test the approach, I compiled a small corpus of different text types, ranging from one-sentence texts to texts of about 300 words. The weakly supervised language model induction approach works well on short and difficult texts, outperforming the clustering algorithm and reaching scores in the vicinity of the supervised approach. The results look promising, but there is room for improvement and a more thorough investigation should be undertaken.
Introduction
Language segmentation and identification are important for all natural language processing operations that are language-specific, such as taggers, parsers or machine translation (Jain and Bhat, 2014; Zubiaga et al., 2014). Indeed, using "traditional" monolingual natural language processing components on mixed language data leads to miserable results (Jain and Bhat, 2014). Even if the results are not terrible, language identification and segmentation can improve the overall results. For example, by identifying foreign language inclusions in an otherwise monolingual text, parser accuracy can be increased.
One important point that has to be borne in mind is the difference between language identification and language segmentation. Language identification is concerned with recognizing the language at hand. It is possible to use language identification for language segmentation. Indeed, by identifying the languages in a text, the segmentation is implicitly obtained. Language segmentation on the other hand is only concerned with identifying language boundaries. No claims about the languages involved are made.
After giving an overview of related work and the different approaches that can be taken for language segmentation, I will present the theory behind supervised methods as well as unsupervised methods. Finally, I will introduce a weakly supervised method for language segmentation that I developed.
After the theoretical part, I will present experiments done with the different approaches, comparing their effectiveness on the task of language segmentation on different text types. A special focus will be given to difficult text types, such as short texts, texts containing under-resourced languages or texts containing a lot of abbreviations or other non-standard features.
A big advantage of unsupervised methods is language independence. If the approach used does not rely on language-specific details, the approach is more flexible, as no language resources have to be adapted for the method to work on other languages. These advantages might be especially useful for under-resourced languages. When there is no or insufficient data available to train a supervised language model, an unsupervised approach might yield better results.
Another advantage is that unsupervised methods do not require prior training. They are not dependent on training data and thus cannot be skewed by the data. Indeed, supervised approaches that are trained on data are qualitatively tied to their training data; different training data will, in all probability, yield different models.
This thesis aims at answering the question whether unsupervised language segmentation approaches work better on difficult text types than supervised language approaches.
2 Related work

2.1 N-Grams and rank order statistics

Cavnar and Trenkle (1994) use an n-gram language model for language identification purposes. Their program 'Textcat' is intended to classify documents by language. The system calculates n-grams for 1 ≤ n ≤ 5 from training data and orders the n-grams according to inverse frequency, i.e. from the most frequent n-grams to the most infrequent n-grams. The numerical frequency data is then discarded and only the rank ordering remains implicitly present.
During training, the program calculates an n-gram profile consisting of these n-gram lists for each category (i.e. each language to classify).
New data is classified by first calculating its n-gram profile and then comparing the profile to the existing profiles. The category with the lowest difference score is taken as the category for the document. The score they use for classification is called the out-of-place metric. For each n-gram in the document n-gram profile, the corresponding n-gram in the category profile is looked up and the absolute difference of ranks is taken as the score. The sum is calculated over all n-grams. More formally, the out-of-place metric m_oop is calculated as:
m_{oop} = \sum_{i=1}^{n} \left| r(x_i, d) - r(x_i, c) \right| \qquad (1)
with n the number of n-grams in the document profile, x_i the i-th n-gram, r(x_i, d) the rank of the i-th n-gram in the document profile, and r(x_i, c) the rank of the i-th n-gram in the category profile. Figure 1 illustrates the out-of-place metric. In figure 1, the document profile has 'ER' as the most frequent n-gram, at rank 1, followed by 'ING' at rank 2, etc. The category profile does not contain the n-gram 'ER'; in that case, an arbitrary fixed maximum value is assigned. The category profile contains the n-gram 'ING' at rank 2, the same rank as in the document profile; the difference is 0. The category profile contains the n-gram 'AT' at rank 1, while in the document profile it occurs at rank 3; the absolute difference is 2. The out-of-place metric consists of the sum of all scores thus calculated. Cavnar and Trenkle (1994) collected 3713 Usenet texts with a cultural theme in different languages. They filtered out non-monolingual texts and texts that had no useful content for language classification. In the end, they had 3478 articles ranging from a single line of text to 50 KB of text.
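As an illustration, the out-of-place metric can be implemented in a few lines of Python; the fixed penalty for n-grams missing from the category profile is an arbitrary value here, since its exact size is not specified above.

```python
def out_of_place(doc_profile, cat_profile, max_penalty=1000):
    """Out-of-place metric (equation 1): sum of absolute rank differences between
    the document profile and the category profile (both ordered most- to least-frequent)."""
    cat_rank = {ngram: rank for rank, ngram in enumerate(cat_profile, start=1)}
    score = 0
    for doc_rank, ngram in enumerate(doc_profile, start=1):
        score += abs(doc_rank - cat_rank[ngram]) if ngram in cat_rank else max_penalty
    return score

# The worked example above: 'ER' is missing from the category profile,
# 'ING' matches at the same rank (difference 0), 'AT' differs by 2 ranks.
assert out_of_place(['ER', 'ING', 'AT'], ['AT', 'ING']) == 1000 + 0 + 2
```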
Their results indicated that length had no significant impact on the classification, contrary to what they thought. Also, they found that training the system with 400 n-grams yielded the best result, with a precision of 99.8%.
They also showed that their approach could be used for subject classification of texts in the same language with reasonable precision. This finding indicates that language and domain are linked to a certain degree.

N-Grams and maximum likelihood estimator

Dunning (1994) also uses an n-gram language model for language identification purposes. The program calculates n-grams and their frequencies from the training data and estimates the probability P of a given string using the Maximum Likelihood Estimator (MLE) with Laplace add-one smoothing. More formally:

P(w_i \mid w_1, \ldots, w_{i-1}) = \frac{C(w_1, \ldots, w_i) + 1}{C(w_1, \ldots, w_{i-1}) + |V|} \qquad (2)
with C(w_1, ..., w_i) the number of times the n-gram w_1, ..., w_i occurred, C(w_1, ..., w_{i-1}) the number of times the (n-1)-gram w_1, ..., w_{i-1} occurred, and |V| the size of the vocabulary.
For a string S, the string is decomposed into n-grams and the log probability l k is calculated as:
l_k = \sum_{w_1, \ldots, w_k \in S} C(w_1, \ldots, w_k) \log P(w_k \mid w_1, \ldots, w_{k-1}) \qquad (3)
where k is the order of the n-gram (k = n) used. In order to test the system, Dunning (1994) uses a specially constructed test corpus from a bilingual parallel translated English-Spanish corpus containing English and Spanish texts with 10 texts varying from 1000 to 50000 bytes for the training set and 100 texts varying from 10 to 500 bytes for the test set.
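A compact sketch of such a character n-gram model with add-one smoothing (equations 2 and 3) is given below; the choice of Python and the exact decomposition of text into character n-grams are assumptions.

```python
import math
from collections import Counter

def train_char_ngram_model(text, n=3):
    """Count character n-grams and (n-1)-gram contexts from monolingual training text."""
    ngrams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    contexts = Counter(text[i:i + n - 1] for i in range(len(text) - n + 2))
    return ngrams, contexts, len(set(text))

def log_probability(string, model, n=3):
    """Log probability of a string under the model, using Laplace add-one smoothing."""
    ngrams, contexts, vocab_size = model
    total = 0.0
    for i in range(len(string) - n + 1):
        gram = string[i:i + n]
        p = (ngrams[gram] + 1) / (contexts[gram[:-1]] + vocab_size)
        total += math.log(p)
    return total

# Classification then picks the language whose model assigns the highest log probability.
```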
The results indicate that bigram models perform better for shorter strings and less training data, while trigram models work better for larger strings and more training data. Dunning (1994) criticizes Cavnar and Trenkle (1994) for saying that their system would be insensitive to the length of the string to be classified, as the shortest text they classified was about 50 words. The system implemented by Dunning (1994) can classify strings of 10 characters in length "moderately well", while strings of 50 characters or more are classified "very well". Accuracies given vary from 92% for 20 bytes of training data to 99.9% for 500 bytes of text.

Trigrams and short words

Grefenstette (1995) compares trigrams versus short words for language identification. Short words are often function words that are typical for and highly frequent in a given language.
The trigram language guesser was trained on one million characters of text in 10 languages: Danish, Dutch, English, French, German, Italian, Norwegian, Portuguese, Spanish and Swedish. From the same texts, all words with 5 or fewer characters were counted for the short-word strategy.
The results indicate that the trigram approach works better for small text fragments of up to 15 words, while for any text longer than 15 words both methods work equally well, with reported accuracies of up to 100% in the 11-15 word range.

N-Grams and clustering

Gao et al. (2001) present a system that augments n-gram language models with clustering techniques. They cluster words by similarity and use these clusters in order to overcome the data sparsity problem.
In traditional cluster-based n-gram models, the probability P (w i ) is defined as the product of the probability of a word given a cluster c i and the probability of the cluster c i given the preceding clusters. For a trigram model, the probability P (w i ) of a word w i is calculated as
P(w_i \mid w_{i-2} w_{i-1}) = P(w_i \mid c_i) \times P(c_i \mid c_{i-2} c_{i-1}) \qquad (4)
The probability of a word given a cluster is calculated as
P(w_i \mid c_i) = \frac{C(w_i)}{C(c_i)} \qquad (5)
with C(w_i) the count of the word w_i and C(c_i) the count of the cluster c_i. The probability of a cluster given the preceding clusters is calculated using the Maximum Likelihood Estimator, as in equation (6) below. Gao et al. (2001) derive from this three ways of using clusters to augment language models: predictive clustering (7), conditional clustering (8) and combined clustering (9).
P(c_i \mid c_{i-2} c_{i-1}) = \frac{C(c_{i-2} c_{i-1} c_i)}{C(c_{i-2} c_{i-1})} \qquad (6)

P(w_i \mid w_{i-2} w_{i-1}) = P(c_i \mid w_{i-2} w_{i-1}) \times P(w_i \mid w_{i-2} w_{i-1} c_i) \qquad (7)

P(w_i \mid w_{i-2} w_{i-1}) = P(w_i \mid c_{i-2} c_{i-1}) \qquad (8)

P(w_i \mid w_{i-2} w_{i-1}) = P(c_i \mid c_{i-2} c_{i-1}) \times P(w_i \mid c_{i-2} c_{i-1} c_i) \qquad (9)
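Assuming a word-to-cluster mapping and the relevant count tables are available, equations (4)-(6) can be computed as in the following sketch (the names of the count tables are assumptions made here):

```python
def cluster_trigram_prob(w, c_prev2, c_prev1, word2cluster,
                         word_counts, cluster_counts,
                         cluster_trigram_counts, cluster_bigram_counts):
    """P(w_i | w_{i-2}, w_{i-1}) = P(w_i | c_i) * P(c_i | c_{i-2}, c_{i-1}), eq. (4)-(6)."""
    c = word2cluster[w]
    p_word_given_cluster = word_counts[w] / cluster_counts[c]                 # eq. (5)
    p_cluster = (cluster_trigram_counts[(c_prev2, c_prev1, c)]
                 / cluster_bigram_counts[(c_prev2, c_prev1)])                 # eq. (6)
    return p_word_given_cluster * p_cluster                                   # eq. (4)
```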
Similarly, Dreyfuss et al. (2007) use clustering to cluster words by their context in order to improve trigram language models. In addition to Gao et al. (2001), they also use information about the subject-verb and verb-object relations of the sentence.
They show that their model, using clustering, subject-verb information, verb-object information, and the Porter stemmer, outperforms a traditional trigram model. Carter (1994) clusters training sentences (i.e. the corpus) into subcorpora of similar sentences and calculates separate language model parameters for each subcorpus in order to capture contextual information. In contrast to other works, Carter (1994) clusters sentences instead of single words (compare Pereira et al. (1993) and Ney et al. (1994)). Carter (1994) shows that the subdivision into smaller clusters increases the accuracy of bigram language models, but not of trigram models.
Inclusion detection
Beatrice Alex (cf. Alex (2005, 2006); Alex and Onysko (2010)) addresses the problem of English inclusions in mainly non-English texts. For the language pair German-English, inclusions are detected using a German and an English lexicon as the first resource. If a word is found only in the English lexicon, it is tagged as unambiguously English. If the word is found in neither lexicon, a web search is conducted, restricting the search options to either German or English and counting the number of results. If the German search yields more results, the word is tagged as German, otherwise as an English inclusion. If a word is found in both lexicons, a postprocessing module resolves the ambiguity.
Alex is mainly concerned with the improvement of parsing results by inclusion detection. For example, they report an increase in F-score of 4.3 by using inclusion detection when parsing a German text with a parser trained on the TIGER corpus (Brants et al., 2002).
Clustering and speech
In the area of clustering and spoken language identification, Yin et al. (2007) present a hierarchical clusterer for spoken language. They cluster 10 languages 1 using prosodic features and Mel Frequency Cepstral Coefficients (MFCC). MFCC vectors are a way of representing acoustic signals (Logan et al., 2000). The signal is first divided into smaller 'frames', each frame is passed through the discrete Fourier transform and only the logarithm of the amplitude spectrum is retained (Logan et al., 2000). The spectrum is then projected onto the 'Mel frequency scale', a scale that maps actual pitch to perceived pitch, "as apparently the human auditory system does not perceive pitch in a linear manner" (Logan et al., 2000). Finally, a discrete cosine transform is applied to the spectrum to get the MFCC representations of the original signal (Logan et al., 2000). Yin et al. (2007) show that their hierarchical clusterer outperforms traditional Acoustic Gaussian Mixture Model systems.
As spoken language will not be further investigated in this thesis, I will not dive deeper into the matter at this point.
Monolingual training data
Yamaguchi and Tanaka-Ishii (2012), King and Abney (2013) and Lui et al. (2014) use monolingual training data in order to train a system capable of recognizing the languages in a multilingual text.

Yamaguchi and Tanaka-Ishii (2012) use a dynamic programming approach to segment a text by language. Their test data contains fragments of 40 to 160 characters, and they achieve F-scores of 0.94 on the relatively 'closed' data set of the Universal Declaration of Human Rights 2 and 0.84 on the more 'open' Wikipedia data set. However, the approach is computationally intensive, not to say prohibitive; while Yamaguchi and Tanaka-Ishii (2012) self-report a processing time of 1 second for an input of 1000 characters, Lui et al. (2014) found that with 44 languages, the approach by Yamaguchi and Tanaka-Ishii (2012) takes almost 24 hours to complete the computation on a 16-core workstation. King and Abney (2013) use weakly supervised methods to label the languages of words. They consider the task as a sequence labeling task. They have limited themselves to bilingual documents with a single language boundary, and the task consists in discriminating between English and non-English text. They found that a Conditional Random Field model augmented with Generalized Expectation criteria worked best, yielding accuracies of 88% with as little as 10 words used for training. Lui et al. (2014) consider the task as a multi-label classification task. They represent a document as an n-gram distribution of byte sequences in a bag-of-words manner. They report F-scores of 0.957 and 0.959. They note that similar languages will pose problems when trying to identify a language, and solve this problem by identifying a set of languages that most probably are correct instead of a single language.
One problem that all of these approaches have is that they need to know the languages that will occur in the test data (King and Abney, 2013; Lui et al., 2014).
Predictive suffix trees
Seldin et al. (2001) propose a system for automatic unsupervised language segmentation and protein sequence segmentation. Their system uses Variable Memory Markov (VMM) sources, an alternative to Hidden Markov Models (HMM), implemented as Predictive Suffix Trees (PST).

Whereas HMMs require substantial amounts of training data and a deep understanding of the problem in order to restrict the model architecture, VMMs are simpler and less expressive than HMMs, but have been shown to "solve many applications with notable success" (Begleiter et al., 2004). In contrast to n-gram models that estimate the probability of w as P(w|N) with N the context (typically the n previous words), VMMs can vary N as a function of the available context (Begleiter et al., 2004). Thus, they can capture both small and large order dependencies, depending on the training data (Begleiter et al., 2004).
There is no single VMM algorithm, but rather a family of related algorithms. One of these algorithms is called Predictive Suffix Tree (PST) (Ron et al., 1996). A PST is a tree over an alphabet Σ, with each node either having 0 (leaf nodes) or |Σ| children (non-terminal nodes) (Ron et al., 1996). Each node is labeled with the result of the walk from that node up to the root (Ron et al., 1996). Each edge is labeled by a symbol s ∈ Σ and the probability for the next symbol being s (Ron et al., 1996).
By modifying the Predictive Suffix Tree (PST) algorithm using the Minimum Description Length (MDL) principle, Seldin et al. (2001) end up with a non-parametric self-regulating algorithm. The MDL principle avoids overfitting of the model by favoring low complexity over goodness-of-fit (Grünwald, 2007). They embed the algorithm in a deterministic annealing (DA) procedure to refine the results. Finally, they use the Blahut-Arimoto algorithm, a rate-distortion function, until convergence of the system. For the language segmentation task, they use 150000 letters of text, 30000 from each of the following languages: English, German, French, Italian, transliterated Russian. They used continuous language fragments of approximately 100 letters, yielding a synthetic multilingual text that switches language approximately every two sentences. One important point that they note is that "too short segments do not enable reliable discrimination between different models". Therefore, they disallow switching models after every word.
They report very good results on the language segmentation task (and on the protein segmentation task). After 2000-3000 iterations of the Blahut-Arimoto algorithm, the correct number of languages is identified and the segmentation is accurate up to a few letters.
Theory
Supervised language model
N-Gram models
Among supervised language models, n-gram models are very popular (Gao et al., 2001). An n-gram is a slice from the original string (Cavnar and Trenkle, 1994). These slices can be contiguous or not. Non-contiguous n-grams are also called skip-grams (Guthrie et al., 2006). In skip-grams, an additional parameter k indicates the maximum distance that is allowed between units. In this parlance, contiguous n-grams can be regarded as 0-skip-n-grams (Guthrie et al., 2006). The following example demonstrates the difference between (traditional) n-grams and skip-grams. Given the following sentence:
This is a sample sentence.
We can construct, for example, the following word k-skip-n-grams:

(0-skip-)2-grams: This is, is a, a sample, sample sentence
2-skip-2-grams: This is, This a, This sample, is a, is sample, is sentence, a sample, a sentence, sample sentence
(0-skip-)3-grams: This is a, is a sample, a sample sentence
2-skip-3-grams: This is a, This is sample, This is sentence, This a sample, This a sentence, This sample sentence, is a sample, is a sentence, is sample sentence, a sample sentence

The results for 2-skip-2-grams do not include the skip-gram "This sentence", as the distance in words between these two words is 3, higher than the allowed k of 2. As can be seen from this example, the number of skip-grams is more than two times higher than the number of contiguous n-grams, and this trend continues the more skips are allowed (Guthrie et al., 2006). Skip-grams, unlike n-grams, do not incur the problem of data sparseness with an increase of n.
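Reading k as a bound on the total number of tokens skipped inside one n-gram (the reading that reproduces the lists above), word k-skip-n-grams can be extracted with a small sketch like the following; the function name is my own.

```python
from itertools import combinations

def k_skip_n_grams(tokens, n, k):
    """Return all n-grams in which at most k tokens in total are skipped,
    i.e. first and last word are at most n - 1 + k positions apart."""
    grams = []
    for start in range(len(tokens) - n + 1):
        window = range(start + 1, min(len(tokens), start + n + k))
        for rest in combinations(window, n - 1):
            grams.append(" ".join([tokens[start]] + [tokens[i] for i in rest]))
    return grams

tokens = "This is a sample sentence".split()
print(k_skip_n_grams(tokens, 2, 0))       # the four contiguous bigrams
print(k_skip_n_grams(tokens, 2, 2))       # the nine 2-skip-2-grams above
print(len(k_skip_n_grams(tokens, 3, 2)))  # the ten 2-skip-3-grams above
```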
Instead of using words as the unit for n-gram decompositions, we can also choose characters. Each word is then decomposed into sequences of n characters. For example, the word model can be decomposed into the 2-grams: mo, od, de, el. Often, the word to decompose is padded with start and end tags in order to improve the model (Cavnar and Trenkle, 1994). If we pad the word with <w> and </w>, the 2-gram decomposition yields: <w>m, mo, od, de, el, l</w>. The use of paddings allows the model to capture details about character distribution with regard to the start and end of words (Cavnar and Trenkle, 1994). For example, in English the letter 'y' occurs more often at the end of words than at the beginning of words, while the letter 'w' occurs mainly at the beginning of words (Taylor, 2015). A non-padding model cannot capture this distinction, while a padding model can.
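A minimal sketch of the padded character n-gram decomposition (my own helper, not taken from any of the cited systems):

```python
def char_ngrams(word, n, pad_start="<w>", pad_end="</w>"):
    """Decompose a word into character n-grams, padded with start/end tags
    so that n-grams at the word boundaries are distinguishable."""
    units = [pad_start] + list(word) + [pad_end]
    return [tuple(units[i:i + n]) for i in range(len(units) - n + 1)]

print(char_ngrams("model", 2))
# [('<w>', 'm'), ('m', 'o'), ('o', 'd'), ('d', 'e'), ('e', 'l'), ('l', '</w>')]
```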
One advantage of n-gram models is that the decomposition of a string into smaller units reduces the impact of typing errors (Cavnar and Trenkle, 1994). Indeed, a typing error only affects a limited number of units (Cavnar and Trenkle, 1994). Due to this property, n-gram models have been shown to be able to deal well with noisy text (Cavnar and Trenkle, 1994).
Formal definition
Traditional n-gram language models predict the next word w_i given the previous words w_1, ..., w_{i-1}. This prediction uses the conditional probability P(w_i | w_1, ..., w_{i-1}). Instead of using the entire history w_1, ..., w_{i-1}, the probability is approximated by using only the n−1 previous words w_{i-n+1}, ..., w_{i-1}.
P(w_i | w_1, ..., w_{i-1}) = P(w_i | w_{i-n+1}, ..., w_{i-1})    (10)
The probability can be estimated using the Maximum Likelihood Estimation (MLE):
P(w_i | w_{i-n+1}, ..., w_{i-1}) = C(w_{i-n+1}, ..., w_i) / C(w_{i-n+1}, ..., w_{i-1})    (11)
where C(w_{i-n+1}, ..., w_i) represents the number of times the n-gram sequence w_{i-n+1}, ..., w_i occurred in the training corpus and C(w_{i-n+1}, ..., w_{i-1}) represents the number of times the (n−1)-gram sequence w_{i-n+1}, ..., w_{i-1} was seen in the training corpus.
Smoothing
The problem with MLE is that sequences not seen during training will have a probability of zero. In order to avoid this problem, different smoothing techniques can be used (Chen and Goodman, 1996). The simplest smoothing technique is additive (Laplace) smoothing (Chen and Goodman, 1996). Let V be the vocabulary size (i.e. the total number of unique words in the test corpus). The smoothed probability P_Laplace becomes:
P_Laplace(w_i | w_{i-n+1}, ..., w_{i-1}) = (C(w_{i-n+1}, ..., w_i) + λ) / (C(w_{i-n+1}, ..., w_{i-1}) + λV)    (12)
with λ the smoothing factor. If we choose λ = 1, we speak of "add one" smoothing (Jurafsky and Martin, 2000). In practice, λ < 1 is often chosen (Manning and Schütze, 1999).
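A minimal sketch of equation (12) for a bigram model; the toy corpus is purely illustrative.

```python
from collections import Counter

corpus = "the cat sat on the mat".split()   # toy corpus
vocab = set(corpus)
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_laplace(word, prev, lam=1.0):
    # Equation (12) for n = 2: (C(prev, word) + lambda) / (C(prev) + lambda * V)
    return (bigrams[(prev, word)] + lam) / (unigrams[prev] + lam * len(vocab))

print(p_laplace("cat", "the"))   # seen bigram
print(p_laplace("dog", "the"))   # unseen bigram still receives probability > 0
```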
An important estimation is the Good-Turing estimation (Chen and Goodman, 1996). While not directly a smoothing method, it estimates the frequency of a given observation as
c* = (c + 1) N_{c+1} / N_c    (13)
where c is the number of times the observation was made, N_c is the number of times the frequency c was observed and N_{c+1} the frequency of the frequency c + 1. Thus, instead of using the actual count c, the count is taken to be c* (Chen and Goodman, 1996).
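As an illustrative, made-up example of equation (13): if 100 word types were observed exactly once (N_1 = 100) and 40 types exactly twice (N_2 = 40), the adjusted count for the words seen once becomes c* = (1 + 1) · 40/100 = 0.8, i.e. lower than their raw count of 1; the probability mass freed in this way is what the estimate leaves over for unseen events.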
Another way to avoid assigning probabilities of zero to unseen sequences is by using back-off models. There are linear and non-linear back-off models. In non-linear back-off models, if the original n-gram probability falls below a certain threshold value, the probability is estimated by the next lowest n-gram model. Katz's back-off model (Katz, 1987), for instance, calculates the probability P_bo using the formula:
P_bo(w_i | w_{i-n+1}, ..., w_{i-1}) =
    d_{w_{i-n+1}, ..., w_i} · C(w_{i-n+1}, ..., w_i) / C(w_{i-n+1}, ..., w_{i-1})    if C(w_{i-n+1}, ..., w_i) > k
    α_{w_{i-n+1}, ..., w_{i-1}} · P_bo(w_i | w_{i-n+2}, ..., w_{i-1})                 otherwise
(14)
with d and α as smoothing parameters. The parameter k is often chosen as k = 0. This means that if the probability given a higher-order n-gram model is zero, we back off to the next lowest model. For trigram models, the formula becomes:
P_bo(w_i | w_{i-2}, w_{i-1}) =
    P(w_i | w_{i-2}, w_{i-1})    if C(w_{i-2}, w_{i-1}, w_i) > 0
    α_1 · P(w_i | w_{i-1})       if C(w_{i-2}, w_{i-1}, w_i) = 0 and C(w_{i-1}, w_i) > 0
    α_2 · P(w_i)                 otherwise
(15)
In contrast, linear back-off models use an interpolated probability estimate by combining multiple probability estimates and weighting each estimate. The probability P_LI for a trigram model is:
P_LI(w_i | w_{i-2}, w_{i-1}) = λ_3 P(w_i | w_{i-2}, w_{i-1}) + λ_2 P(w_i | w_{i-1}) + λ_1 P(w_i)    (16)
with Σ_i λ_i = 1.
Unsupervised clustering
Clustering consists in the grouping of objects based on their mutual similarity (Biemann, 2006). Objects to be clustered are typically represented as feature vectors (Biemann, 2006); from the original objects, a feature representation is calculated and used for further processing.
Clustering can be partitional or hierarchical (Yin et al., 2007). Partitional clustering divides the initial objects into separate groups in one step, whereas hierarchical clustering builds a hierarchy of objects by first grouping the most similar objects together and then clustering the next level hierarchy with regard to the existing clusters (Yin et al., 2007).
The clustering algorithm uses a distance metric to measure the distance between the feature vectors of objects (Biemann, 2006). The distance metric defines the similarity of objects based on the feature space in which the objects are represented (Jain et al., 1999). There are different metrics available. A frequently chosen metric is the cosine similarity that calculates the distance between two vectors, i.e. the angle between them (Biemann, 2006).
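For concreteness, the cosine similarity of two feature vectors can be computed as in the following sketch (the standard formula, not specific to any of the cited systems):

```python
import math

def cosine_similarity(u, v):
    """cos(u, v) = (u . v) / (|u| |v|); 1 means identical orientation."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

print(cosine_similarity([1.0, 2.0, 0.0], [2.0, 4.0, 1.0]))
```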
In order for a clustering algorithm to work, features that represent the object to be clustered have to be defined (Jain et al., 1999). Features can be quantitative (e.g. word length) or qualitative (e.g. word starts with a capital letter) (Jain et al., 1999).
Most clustering algorithms, e.g. k-means, need the number of clusters to generate (Jain et al., 1999). The question how to best choose this key number has been addressed in-depth by Dubes (1987).
Clustering can be soft or hard. When hard-clustering, an object can belong to one class only, while in soft-clustering, an object can belong to one or more classes, sometimes with different probabilities (Jain et al., 1999).
Weakly supervised language model induction
The main idea behind language model induction is that by inducing language models from the text itself, the models are highly specialized, but the approach is generally more flexible since genre or text specific issues do not arise.
This approach is similar in character to the work by Seldin et al. (2001) in that the text itself is used as data set. However, the realization differs greatly. Whereas Seldin et al. (2001) use predictive suffix trees, I use n-gram language models. The intuition is to learn the language models from the text itself, in an iterative manner. Suppose we have a document as follows, where w_i represents the word at position i in the text. Suppose the text contains two languages, marked in red and blue.
w_1 w_2 w_3 w_4 w_5 w_6 w_7 w_8 w_9 w_10 ...

Figure 2: Simple text illustration
We take the first word and create a language model m_1 from that word.
Figure 3: Initial model creation
We then evaluate the second word using the first language model. If the language model score is high enough, we update the language model with the second word. The last example shows that it is not necessarily the case that exactly one language model is created per language; it often is the case that many language models are created for one language.
At the beginning, the models are not very reliable, as they only have a few words as basis, but the more text is analyzed, the more reliable the models become.
However, the approach is problematic in that the text structure itself influences the language models created. If the text starts with a foreign language inclusion, as illustrated in figure 12, the initial model might be too frail to recognize the following words as being a different language, updating the first model with the second and third word and so on. Thus, the approach would fail at recognizing the foreign language inclusion.

Figure 12: Problematic text sample

If we were to start from the end of the text and work towards the beginning, the probability of having a relatively robust language model for the 'blue' language would be high, and so, it would theoretically be easier to recognize the first word as not being 'blue'. Therefore, one induction step involves one forward generation and one backwards generation. This yields two sets, the set of models from the forward generation F = {f_1, f_2, ..., f_n} and the set from the backwards generation B = {b_1, b_2, ..., b_m}. Then, from the two sets of models, the most similar models are selected. For this, every model from F is compared to every model from B, as figure 13 shows. The most similar models are then merged, as illustrated in figure 14. Indeed, if both the forward and backwards generation yielded a similar language model, it is probable that the model is correct.
Even so, as both forward and backwards generation cannot guarantee ideal results, there is the option to run the generation from a random position. This random induction picks a random position in the text and runs one induction step from that position, meaning one forward and one backwards generation. Finally, the most similar models are merged as for the general generation.
Figure 13: Finding the most similar models

Figure 14: Merging the most similar models

This only yields one probable language model; therefore, the induction is repeated, with the difference that all probable models are taken into consideration as well. For each word, if a probable model models the word well enough, no new model is created, otherwise a new model is created.
At the end of the induction loop, the set of probable models P is examined. As long as there are two models that have a similarity score below a certain threshold, the two most similar models are merged.
Finally, after the language models have been induced, another pass is made over the text and each word is assigned to the language model which yields the highest score for that word, resulting in a word-to-model assignment as illustrated in figure 15.
Figure 15: Word-to-model assignment

• Forward/Backwards threshold: threshold for forward/backwards merging
• Silver threshold: threshold for P model merging

These parameters can be adapted, in the hope that some parameter configurations will work better on certain data sets than other configurations. Since the approach has parameters that have to be learned from a development set, the approach is said to be weakly supervised; the development set is not used to train any language specifics, only for the estimation of the parameters of the approach.
Experimental setup
In this chapter I present experiments done using the approaches delineated in the previous section, in order to find out whether there are approaches that work better on certain types of text.
The central hypothesis is that unsupervised language segmentation approaches are more successful on difficult data. Difficult data is data for which there is not enough data to train a language model or data which contains a lot of non-standard language such as abbreviations.
First, I present the data used to test the language segmentation systems and elaborate on the different aspects that had to be considered for the data compilation.
I then present two supervised language segmentation experiments using n-gram language models and Textcat.
For unsupervised language segmentation, I will first present experiments using clustering algorithms before presenting experiments using language model induction.
Data
In order to test the different language segmentation approaches, I compiled different sets of test data. As I want to focus on short texts, most texts from the test corpus are rather small, sometimes consisting of only one sentence. However, in order to test the general applicability of the approach, the test corpus also contains larger text samples.
The test corpus can be subdivided into different sub-corpora:
• Latin-based: Texts consisting of languages using Latin-based scripts, such as German, English, Finnish or Italian
• Mixed script: Texts consisting of languages using Latin-based scripts and languages using non-Latin-based scripts
• Twitter data: Short texts taken from Twitter
• Pali dictionary data: Unstructured texts containing many different language inclusions such as Vedic Sanskrit, Sanskrit, Indogermanic reconstructions, Old Bulgarian, Lithuanian, Greek, Latin, Old Irish, many abbreviations and references to text passages
As every outcome has to be manually checked, the test corpus is rather small. Every category consists of five texts. Each text consists of two or three languages, with the exception of the Pali dictionary data that often contains inclusions from many different languages in the etymological explanations.
For each text, I also created a gold standard version with the expected clusters. In some cases it is not clear how to cluster certain objects. In that case, I use a clustering that makes sense to me, but this need not mean that it is the correct or only possible clustering.
For the parameter estimation of the language model induction approach, I also compiled a set of development data. All texts can be found in the appendix under 8.1 and 8.2.
Supervised language model

Implementation
For the supervised language segmentation method, I implemented an n-gram language model as described by Dunning (1994). The n-gram language model is implemented as a character trigram model with non-linear back-off to bigram and unigram models. The conditional probability P is calculated using the formula:

P(w_i | w_{i-2}, w_{i-1}) =
    α_1 · C(w_{i-2}, w_{i-1}, w_i) / C(w_{i-2}, w_{i-1})    if C(w_{i-2}, w_{i-1}, w_i) > 0
    α_2 · C(w_{i-1}, w_i) / C(w_{i-1})                      if C(w_{i-1}, w_i) > 0
    α_3 · C(w_i) / V                                        if C(w_i) > 0
    α_4 · 1 / (V + W + X)                                   otherwise
(17)

with α_1 = 0.7, α_2 = 0.2, α_3 = 0.09, α_4 = 0.01, V the number of unigrams, W the number of bigrams and X the number of trigrams.
Each word is padded by two different start symbols and two different end symbols. The joint probability for a word w of length n is calculated as
P(w) = 1 / Σ_{i=2}^{n} |log P(w_i | w_{i-2}, w_{i-1})|    (18)
In the denominator, I use the log probability instead of the probability to increase numerical stability. Indeed, multiplying very small numbers can lead to the result being approximated as zero by the computer when the numbers become too small to be represented as a normalized number (Goldberg, 1991). Using the sum of logarithms avoids this problem and is less computationally expensive (Bürgisser et al., 1997).
As the magnitude of the logarithm of a number approaching zero tends to infinity, rare observations get a higher score than frequent observations. As such, the denominator can be seen as a scale of rarity, with a higher score corresponding to a rarer word. By taking the inverse of this scale, we get a score corresponding to the "commonness" (≈ frequency) of a word.
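The following Python sketch condenses the model just described: a character trigram model with non-linear back-off (equation 17) and the inverse-log-sum word score (equation 18). The padding symbols, the exact indexing over the padded word and the reading of V, W and X as the numbers of distinct uni-, bi- and trigrams are simplifying assumptions, not a faithful copy of the actual implementation.

```python
import math
from collections import Counter

ALPHA = (0.7, 0.2, 0.09, 0.01)

class CharTrigramModel:
    def __init__(self, training_text):
        chars = []
        for word in training_text.split():
            chars.extend(["<s1>", "<s2>"] + list(word) + ["</s1>", "</s2>"])
        self.uni = Counter(chars)
        self.bi = Counter(zip(chars, chars[1:]))
        self.tri = Counter(zip(chars, chars[1:], chars[2:]))

    def cond_prob(self, c2, c1, c):
        # Equation (17): non-linear back-off from trigram to bigram to unigram.
        a1, a2, a3, a4 = ALPHA
        if self.tri[(c2, c1, c)] > 0:
            return a1 * self.tri[(c2, c1, c)] / self.bi[(c2, c1)]
        if self.bi[(c1, c)] > 0:
            return a2 * self.bi[(c1, c)] / self.uni[c1]
        if self.uni[c] > 0:
            # V, W, X read here as the numbers of distinct uni-/bi-/trigrams.
            return a3 * self.uni[c] / len(self.uni)
        return a4 / (len(self.uni) + len(self.bi) + len(self.tri))

    def word_score(self, word):
        # Equation (18): inverse of the summed absolute log probabilities.
        chars = ["<s1>", "<s2>"] + list(word) + ["</s1>", "</s2>"]
        total = sum(abs(math.log(self.cond_prob(chars[i - 2], chars[i - 1], chars[i])))
                    for i in range(2, len(chars)))
        return 1.0 / total

model = CharTrigramModel("the quick brown fox jumps over the lazy dog")
print(model.word_score("the"), model.word_score("zzz"))   # common vs. unseen word
```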
Training phase
First, models are trained on training data in the relevant languages. I have not included the languages from the Pali dictionary data, as there are too many different languages and there are typically only small inclusions of different languages in a dictionary entry; as such, it would not have made sense to train a language model just to recognize a single word. Another reason for not using the Pali dictionary data languages is that sometimes it is not possible to find data for a language, e.g. Old Bulgarian or reconstructed Indogermanic. In some cases, it would have been conceivable to train models on similar languages, but again, the effort of training a model is disproportionately high compared to the (uncertain) result of recognizing a single inclusion. Instead, an additional catch-all language model is used to capture words that do not seem to belong to a trained model. The training data consists of Wikipedia dumps from the months June and July 2015; a dump is a copy of the whole encyclopedia for a given language. Due to the difference in size of the Wikipedia of the different languages, I chose the full dump for languages with less than 3 GB of compressed data and limited the amount of data to maximally 3 GB of compressed data.
The Wikipedia data was processed using the Wikipedia Extractor 3 version 2.8 in order to extract the textual content from the article pages. Indeed, the Wikipedia pages are written using the MediaWiki Markup Language 4 . While this markup is useful for meta-data annotation and cross-referencing, the encoded information is superfluous for language model training and has to be removed before training a model on the data. Table 1 shows the size of the training data per language after text extraction.
As the test data only contains transliterated Amharic text, the Wikipedia data, written in the Ge'ez script, had to be transliterated. The text was transliterated according to the EAE transliteration scheme by the Encyclopaedia Aethiopica.
As the test data contains transliterated Greek, the Greek data was used once as-is and once transliterated according to the ELOT (Hellenic Organization for Standardization) transliteration scheme for Modern monotonic Greek.
It should be borne in mind that the training data influences the quality and accuracy of the model. Furthermore, a model might work well on certain text types and less well on other text types. It is not possible to train a perfect, universal model.
Application of the approach
In the second step, an input text is segmented into words. Then, each word is evaluated by each language model and the model with the highest score is assigned as the word's language model. The approach taken consists in classifying words as either belonging to a trained language model or to the additional, catch-all model other, which simply means that the word could not be assigned to a trained model class.
Textcat and language segmentation
I also tested how well Textcat is suited to the task of language segmentation. The approach is similar to the n-gram approach, with the exception that I do not train any models and rely on Textcat's classifier for language prediction.
In the first step, an input text is segmented into words. Then, each word is passed to Textcat and the guess made by Textcat is taken as the word's language.
Unsupervised clustering
In order to test the efficiency of clustering algorithms on the task of language segmentation, I looked at various algorithms readily available through WEKA, "a collection of machine learning algorithms for data mining tasks" by the University of Waikato in New Zealand (Hall et al., 2009), and the Environment for Developing KDD-Applications Supported by Index-Structures (ELKI), "an open source data mining software […] with an emphasis on unsupervised methods in cluster analysis and outlier detection" by the Ludwig-Maximilians-Universität München (Achtert et al., 2013). I also looked at JavaML, "a collection of machine learning and data mining algorithms" (Abeel et al., 2009), in order to integrate clusterers into my own code framework. JavaML offers different clustering algorithms and also offers access to WEKA's clustering algorithms. In contrast to WEKA and ELKI, which can be used in stand-alone mode, JavaML is meant to be integrated into bigger programs and provides an application programming interface (API) that allows the provided algorithms to be accessed in a programmatic way, i.e. from inside a program.
Preprocessing
However, in order for the clustering algorithms to work, the document to segment has to be preprocessed in a number of ways, as shown in figure 16. First of all, the document has to be read in by the program. This step is straightforward.
The document then has to be tokenized. Tokenization is not trivial and depends on the definition of a 'word'. For this task I have used a whitespace tokenizer that defines a word as a continuous sequence of character literals separated by one or more whitespace characters. While it can be objected that for scripts that don't use whitespace to separate words, such as Chinese, tokenization fails, this is not too big a concern. Indeed, if a continuous block of Chinese characters is treated as one word, it is likely to be clustered separately due to the difference in "word" length and the different character set. If, however, a document contains two scripts that do not separate words by whitespace, the approach fails completely. It is beyond the scope of this thesis, and possibly of any thesis, to implement a universal tokenizer that works regardless of language without prior knowledge about the languages at hand.
Each token is then normalized. Normalization of a non-Latin-based input (e.g. Arabic or Cyrillic script) returns the input without modification. Otherwise, the following modifications are made, if applicable:
• remove leading and trailing whitespace
• remove punctuation
• remove control characters

Control characters are defined as the set

( [ ] ) \

Punctuation is defined as the set

. , " ' : ; ! ? −

The token is then stripped of XML-like tags, if applicable. The following example illustrates this step. Let us assume we have the following token:

<word id="1" lemma="go">goes</word>

The token is replaced by the text content of the node, thus the resulting token is 'goes'.
If, after all these modifications, the token corresponds to the empty string, we continue with the next token. Otherwise, the token is passed on to the feature extraction module. The algorithm terminates when all tokens have been consumed.
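The preprocessing pipeline can be sketched roughly as follows; the character sets approximate the definitions above and everything else is a simplified stand-in for the actual module.

```python
import re

CONTROL_CHARS = set("([])\\")          # approximation of the set given above
PUNCTUATION = set(".,\"':;!?-")        # approximation of the set given above

def tokenize(text):
    """Whitespace tokenization: a word is a maximal non-whitespace sequence."""
    return text.split()

def normalize(token):
    """Strip whitespace, punctuation, control characters and XML-like tags."""
    token = token.strip()
    token = "".join(c for c in token
                    if c not in PUNCTUATION and c not in CONTROL_CHARS)
    # Replace an XML-like element by its text content, e.g. <word ...>goes</word> -> goes
    match = re.match(r"<[^>]+>(.*)</[^>]+>$", token)
    if match:
        token = match.group(1)
    return token

print(normalize('<word id="1" lemma="go">goes</word>'))   # -> goes
tokens = [normalize(t) for t in tokenize("Food and breuvages, in Edmonton!")]
print([t for t in tokens if t])                           # empty tokens are dropped
```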
Defining features
The final step consists in defining features by which to cluster and implementing feature extractors that build the feature vectors from the input. Since the features are to be language independent, features such as 'occurs in an English lexicon' cannot be used. The following features were devised:
1. word length: the length of the word in characters
2. X tail bigrams: bigrams calculated from the end of the word
3. Y tail trigrams: trigrams calculated from the end of the word
4. X first bigrams: bigrams calculated from the beginning of the word
5. Y first trigrams: trigrams calculated from the beginning of the word
6. latin basic: is the word latin basic?

While most features are rather self-explanatory, a few require further explanation. For the n-grams, the number of n-grams is restricted so as to keep the resulting vectors the same size. This is important because the clustering algorithm considers one data column as one feature, and having vectors of different length would disrupt this precondition. Implementing the comparison of vectors of different lengths, or rather of vectors containing vectors as features, would have been possible, but rather time-consuming. If a word is too short to generate the required number of n-grams, only the possible n-grams are generated and all other positions are filled with 0.
The 'latin' features check whether the word consists only of the basic latin letters A-Z and a-z ('basic'), while the 'extended' feature also covers letters derived from the latin letters (e.g. ë, ç, ṃ, ñ).
Non-words are defined as anything not consisting of letters, such as punctuation marks or digits.
Directionality indicates in which direction a character should be written. While the actual list is much more exhaustive, this property basically indicates whether the character is written from left to right or from right to left. 5 BMP stands for Basic Multilingual Plane and refers to an encoding unit known as plane, which consists of 2^16 = 65536 codepoints (i.e. encoding slots for characters) (The Unicode Consortium, 2014). The BMP is the first plane, covering the codepoints U+0000 to U+FFFF (The Unicode Consortium, 2014). While it is not important to understand the technical details fully, it is interesting to note that most characters are covered by the BMP, including Chinese, Japanese and Korean characters (The Unicode Consortium, 2014). The next plane, called Supplementary Multilingual Plane or Plane 1, contains historic scripts such as Egyptian hieroglyphs and cuneiform scripts, but also musical notation, game symbols and various other scripts and symbols (The Unicode Consortium, 2014). There are 17 planes in total (The Unicode Consortium, 2014).
The last feature in the list, General Type, is also an implementation-related property. Type can be, for example 5 , END_PUNCTUATION, LETTER_NUMBER or MATH_SYMBOL. These constants are represented as numbers internally, which are taken as feature for the clustering algorithm.
Mapping features to a common scale
As JavaML requires numerical features, all features were mapped to numerical scales:
• Binary features were mapped to 0 (false) and 1 (true)
• Ternary features were mapped to 0 (false), 1 (true) and 99 (not applicable)
• Numerical features were represented as themselves, either as whole numbers (e.g. word length) or as floating point numbers (e.g. vowel ratio)
• Java specific features (18,20) take the underlying numerical value as feature
• N-grams were encoded numerically using algorithm 1, which sums the numerical code values of the characters in the n-gram

While algorithm 1 does not encode n-grams in an unambiguous way ("en" and "ne" are both encoded as 211), it provides a sufficiently good encoding.
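Under the reading that algorithm 1 simply sums the character code values (which reproduces the cited collision, since 'e' + 'n' = 101 + 110 = 211 in either order), a sketch of the encoding looks like this:

```python
def encode_ngram(ngram):
    """Ambiguous but simple numeric n-gram encoding: sum of the code points."""
    return sum(ord(c) for c in ngram)

print(encode_ngram("en"), encode_ngram("ne"))   # both 211
```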
The problem of unambiguous encoding
I have tried using unambiguous encodings. The main problem with unambiguous encoding is that the notion of "distance" is distorted. The idea behind the unambiguous encoding is that each "word" (i.e. string of characters) is encoded numerically so that no two "words" are represented as the same number. Besides the encoding of each separate character, the position of the character inside the string also has to be encoded. A possible encoding e for a string w_1 w_2 w_3 could be
e_{w_1 w_2 w_3} = n(w_1) + x * n(w_2) + y * n(w_3)    (19)
with w_i the character of the string at position i, n(w_i) the numerical encoding of the character w_i, and x and y parameters. If |A| is the alphabet size of the alphabet A in which the word is encoded, the following constraints must be true for the encoding to be unambiguous:
x ≥ |A|    (20)
y ≥ |A|^2    (21)
If we take for example the English alphabet with 26 lowercase and 26 uppercase letters, not counting punctuation, digits and other characters, it has to be true that x ≥ 52 and y ≥ 2704. The problem is that we cannot know in advance what size the alphabet will be. If we have English and German texts, the size can be estimated at around 60. However, if we have English, Russian and Arabic text, the size drastically increases. We could choose any two very big numbers, but if we want to guarantee our encoding to be unambiguous, we run the risk of ending up with numbers too big to be represented efficiently.
In this encoding scheme, distance is skewed: changes to the first character result in linear distance. 'man' and 'nan' have a distance of 1, because 'm' and 'n' have a distance of 1. 'man' and 'lan' have a distance of 2, etc. Changes to the second character are multiplied by x. 'man' and 'men' have a distance of x * (distance(a, e)) = 4 * x.
Changes to the third character are scaled by y. For any sufficiently big x and y, the distances are too skewed to be used for automatic cluster analysis. Let us consider the following example with only two characters for simplicity. For this example, let us assume x = 1373.
The clusterer
Most clustering algorithms such as k-means need to be passed the number of clusters to generate. As we want to work as flexibly as possible, I ignored all algorithms that need the number of clusters before clustering. In contrast, the x-means algorithm (Pelleg and Moore, 2000) estimates the number of clusters to generate itself. This algorithm has been chosen to perform the language clustering tasks.
While WEKA and ELKI offer a graphical user interface and various graphical representations of the results, the output is not easily interpretable. Indeed, we can get a visualization of a clustering operation as shown in figures 17 (WEKA) and 18 (ELKI). However, all data points have to be manually checked, by either clicking each point in order to get additional information about that data point (WEKA) or by hovering over the data points after having selected the Object Label Tooltip option (ELKI). Figure 18 shows the information for the lowest orange rectangle data point in the ELKI visualization. Therefore, I have decided to embed the x-means clustering algorithm into a custom framework. Originally part of the WEKA algorithms, the x-means algorithm has been integrated into a Java program via the JavaML library. The framework takes an input file, constructs the aforementioned feature vectors from the input, performs normalization, passes the calculated feature vectors to the clustering algorithm and displays the results in a text-based, easily interpretable manner.
Preliminary analyses have shown that the first clustering result often is not discriminating enough. Hence, I perform a first clustering analysis, followed by a second clustering analysis on the clusters obtained from the first analysis.
Evaluating clusterings
The clustering results are evaluated using four common similarity measures used in evaluating the accuracy of clustering algorithms. These methods are based on counting pairs (Wagner and Wagner, 2007).
Let us consider the clustering C = {C_1, ..., C_k}. C is a set of non-empty disjoint clusters C_1, ..., C_k. Let us consider the reference clustering C′ = {C′_1, ..., C′_l}. We define the following sets:
• S_11: set of pairs that are in the same cluster in C and C′
• S_00: set of pairs that are in different clusters in C and C′
• S_10: set of pairs that are in the same cluster in C and in different clusters in C′
• S_01: set of pairs that are in different clusters in C and in the same cluster in C′
Let n_ij = |S_ij|, with i, j ∈ {0, 1}, be the size of a given set S_ij.
The Rand Index is defined as

RI = (n_11 + n_00) / (n_11 + n_10 + n_01 + n_00)    (22)

The Rand Index measures the accuracy of the clustering given a reference partition (Wagner and Wagner, 2007). However, it is criticized for being highly dependent on the number of clusters (Wagner and Wagner, 2007).
The Jaccard Index measures the similarity of sets. It is similar to the Rand Index, but it disregards S_00, the set of pairs that are clustered into different clusters in C and C′ (Wagner and Wagner, 2007). It is calculated as

J = n_11 / (n_11 + n_10 + n_01)    (23)
The Fowlkes-Mallows Index measures precision. It is calculated as

FM = n_11 / √((n_11 + n_10)(n_11 + n_01))    (24)
The Fowlkes-Mallows Index has the undesired property of yielding high values when the number of clusters is small (Wagner and Wagner, 2007).
Finally, I will indicate the F-score. According to Manning et al. (2008), in the context of clustering evaluation the F(β) score is defined as

F(β) = ((β^2 + 1) * P * R) / (β^2 * P + R)    (25)

with precision P and recall R defined as

P = n_11 / (n_11 + n_10)    (26)
R = n_11 / (n_11 + n_01)    (27)

By varying β, it is possible to give more weight to either precision (β < 1) or recall (β > 1) (Manning et al., 2008). As I value recall higher than precision, I will indicate F1 (β = 1) and F5 (β = 5) scores. Indeed, I want to penalize the algorithm for clustering together pairs that are separate in the gold standard, while not penalizing the algorithm for splitting pairs that are together in the gold standard.
All measures of similarity fall within [0, 1], with 0 being most dissimilar and 1 being identical. As there is no ultimate measure and all measures of similarity have their drawbacks (Wagner and Wagner, 2007), all measures will be indicated in the results section.
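The four pair-counting measures can be computed directly from the counts n_11, n_00, n_10 and n_01, as in the following sketch; clusterings are represented here as lists of cluster labels, one label per object.

```python
from itertools import combinations

def pair_counts(labels, gold_labels):
    """Count object pairs by whether they share a cluster in C and/or C'."""
    n11 = n00 = n10 = n01 = 0
    for i, j in combinations(range(len(labels)), 2):
        same_c = labels[i] == labels[j]
        same_gold = gold_labels[i] == gold_labels[j]
        if same_c and same_gold:
            n11 += 1
        elif not same_c and not same_gold:
            n00 += 1
        elif same_c:
            n10 += 1
        else:
            n01 += 1
    return n11, n00, n10, n01

def clustering_scores(labels, gold_labels, beta=1.0):
    n11, n00, n10, n01 = pair_counts(labels, gold_labels)
    rand = (n11 + n00) / (n11 + n10 + n01 + n00)              # equation (22)
    jaccard = n11 / (n11 + n10 + n01)                         # equation (23)
    fm = n11 / ((n11 + n10) * (n11 + n01)) ** 0.5             # equation (24)
    p, r = n11 / (n11 + n10), n11 / (n11 + n01)               # equations (26), (27)
    f_beta = (beta ** 2 + 1) * p * r / (beta ** 2 * p + r)    # equation (25)
    return rand, jaccard, fm, f_beta

print(clustering_scores([0, 0, 1, 1, 2], [0, 0, 1, 2, 2], beta=5.0))
```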
Weakly supervised language model induction
The language model induction approach works in two stages. In the first stage, n-gram language models are induced from the text. In the second stage, the text is mapped to the induced models. The algorithm for the language model induction is as follows: First of all, an initial language model is created. For each word, the maximum model and maximum score is calculated. These values correspond to the language model that yielded the highest probability for the word in question, and the associated probability. If the score falls below a threshold t (i.e. none of the existing language models model the word well enough), a new language model is created on the basis of the word and added to the list of language models. Otherwise, the top scoring language model is updated with the word in question.
As the text structure itself influences the quality of the induced models, the language model induction is run i times (i ≥ 1), with one iteration consisting of two induction steps, once forward and once backward, and j times from a random position (j ≥ 0). The initial model creation thus either picks the first word of the text (as shown in algorithm 3, line 2), or the last word of the text, or a random word. Algorithm 4 returns both the max model and the max score wrapped as a custom object. The individual values can then be read as necessary.
After the models have been induced, the most similar models are merged based on distributional similarity. Distributional similarity is calculated as explained below. This merging step only merges one model from the forward induction group with one model from the backward induction group. The resulting model is added to the set of probable ("silver") models.
Merging is performed according to algorithm 5. The merging algorithm only retains the common set of unigrams from both models, and all resulting bi- and trigrams, excluding any bi- and trigrams that contain characters that occur only in one of the models. The values for the resulting language model are calculated according to one of four different merge modes.
Each merge mode is a different way of combining the frequency values v_1 and v_2 that an n-gram has in the two models into the value for the merged model (algorithm 5).

If the random iteration count j > 0, a random word is chosen and the induction is run once forward and once backward starting from this position. Then, the most similar models from each set are merged and added to the set of probable models. It should be noted that setting the parameter j > 0 will make the algorithm nondeterministic.
The model induction is then repeated while the iteration count i has not been reached or until no more models are induced, with the difference that for each word, each probable model is first consulted. If any of the probable models yields a score higher than the threshold value t, it is assumed that the word is already well represented by one of the probable models and no models are induced for this word. If the score falls below the threshold value t, induction is run as described.
At the end of the induction loop, all probable models are checked against each other. While there are two models that have a similarity below the silver threshold value s, the two models are merged and added to the set of very probable ("gold") models.
If the set of probable models is not empty after this merging step, all remaining probable models are added to the set of very probable models.
In the second stage, the text is segmented according to the induced "gold" models. For each word, the language model with the highest probability for the word is chosen as that word's hypothetical language model.
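A heavily simplified sketch of a single forward induction pass under the threshold t is given below; the new_model, score and update callbacks are assumed to provide the model construction, scoring (e.g. equation 18) and update operations described above, and the backward pass, the random starts and all merging steps are omitted.

```python
def induce_models(words, t, new_model, score, update):
    """One forward induction pass (greatly simplified).

    new_model(word): hypothetical constructor building a model from one word.
    score(model, word): hypothetical scoring function (e.g. equation 18).
    update(model, word): hypothetical in-place update of a model with a word.
    """
    models = [new_model(words[0])]          # initial model from the first word
    for word in words[1:]:
        best = max(models, key=lambda m: score(m, word))
        if score(best, word) < t:
            models.append(new_model(word))  # no model explains the word well enough
        else:
            update(best, word)              # reinforce the best-scoring model
    return models
```

A full iteration would run this once over the words and once over the reversed word list, and then merge the most similar models from the two resulting sets.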
Distributional similarity
Suppose we have three models with the distributions of letters as shown in figures 19, 20 and 21 6 . Similarity could be calculated based on the occurrence of unigrams/letters alone, i.e. if model 1 contains the letter 'a' and model 2 also contains the letter 'a', their similarity increases by 1.
However, if we calculate similarity in such a way, all three models are equally similar to each other, as each of the letters occurs at least once in each model. Yet, it should be clear that models 1 and 2 are very similar to each other while model 3 is dissimilar.
Therefore, in order to include the distribution of letters in the similarity measure, similarity is calculated as shown in algorithm 6, with f(c) returning the frequency of the character c. The number 2 in (2 − q) in line 10 can be explained as follows: q expresses the dissimilarity of the models with regard to a unigram distribution, with 0 ≤ q ≤ 1; hence (1 − q) expresses the similarity. To this, we add 1, as we increase similarity by 1 due to the match; we augment the simple increase of 1 by the similarity of the distribution.
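Algorithm 6 itself is not reproduced here; the sketch below shows one way such a distribution-aware similarity can be realized, under the assumption that q is the absolute difference of the two relative frequencies (the precise definition of q in algorithm 6 may differ).

```python
def distributional_similarity(freq1, freq2):
    """Similarity of two unigram (character) frequency distributions.

    freq1, freq2: dicts mapping characters to counts. For every shared
    character the similarity grows by 2 - q, where q is assumed here to be
    the absolute difference of the two relative frequencies.
    """
    total1, total2 = sum(freq1.values()), sum(freq2.values())
    similarity = 0.0
    for char in set(freq1) & set(freq2):
        q = abs(freq1[char] / total1 - freq2[char] / total2)
        similarity += 2 - q
    return similarity

# Models 1 and 2 have similar distributions, model 3 a very different one.
m1 = {"a": 10, "b": 9, "c": 1}
m2 = {"a": 9, "b": 10, "c": 2}
m3 = {"a": 1, "b": 1, "c": 18}
print(distributional_similarity(m1, m2), distributional_similarity(m1, m3))
```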
Evaluating results
The results of this approach can be interpreted as clusters, where each language model represents one cluster core and all words assigned to that model make up that cluster. Evaluation will hence be analogous to the evaluation of the clustering approach.
Estimating the parameters
As the language model induction can be controlled by parameters, we have to find a combination of parameters that works well for our task. The parameters i, j and "merge mode" have been estimated on the development set. The development set contains similar documents to those in the test set. The development set can be found in the appendix.
It has been found that the parameter combination i = 4, j = 2, ADD yields good results across the development set. Hence, these values have been used for the test set evaluation.
Results
'Baseline' indicates the measurement where all words have been thrown into one cluster, measured against the gold standard. For 'Baseline 2', every word has been put into its own cluster and this clustering is evaluated against the gold standard. The column 'F1' stands for the F1 score and the 'F5' column stands for the F5 score.
If any of the 'runs' yields a higher score than any of the baseline values, the maximum score is indicated in bold. If a field contains 'n/a', this means that the value could not be calculated for whatever reason (most often a division by zero would have occurred). Seldin et al. (2001) is similar to the work presented here. They propose an unsupervised language (and protein sequence) segmentation approach that yields accurate segmentations. While their work looks promising, it also has its drawbacks. Their method requires longer monolingual text fragments and a sizable amount of text. Furthermore, they disallow switching language models after each word. This presumption will fail to detect single-word inclusions and structures as shown in figure 22, where the language alternates after each word.
N-Gram language model
w_1 w_2 w_3 w_4 w_5 w_6 w_7 ...

Figure 22: Alternating language structure

While this structure looks very artificial, such a structure is found, for instance, in the fifth Pali dictionary text, in the passage "Pacati, [Ved. pacati, Igd. *peqǔō, Av. pac-;". In this case, 'red' corresponds to Pali, 'blue' to (abbreviations in) English and 'green' to reconstructed Indo-european.
N-Gram language models
The trained n-gram language model approach works well on the Latin script data, managing to single out the German inclusion from the English-German text (even though it is classified as "other" instead of German).
For German-Finnish-Turkish, English-French, English-Transliterated Greek and Italian-German, the separation of the main languages involved is good, although there appear to be some problems when words contain non-word characters such as quotes or parentheses.
Some puzzling misclassifications happen in the English-Transliterated Greek case: agápe is considered English and éros is considered Transliterated Amharic.
In the Italian-German text, the Italian language leads to a rather important Spanish cluster due to the relatedness of the two Romance languages.
On the mixed script data set, the results are more diverse. Greek-Russian, English-Spanish-Arabic and Ukrainian-Russian are segmented well, with English-Spanish-Arabic having Spanish split into Spanish, French and Italian due to the relatedness of the languages.
In contrast, the segmentation of English-Greek did not work well at all. Of the two Greek words ἀγάπη and ἔρως, ἀγάπη was considered French and ἔρως was considered Russian. It must be noted, though, that these words bear polytonic diacritics, whereas the model was trained on monotonic Greek.
Also, the segmentation of English-Chinese did not work well. This is probably due to the way the model was trained. Chinese script is written without whitespace characters between words, and the correct segmentation of a text written in Chinese requires in-depth knowledge of the language. Some words are written with only one character, but others are composed of two or more characters, with the meaning often being non-compositional; the meaning of a two-character word is different from the sum of the meanings of the two characters. Sometimes, more than one segmentation would be possible and the context decides which segmentation is correct. In other cases, more than one segmentation might be correct. This problem occurs with all scripts that are written without whitespace.
As with the simplified assumption in the tokenization of whitespace scripts, where I consider a word to be a character sequence delineated by whitespace, I have treated each character as a word. Adapting the method to Chinese and similar scripts would have been possible, but would have introduced the need for large amounts of external linguistic knowledge. Indeed, every possible non-whitespace script would have to be considered, and each of the tokenizers would be language dependent, i.e. a tokenizer for Chinese would not work on Korean or Japanese.
The supervised approach did not work well on the Pali dictionary data. While English words could be isolated somewhat successfully, the rest of the data proved difficult to segment. As an example, let us look at the first Pali text. The English cluster contains almost only English words, but not all; the "other" cluster contains mainly marked-up words, and the rest is seemingly haphazardly distributed among the other models.
Pali 1: abbha
• (AR) ., 134., 289.
• (DE) Miln), imber, dark), Miln
• (EL) (=,(abbhaŋ
• (EN) water,mountain,of,free,(used,or,like,referred,(also,A,is,cloudy,clouds,later,a,froth,1,summit,thundering,by,mass,Pv,Oir,obscure,scum,that,water]., thick, As, from, It, is, at, as, the, in, clouds, things, also
• (TrEL) , Gr., Sk., Idg., to, pabbata, nt.
• (UK) 12)., 273, 617, 348)., 250;, 251)., 382).
• (other) <b> -saŋvilāpa </b>, <b> -mua </b>, <smallcaps> vi. </smallcaps>, (mahiyā, <smallcaps> iv. </smallcaps>, cloud\";, <b> Rāhu </b>, <b> abbhā </b>, <b> abbhaŋ, <superscript> 9 </superscript>, marajo </b>, abbhāmua, valāhaka);, <smallcaps> i. </smallcaps>, <b> abbhāmaa </b>, valāhaka-sikhara, <superscript> s. </superscript>, <smallcaps> ii. </smallcaps>, <b> dhū-, stormcloud, /><b> -kūṭa </b>, thunder-cloud);, <at> a)fro\\s </at>, <b> -paṭala </b>, <at>o)/mbros</at>, nīla-megha, <superscript>1</superscript>, *m̊bhro, \"dull\";, acchādesi);, mahikā</b>, <b> -ghana </b>
On the Twitter data, the supervised approach achieved passable results. While the numbers look great, the actual segmentations do not. For Twitter 1, too many clusters were generated; for Twitter 2 and 3, the recognition of French words worked somewhat, also recognizing English words as French and French words as English. For Twitter 4, the Polish inclusion was isolated but recognized as "other", together with "strawberries". The recognition of transliterated Amharic worked satisfactorily, yielding 'naw' to the Polish model.
As the number of language models increases, so does the risk of misclassification. As can be seen, we already have quite some misclassification with only 15 language models. For example, in our data, the English preposition 'to' is often erroneously classified as 'transliterated Greek'. The Greek particle το 'to' can be either the neuter singular accusative or nominative definite article 'the', the masculine singular accusative or nominative definite article 'the' or the 3rd person neuter singular nominative/accusative weak pronoun 'it', and as such is rather frequent in the language. This is especially problematic with the transliterated Greek language model, which tends to misclassify the English preposition 'to' as transliterated Greek. A quick corpus study using the Corpus of Modern Greek 8 and the Corpus of Contemporary American English 9 reveals that the frequency per million words for the Greek particle το is 22666, while the English preposition 'to' has a frequency per million words of 25193. Their relative frequencies are very close together, and it might just have happened that the training data used in this work contained more Greek 'to's than English 'to's, leading to this misclassification.
Other reasons for misclassification include relatedness of the modeled languages as in the case of Germanic or Romance language families. Also, the text types used for training and the text types used for testing play an important role, as well as the amount of training data.
For n-gram language models, the quality of the model is dependent on the texts used for training and the texts used in evaluation. It is probable that a different training set would have yielded different results. This is also the problem with the supervised approach; it is necessary to have language data for training, and the trained models reflect the training data to some extent.
Textcat
Textcat works well on monolingual texts. However, it fails on multilingual texts and does not work well on short fragments of text, such as single words. Many of the words are tagged as unknown, and if a language has been identified, the language guess often is not correct. Hence, Textcat cannot be used for language segmentation purposes.
Indeed, Textcat fails to exceed the baseline values except for two cases: 'Twitter 3' and 'Twitter 4' yield better values than the baseline values. However, upon closer inspection, it is clear that the numerical index values do not give a reliable picture of the quality of the clustering.
Indeed, while the clustering of 'Twitter 3' is not nonsensical, it is not very good, failing to extract the French insertion 'breuvages'. The Rand Index also only shows a slightly better value than the baseline values. It seems that the outstanding score for 'Twitter 4' is achieved because both the clustering by Textcat and the gold standard have the same number of clusters.
Tables 20 and 21 show the clusterings side by side. Clearly, Textcat performed poorly despite the high numerical index values. A closer inspection of all the Textcat results shows that Textcat performs poorly at the task of language segmentation; often, a word cannot be assigned a language and thus is added to the cluster of 'unknown' language words. For the words where a language has been identified, it most often is not the correct language. While language identification is not necessary for the task of language segmentation, it helps to understand why Textcat failed at the task of language segmentation.
Textcat:
  Cluster 1: ∅
  Cluster 2: #bilingualism
  Cluster 3: Food, and, breuvages, in, Edmonton, are, ready, to, go, just, waiting, for, the, fans, #FWWC2015

Gold standard:
  Cluster 1: breuvages
  Cluster 2: #FWWC2015, #bilingualism
  Cluster 3: Food, and, in, Edmonton, are, ready, to, go, just, waiting, for, the, fans
Clustering
The clustering results are more difficult to interpret. Often, the first distinction made seems to be based on case, i.e. words that begin with a capital letter versus words that are all lowercase letters. The second run on the 'mixed script: English - Greek' data shows that the first cluster from the first run has been separated into a cluster with words that begin with a capital letter and two clusters with words that don't begin with a capital letter.
English-Greek: First run: First cluster
• "intimate, "without, Although, Aquinas, Christians, Corinthians, Socrates, Symposium, Testament, Whether, affection, ancient, another. ", appreciation, aspires, araction, araction. ", becomes, benevolence., biblical, brotherly, chapter, ", charity;, children, children., contemplation, content, continues, contributes, definition:, described, existence;, explained, express, feeling, feelings, finding, further, holding, initially, inspired, knowledge, marriage., necessary, non-corporeal, passage, passion. ", philosophers, physical, platonic, refined, relationships, returned, self-benefit)., sensually, spiritual, subject, suggesting, through, throughout, transcendence., unconditional, understanding, without, youthful
English-Greek: Second run: Splitting of first cluster
• affection, ancient, another. ", aspires, becomes, biblical, chapter, ", charity;, children, children., content, definition:, feeling, feelings, finding, holding, marriage., necessary, passage, passion. ", platonic, refined, returned, subject, through, without
• Although, Aquinas, Christians, Corinthians, Socrates, Symposium, Testament, Whether
• "intimate, appreciation, araction, araction. ", benevolence., brotherly, contemplation, continues, contributes, described, existence;, explained, express, further, initially, inspired, knowledge, non-corporeal, philosophers, physical, relationships, self-benefit)., sensually, spiritual, suggesting, throughout, transcendence., unconditional, understanding, youthful
Another important distinction seems to be the length of words. Indeed, the results often show clusters that clearly are based on the length of the contained words. The first run on the 'latin script: German - Italian' data shows that short words have been singled out into the first cluster.
Italian-German: First run: First cluster
• (il, E, So, a, ad, da, di, e, es, ha, i, il, in, la, le, lo, ma, ne, se, si, un, va, zu

The clustering works well when the scripts involved are dissimilar, as in the case of the English-Chinese text, where the Chinese characters were isolated after the first run, and also in the English-Spanish-Arabic example, where the Arabic part was completely isolated in the first run.
The closer the scripts become, the less clear-cut the results are. For Greek-Russian, the results are acceptable, with one mixed cluster. However, the number of clusters is too high for the number of languages involved, and the separation is only achieved after two consecutive clusterings.
The clustering of closer scripts, such as Ukrainian-Russian, does not work well. The clusters, with the exception of the cluster containing the datum '9-13', are all impure, consisting of Ukrainian and Russian words. The second run also fails at improving the clustering.
Finally, clustering of Latin-based scripts does not perform well unless diacritics are involved and the diacritics form the most salient distinction. Words containing letters with diacritics are then generally separated from words containing no diacritics, as in the German-Finnish-Turkish example. The first run generates a cluster for numbers, two clusters with diacritics and one cluster without diacritics.
Probably for this reason, the clustering of Transliterated Greek-English and Greek-English worked surprisingly well. In both cases, the first run managed to separate the (transliterated) Greek parts from the English words. However, unaccented Greek words such as Agape, erotas or eros were clustered with English.
English-Transliterated Greek: First run: Transliterated Greek cluster
• agápe, philía, storgē., éros
English-Greek: First run: Greek cluster
• (ἀγάπη, (ἔρως, Agápe, agápē), Éros, érōs), -

The problem is that when there are other salient distinguishing features besides diacritics, the result is less good, as can be seen on the Pali data.
Pali: abhijjhitar: Second run
• abhijjhita, abhijjhātar, covets, function], med., one, who,°itar),°itar,°ātar).
• (T., A, M • =, l., v.
• <smallcaps> i. </smallcaps>, <smallcaps> v. </smallcaps>, ag., fr., in • 265, 287
• [n.
In some cases, the clustering fails at the task of language segmentation, as in the case of the various English-French texts and the English-German example with the German inclusion. We can thus say that the surface structure or morphology, or in other words the basis from which we can extract features, is not sufficient to deduce relevant information about 'language'.
When there are more than two languages that are to be separated, the clustering also does not work well. Indeed, the most dissimilar objects are separated first. In the case of English-Spanish-Arabic, the Arabic part is separated first, as well as words with diacritics, while English and Spanish words without diacritics are thrown together. Subsequent runs show no improvement of the clustering concerning the separation of English and Spanish.
In the case of German-Finnish-Turkish, the clustering algorithm seems to cluster out Turkish first, followed by Finnish. The results are, however, much less clear-cut than for English-Spanish-Arabic.
Language model induction
The language model induction does not seem to work very well on the Latin script data. There are almost only impure clusters, containing more than one language. However, the approach consistently outperforms the clustering approach when we look at the F5 score. For the English-French data set, the induction approach even outperforms the n-gram language model approach. Indeed, the French words are relatively well separated from the English text, with the exception of 'sucré', which is still thrown together with English words.
Latin script: English-French
• both, "so", in, English, although, their, is, is, the, opposite, of, "rough", or, is, the, opposite, of, sweet, only, for, wines, (otherwise, is • mou, :, mou, but
• doux,
• Doux, (rugueux), Doux • while
• "hard"., used).,
• translate, as, meaning, very, different., "coarse", can, also, mean, almost,sucré,
In contrast, the approach works well on the mixed script data. Indeed, we achieve a good separation of the languages by script. However, when there are also Latin-based scripts, we encounter the same problems as mentioned above, with rather modest results. For example, for the English-Greek text, the approach separates out the Greek character words, but it fails to separate transliterated Greek and English. Also, for the English-Spanish-Arabic text, Arabic is separated out, but English and Spanish are not separated well.
One interesting observation can be made in the case of the English-Chinese text. The Chinese characters have been isolated, but the Pinyin transcription is thrown together with the Chinese characters. Based on the prior observations, this is rather unexpected. This raises the question of whether Pinyin ought to be clustered out, or clustered together with English or Chinese.
Again, the language model induction approach outperforms the clustering approach, and also the n-gram language model approach in the case of the English-Greek text.
On the larger Pali dictionary entries, the language model induction approach yields acceptable results. On the shorter Pali dictionary entries, the language model induction approach yields good results.
The quite low performance must be blamed on the data. Indeed, the Pali dictionary data contain various problematic characters such as 'comma/dot and whitespace' as one character. On such characters, whitespace tokenization fails, yielding big chunks of nonsense tokens. For example, the fourth Pali dictionary entry was split into five chunks (while it might not be displayed as such, all commata and all dots are in fact not followed by whitespace; the whitespace is part of the character,10 hence whitespace tokenization fails). A small sketch after the example below illustrates the effect.
Pali: gūhanā: Chunks
• Gūhanā, (f.)
• [abstr.fr.gūhati]=gūhanā
• (q.v.)
• Pug.19.Cp.pari°. (Page
• 253)
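The sketch below illustrates the tokenization problem described above, assuming the tokenizer is simply extended to break on the fullwidth comma (U+FF0C) and fullwidth full stop (U+FF0E) as well; the actual preprocessing used for the experiments is not reproduced here, and the example string is a shortened stand-in for the dictionary entry.

import java.util.Arrays;
import java.util.List;

/**
 * Plain whitespace tokenization leaves the fullwidth comma (U+FF0C) and
 * fullwidth full stop (U+FF0E) attached to their neighbours, producing
 * large nonsense chunks; additionally splitting on these characters
 * yields usable tokens. Illustration only, not the thesis preprocessing.
 */
public class FullwidthTokenizer {

    static List<String> whitespaceOnly(String text) {
        return Arrays.asList(text.trim().split("\\s+"));
    }

    static List<String> withFullwidth(String text) {
        // also break on U+FF0C and U+FF0E, keeping them as separate tokens
        String spaced = text.replace("\uFF0C", " \uFF0C ").replace("\uFF0E", " \uFF0E ");
        return Arrays.asList(spaced.trim().split("\\s+"));
    }

    public static void main(String[] args) {
        String entry = "Gūhanā\uFF0C(f.) [abstr.fr.gūhati]=gūhanā (q.v.)\uFF0EPage 253";
        System.out.println(whitespaceOnly(entry));  // few, oversized chunks
        System.out.println(withFullwidth(entry));   // separate, usable tokens
    }
}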
Furthermore, the data contains markup, abbreviations, references, typing mistakes and signs such as <-> that are difficult to assign to a language.
On the Twitter data, the language model induction approach works rather well. For example, on the first text, separation is not perfect, with the Greek cluster still containing some English words.
Twitter 1: English-Greek
• BUSINESS, EXCELLENCE.
• Μόλις, ψήφισα, αυτή, τη, λύση, Internet, of, στο, διαγωνισμό
• Things, IT

For the third and fourth text, the approach manages to single out the other-language inclusions, but not exclusively. Both times, there is one additional item in the cluster (the relevant clusters are marked in red).
Twitter 3: French-English
• #FWWC2015
• breuvages, go
• Food, Edmonton, to, for, the
• in, waiting, #bilingualism
• and, are, ready, just, fans

Twitter 4: English-Polish
• comes, from, with, two, crates, of, strawberries, jackets, omg
• my, dad, poland, and, adidas
• back, żubrówka

The approach exceeded expectations on the second and fifth Twitter text. On the second text, the 'French' cluster contains not only the French words 'Demain' and 'par', but also the French way of notating time, '18h'.
Twitter 2: French-English
• Keynote,"The,collective,of,or,perish;,it,all,that,counts?"
• Demain,18h,par
• #dhiha6,David
• @dhiparis,dynamics,is

On the fifth text, an almost perfect result was achieved, with only one additional subdivision of the 'English' cluster.
Twitter 5: Transliterated Amharic-English
• (coffee
• bread)., is, our
• Buna, dabo, naw

It seems that the language model approach does not work very well on longer texts, especially on longer texts in Latin-based scripts, with the chosen parameter set; still, the approach outperforms the clustering approach and achieves scores in the vicinity of those achieved with the supervised n-gram language model approach. On mixed script texts, the approach consistently outperforms the clustering approach, and we also reach scores in the vicinity of those achieved with the supervised n-gram language model approach.
Moreover, on short texts, the approach works rather well. We succeed in outperforming the supervised n-gram language model approach on a number of texts, and on the remaining texts we achieve scores close to those of the supervised approach.
Although the language model induction approach tends to generate too many clusters, it also generally succeeds at separating the languages involved.
Scores
Of the scores I used for evaluation purposes, it seems that a combination of a high Rand Index and a high F5 score indicates a good language segmentation. A high F5 score alone is not significant. For example, the clustering algorithm achieves an F5 score of 0.7215 on 'Twitter 3'. This score looks good, but the Rand Index score is at 0.4571, and the segmentation is not good. A small code sketch of both measures is given after the two cluster analyses below.
Twitter 3: Cluster analysis
• Edmonton, Food
• go, in, to
• and, are, breuvages, fans, for, just, ready, the, waiting

Similarly, a high Rand Index score alone is not significant. For example, the clustering algorithm achieves a Rand Index score of 0.6738 on the 'Pali 2' text, but the F5 score is at 0.3825 and the clustering is not good.
Pali 2: Cluster analysis
• abhijjhita, abhijjhātar, covets, function], med., one, who,°itar),°itar,°ātar).
• (T., <smallcaps>i.</smallcaps>, <smallcaps>v.</smallcaps>, =, A, M, ag., fr., in, l., v.
• 265, 287
• [n.
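The following sketch computes the Rand Index and the F5 score from the same pairwise counts, assuming that F5 denotes the F-beta measure with beta = 5 (i.e. recall weighted far more heavily than precision), computed over word pairs like Jaccard and Fowlkes-Mallows. It only illustrates how the two numbers relate and is not the evaluation code used to produce the tables.

import java.util.*;

/**
 * Pair-counting evaluation: Rand Index and F5 (F-beta, beta = 5) over
 * all word pairs, comparing a predicted clustering with the gold
 * language assignment. Illustration only.
 */
public class SegmentationScores {

    /** labels[i] = predicted cluster id of word i; gold[i] = gold language of word i */
    static double[] randAndF5(int[] labels, int[] gold) {
        long tp = 0, fp = 0, fn = 0, tn = 0;
        for (int i = 0; i < labels.length; i++) {
            for (int j = i + 1; j < labels.length; j++) {
                boolean samePred = labels[i] == labels[j];
                boolean sameGold = gold[i] == gold[j];
                if (samePred && sameGold) tp++;
                else if (samePred) fp++;
                else if (sameGold) fn++;
                else tn++;
            }
        }
        double rand = (double) (tp + tn) / (tp + fp + fn + tn);
        double precision = tp + fp == 0 ? 0 : (double) tp / (tp + fp);
        double recall = tp + fn == 0 ? 0 : (double) tp / (tp + fn);
        double beta2 = 25.0;                                  // beta = 5
        double f5 = precision + recall == 0 ? 0
                : (1 + beta2) * precision * recall / (beta2 * precision + recall);
        return new double[] { rand, f5 };
    }

    public static void main(String[] args) {
        // 'Twitter 3'-style toy example: 8 English words, 1 French word,
        // predicted clustering does not isolate the French word.
        int[] gold = {0, 0, 0, 0, 0, 0, 0, 0, 1};
        int[] predicted = {0, 0, 1, 1, 1, 2, 2, 2, 2};
        System.out.println(Arrays.toString(randAndF5(predicted, gold)));
    }
}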
Conclusion
In this thesis, I have asked whether unsupervised approaches to language segmentation perform better on short and difficult texts than supervised approaches, by overcoming some of the difficulties associated with supervised approaches, such as the need for (enough and adequate)11 training data, the language-specificity of the language models, or the inflexibility of trained language models when it comes to spelling variation and abbreviations, unless the training data also contained such variation and abbreviations. I have given an overview of related work, presenting supervised approaches that have been used in monolingual language identification and the improvement of such approaches through unsupervised techniques such as clustering.
Unfortunately, the body of literature covering the topic of language segmentation is sparse. The work by Yin et al. (2007) and the work by Seldin et al. (2001) are closest in topic to this thesis. However, Yin et al. (2007) concern themselves with spoken language, which requires a different approach than written language. As I concentrated on written language, their work was not directly applicable to this thesis.
In contrast, Seldin et al. (2001) present a work that looks promising. They present a system that finds language borders in a text with great accuracy using unsupervised algorithms. However, they restrict their algorithm in such a way that switching language models after each word is disallowed. Thus, they are unable to detect single-word inclusions and cannot handle situations where the language switches every word, as has been shown to occur in the test data used in section 4.
Another major drawback of the approach is that it also needs longer fragments of monolingual text and an overall longer text. Hence, their approach would not work well on short texts, if at all.
Next, I have presented the theoretical foundations of a supervised n-gram language model approach and an unsupervised clustering approach. Finally, I have introduced a weakly supervised n-gram language model induction approach that I devised. All of these approaches can be used for language segmentation. In order to test how well the different approaches perform on different text types, I have performed experiments.
Section 4 presents the experiments made. I have first compiled a small corpus of texts ranging from longer texts with clearly separated languages to one-sentence Twitter messages containing foreign-language inclusions. I have also included a set of dictionary entries from the Pali dictionary by the Pali Text Society. Indeed, these entries contain a lot of different languages and abbreviations, and (unfortunately) are not consistently formatted.
I have then presented my implementations of the supervised and weakly supervised approaches and the choice of the unsupervised clustering algorithms. Then, I have presented the results of their application to the data. It can be said that the supervised approach works reasonably well. The main drawback is that the approach needs training data to train the models on. The problems of the training data and its influence on the models have been raised more than once.
The supervised approach failed for non-whitespace scripts. The models would have to be adapted for non-whitespace scripts, introducing more complexity. Also, the training and test texts would have to be split in meaningful ways, introducing the need for a vast array of language-specific text splitters if the approach is to work on a wide range of languages.
The unsupervised approach generally succeeded in separating languages by script when different scripts were involved. Beyond that, it seems that the chosen morphological features, or possibly morphological features in general, are insufficient for the algorithm to separate languages effectively.
The weakly supervised approach worked well on short texts and on difficult short texts, but less well on long texts, while still outperforming the clustering approach on long texts. The approach consistently outperforms the clustering approach and reaches scores in the vicinity of those achieved by the supervised approach, even surpassing the supervised approach in some cases. These results are promising, but more thorough investigations have to be undertaken.
In conclusion, it can be said that some unsupervised (or weakly supervised) approaches can perform better than supervised approaches on the task of language segmentation for difficult and short texts. The presented weakly supervised approach not only outperforms the unsupervised clustering approach, it also achieves scores comparable to those achieved with the supervised approach.
Future work could concentrate on reducing the number of generated clusters, ideally getting down to one cluster per language; it would also be conceivable to prevent overly frequent language model switching by taking a word's context into account. Finally, the parameters could conceivably be adapted automatically. With the recently increased interest in multilingual text processing, the emergence and evolution of such texts will themselves influence the direction of future work.
"Il est venu le temps des cathédrales le monde est entré dans un nouveau millénaire L'homme a voulu monter vers les étoiles écrire son histoire dans le verre ou dans la pierre" -Gringoire
Mixed script data
Capitalism is an economic system and a mode of production in which trade, industries, and the means of production are largely or entirely privately owned. Private firms and proprietorships usually operate in order to generate profit, but may operate as private nonprofit organizations.
ولودیا
Twitter 2 Music for Airports > le piano en libre-accès dans l'aéroport Charles-de-Gaulles
Source: Yannick Rochat (yrochat). "Music for Airports > le piano en libre-accès dans l'aéroport Charles-de-Gaulles". 26 July 2015, 18:12. Tweet.
Pali dictionary data
All entries have been taken from the Pali Text Society's Pali-English dictionary (T. W. Rhys Davids, William Stede, editors, The Pali Text Society's Pali-English dictionary. Chipstead: Pali Text Society, 1921-5. 8 parts [738 pp.]).
Hambho Hambho, (indecl.)[haṁ+bho] a particle expressing surprise or haughtiness J.I,184,494.See also ambho. (Page 729) Ussada Ussada,[most likely to ud + syad;see ussanna]:this word is beset with difficulties,the phrase sa-ussada is applied in all kinds of meanings,evidently the result of an original application & meaning having become obliterated.sa°is taken as *sapta (seven)as well as *sava (being) ,ussada as prominence,protuberance, fulness, arrogance. e meanings may be tabulated as follows: (1)prominence(cp. Sk.utsedha) ,used in characterisation of the Nirayas,as "projecting,prominent hells" ,ussadanirayā (but see also below 4)J.I,174;IV,3,422 (pallaṅkaṁ, v.l.caturassạṁ,with four corners) ;V,266.-adj.prominent A.13 (tejussadehi ariyamaggadhammehi, or as below 4?) . -2. protuberance, bump, swelling J.IV,188;also in phrase saussada having 7 protuberances,a qualification of the Mahāpurisa D.III,151 (viz.on both hands,feet,shoulders,and on his back) . -3.rubbing in,anointing,ointment;adj.anointed with (-°) ,in candan°J.III, 139;IV,60;.1,267;Vv 537;DhA.I,28;VvA.237.-4.a crowd adj.full of (-°)in phrase saussada crowded with (human beings)D.I,87 (cp.DA.I, 245:aneka-saa-samākiṇṇa;but in same sense BSk.sapt-otsada Divy 620,621) ;Pv IV.18 (of Niraya = full of beings,expld.by saehi ussanna uparûpari nicita PvA. 221.-5.qualification,characteristic,mark,aribute,in catussada "having the four qualifications (of a good village) "J.IV,309 (viz.plenty of people,corn, wood and water C. ) .e phrase is evidently shaped aer D.I,87 (under 4) .As "preponderant quality,characteristic"we find ussada used at Vism.103 (cf.Asl. 267)in combns. lobh°, dos°, moh°, alobh°etc. (quoted from the"Ussadakiana" ) , and similarly at VvA. 19 in Dhammapāla's definition of manussa(lobh'ādīhi alobh' ādīhi sahitassa manassa ussannatāya manussā) ,viz.saā manussa-jātikā tesu lobh' ‹-› ādayo alobh' ādayo ca ussadā. -6. (metaph. )self-elevation, arrogance, conceit, haughtiness Vin.I,3;Sn.515,624 (an°= taṇhā-ussada-abhāvena SnA 467) ,783 (expld.by Nd1 72 under formula saussada;i.e.showing 7 bad qualities,viz.rāga, dosa,moha etc. ) ,855.-See also ussādana,ussādeti etc. (Page 157)
Test data
Latin script data
English - German
The German word Nabelschau means "navel-gazing" or "staring at your navel". But in this case, it doesn't refer to anyone else's belly button - just your own.
Source: Glass, Nicole (2015): "German Missions in the United States -Word of the Week". Germany.info.
English - French
doux, mou : both translate as "soft" in English, although their meaning is very different. Doux is the opposite of "rough" or "coarse" (rugueux), while mou is the opposite of "hard". Doux can also mean sweet, but almost only for wines (otherwise sucré is used).
Source: Maciamo, (2015): "French words and nuances that don't exist in English". Eupedia.
English - Transliterated Greek
The
Greek language distinguishes at least four different ways as to how the word love is used. Ancient Greek has four distinct words for love: agápe, éros, philía, and storgē. However, as with other languages, it has been historically difficult to separate the meanings of these words when used outside of their respective contexts. Nonetheless, the senses in which these words were generally used are as follows.
Source: https://en.wikipedia.org/wiki/Greek_words_for_love

Italian - German
Milano ne custodisce l'esempio più struggente: quel Cenacolo che il vinciano affrescò con amore, cura e rivoluzionaria psicologia (il Giuda non viene privato dell'aureola, ma si condanna da solo, con la consapevolezza del peccato) cominciò subito ad autodistruggersi, con un cancro che solo un lunghissimo restauro ha di recente arginato.
Kaum eine Woche vergeht, in der es keine neue Studie, Umfrage oder Warnung zum Thema Fachkräftemangel in Deutschland gibt.
Certo, lo faceva per definire le idee, ma anche perché consapevole che le intuizioni sono periture, che la vita stessa va catturata in qualche modo.
Dabei mehren sich in letzter Zeit auch Stimmen, die Entwarnung geben. So kam jüngst eine Studie des Stifterverbands für die Deutsche Wissenschaft zu dem Ergebnis, dass "ein allgemeiner Fachkräftemangel in den MINT-Berufen eher nicht mehr" drohe.
Come anche i riccioli del Battista richiamano il movimento delle acque, moto che poi Leonardo studierà più approfonditamente a Venezia, nelle ricerche sui bacini in chiave di difesa anti-Turchi. E si vada alla bellissima Annunciazione, con un occhio attento alle ali dell'angelo: la delicatezza delle punte all'insù che cosa sono se non il barbaglio di un sogno che lo ossessionava da anni, ovvero quello di volare?
Ist das seit Jahren angemahnte Szenario vom drohenden Fachkräftemangel bei Ingenieuren und Naturwissenschaftlern also nur ein Mythos?
Source: Stalinski, Sandra (2015): "Ingenieure: Mythos Fachkräftemangel?". tagesschau.de. Scorranese, Roberta (2015): "Nelle grandi opere il racconto sofferto della natura mortale". Archiviostorico.corriere.it.
German -Finnish -Turkish
Mixed script data
Greek -Russian Η ελληνική γλώσσα είναι μία από τις ινδοευρωπαϊκές γλώσσες. Αποτελεί το μοναδικό μέλος ενός ανεξάρτητου κλάδου της ινδοευρωπαϊκής οικογένειας γλωσσών. Ανήκει επίσης στον βαλκανικό γλωσσικό δεσμό. Στην ελληνική γλώσσα, έχουμε γραπτά κείμενα από τον 15ο αιώνα π.Χ. μέχρι σήμερα.
На греческом языке на всех этапах его существования была создана богатейшая литература. В Римской империи знание греческого языка считалось обяза-тельным для всякого образованного человека. В латинском языке присутствует большое количество греческих заимствований, а в греческом -значительное количество латинских и романских слов. В новое время древнегреческий язык стал (наряду с латинским) источником создания новых научных и технических терминов (так называемая международная лексика). В русский язык греческие слова проникали в основном двумя путями -через международную лексику и через церковнославянский язык.
Source: hps://el.wikipedia.org/wiki/Ελληνική_γλώσσα hps://ru.wikipedia.org/wiki/Греческий_язык English -Greek -Transliterated Greek Agápe (ἀγάπη agápē) means "love: esp. brotherly love, charity; the love of God for man and of man for God. " Agape is used in the biblical passage known as the "love chapter, " 1 Corinthians 13, and is described there and throughout the New Testament as brotherly love, affection, good will, love, and benevolence. Whether the love given is returned or not, the person continues to love (even without any self-benefit). Agape is also used in ancient texts to denote feelings for one's children and the feelings for a spouse, and it was also used to refer to a love feast. It can also be described as the feeling of being content or holding one in high regard. Agape is used by Christians to express the unconditional love of God for his children. is type of love was further explained by omas Aquinas as "to will the good of another. " Éros (ἔρως érōs) means "love, mostly of the sexual passion. " e Modern Greek word "erotas" means "intimate love. " It can also apply to dating relationships as well as marriage. Plato refined his own definition: Although eros is initially felt for a person, with contemplation it becomes an appreciation of the beauty within that person, or even becomes appreciation of beauty itself. Plato does not talk of physical araction as a necessary part of love, hence the use of the word platonic to mean, "without physical araction. "
In the Symposium, the most famous ancient work on the subject, Plato has Socrates argue that eros helps the soul recall knowledge of beauty, and contributes to an understanding of spiritual truth, the ideal "Form" of youthful beauty that leads us humans to feel erotic desire -thus suggesting that even that sensually based love aspires to the non-corporeal, spiritual plane of existence; that is, finding its truth, just like finding any truth, leads to transcendence. Lovers and philosophers are all inspired to seek truth through the means of eros.
Source: https://en.wikipedia.org/wiki/Greek_words_for_love

English - Spanish - Arabic
A black ribbon is a symbol of remembrance or mourning. Wearing or displaying a black ribbon has been used for POW/MIA remembrance, mourning tragedies or as a political statement.
El crespón negro o lazo negro es un símbolo utilizado por personas, estados, sociedades y organizaciones, representando un sentimiento político-social en señal de duelo.
من
Results
N-Gram Language Models
For the n-gram language model approach, the identified language is indicated in parentheses. The language abbreviations are:
Textcat
For Textcat, the identified language is indicated in parentheses. As Textcat returns unknown for many words, I merely indicate the non-unknown categories to save space and write rest to indicate that all other words of the text have been classified as unknown. The language abbreviations are:
Clustering
Clustering the different data sets produced the following clusters. The second run uses the clusters from the first run and possibly subdivides each cluster into two or more clusters.
Data: Latin script: German -English
First run
• "navel-gazing", doesn't, else's • "staring, But, German, Nabelschau, anyone, belly, buon, case, just, means, navel"., own., refer, this, word, your • at, in, it, or, to • -, e
Second run
• doesn't, else's • "navel-gazing"
• "staring, But, German, Nabelschau, belly, case, means, navel"., refer, this
• anyone, buon, just, own., word, your • it, or, to • at, in • -, e Data: Latin script: German -Finnish -Turkish
First run
• Dünya, Güney, Küre'de, Südhalbkugel, Südsommer., Südwinter, Sıcak, arasında, gemäßigten, günler, için, kesäkuukausiksi, lämpimin, säteilee, sıcak, wärmste, çıkar., Der • Aralık,Eylül,Kesä,Yarım,arasındadır.,eteläisellä,eiği,eä,gerçekleşir.,jyrkemmässä,kevään,välissä.,yaklaşık,ısıyı • 21,22 • Der,Haziran,Jahreszeiten,Je,Klimazone.,Kuzey,Mart,Nordsommer,Pohjoisella,Sommer,Yaz,arktischen,auf,aurinko,ay,dem,depo,der,die,eli,elokuu,en,er,findet,genellikle,gerade,gleichzeitig,helmikuu.,herrscht,iki,ile,in,ise,ist,ja,kallistunut,koska,kuin,kulmassa,lasketaan,maan,maapallo,man,mevsimdir.,mit,muina,nachdem,niin,ob,oder,on,ortaya,pallonpuoliskolla,pinnalle,silloin,sonra,spricht,sta.,suvi,syksyn,tavallisesti,und,uzun,vier,vom,vuodenaika,vuodenaikoina • arktischen, auf, aurinko, dem, depo, der, die, eli, elokuu, findet, genellikle, gerade, gleichzeitig, helmikuu., herrscht, iki, ile, ise, ist, joulu-, kallistunut, koska, kuin, kulmassa, lasketaan, maan, maapallo, man, mevsimdir., mit, muina, nachdem, niin, oder, ortaya, pallonpuoliskolla, pinnalle, silloin, sonra, spricht, sta., suvi, syksyn, tammi-, tavallisesti, und, uzun, vier, vom, vuodenaika, vuodenaikoina., vuodenajoista, yazda • Je, ay, en, er, in, ja, ob, on Data: Latin script: English -French
First run
• "coarse", "hard"., "rough", "so", (otherwise, (rugueux), Doux, English, almost, also, although, both, but, can, different., doux, for, mean, meaning, mou, only, opposite, sucré, sweet, the, their, translate, used)., very, while, wines
• is, or
• as, in, of
Second run
• Doux, English,
• "coarse", (otherwise, (rugueux), almost, although, different., meaning, opposite, translate
• "hard"., "rough", "so", also, both, but, can, doux, for, mean, mou, only, sucré, sweet, the, their, used • Ancient, However, Nonetheless, contexts., different, difficult, distinct, distinguishes, follows., generally, historically, language, languages, meanings, outside, respective, senses, separate, which, words • Greek, and, are, as, at, been, for, four, has, how, in, is, it, least, love, love:, of, other, the, their, these, to, used, used., ways, were, when, with, word
Second run
• e
• philía, storgē.
• agápe, éros,
• Ancient, However, Nonetheless, contexts., different, difficult, distinct, distinguishes, follows., generally, historically, meanings, respective
• words
• language, languages, outside, senses, separate, which • and, are, as, at, been, for, four, has, how, in, is, it, least, love, love:, of, other, the, their, these, to, used, used., ways, were, when, with, word
• Greek
Data: Latin script: German -Italian
First run • (il, E, So, a, ad, da, di, e, es, ha, i, il, in, la, le, lo, ma, ne, se, si, un, va, zu
Figure 1: Out-of-place metric
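As a rough illustration of the out-of-place metric, the following sketch compares a word's character n-gram rank profile against two language rank profiles and sums the rank differences. The profile construction and the penalty for missing n-grams are assumptions for illustration; this is not the Textcat implementation itself.

import java.util.*;

/**
 * Out-of-place metric sketch: the distance between a document profile
 * and a language profile is the sum, over the document's n-grams, of the
 * difference between their ranks in both profiles. Illustration only.
 */
public class OutOfPlace {

    /** Builds a rank profile: n-grams ordered by descending frequency. */
    static List<String> profile(String text, int n) {
        Map<String, Integer> counts = new HashMap<>();
        String padded = "_" + text.toLowerCase() + "_";
        for (int i = 0; i + n <= padded.length(); i++) {
            counts.merge(padded.substring(i, i + n), 1, Integer::sum);
        }
        List<String> ranked = new ArrayList<>(counts.keySet());
        ranked.sort((a, b) -> counts.get(b) - counts.get(a));
        return ranked;
    }

    /** Sum of rank differences; unseen n-grams get a fixed penalty. */
    static int distance(List<String> doc, List<String> lang) {
        int penalty = lang.size();          // assumed maximum displacement
        int total = 0;
        for (int i = 0; i < doc.size(); i++) {
            int j = lang.indexOf(doc.get(i));
            total += (j < 0) ? penalty : Math.abs(i - j);
        }
        return total;
    }

    public static void main(String[] args) {
        List<String> en = profile("the quick brown fox jumps over the lazy dog", 3);
        List<String> de = profile("der schnelle braune fuchs springt über den faulen hund", 3);
        List<String> word = profile("quick", 3);
        System.out.println("en: " + distance(word, en) + "  de: " + distance(word, de));
    }
}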
Figure 4: Initial model evaluation
Figure 5: Model update

If the score is below a certain threshold, the existing language model does not model the word well enough and a new model is created.

Figure 6: Evaluation
Figure 7: New model creation

When there is more than one language model, each word is evaluated by every language model, and the highest scoring model is updated, or a new model is created if no language model models the word well enough.

Figure 8: Multiple model evaluation
Figure 9: Updating relevant model
Figure 10: Multiple model evaluation 2
Figure 11: New model creation 2
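The induction loop just described can be sketched as follows: each word is scored by every existing model, the best-scoring model is updated with the word, and a new model is created when no model scores above a threshold. Character-trigram scoring, the smoothing, and the threshold value are illustrative assumptions rather than the parameters actually used in the experiments (which are listed below).

import java.util.*;

/**
 * Minimal sketch of the language model induction loop. Each word is
 * scored by every model; the best model is updated, or a new model is
 * created if no model scores above the threshold. Illustration only.
 */
public class InductionSketch {

    static class Model {
        Map<String, Integer> trigrams = new HashMap<>();
        int total = 0;

        double score(String word) {                 // average trigram log-probability
            String w = "_" + word + "_";
            double logProb = 0;
            int n = 0;
            for (int i = 0; i + 3 <= w.length(); i++, n++) {
                int c = trigrams.getOrDefault(w.substring(i, i + 3), 0);
                logProb += Math.log((c + 1.0) / (total + trigrams.size() + 1.0));
            }
            return n == 0 ? Double.NEGATIVE_INFINITY : logProb / n;
        }

        void update(String word) {
            String w = "_" + word + "_";
            for (int i = 0; i + 3 <= w.length(); i++) {
                trigrams.merge(w.substring(i, i + 3), 1, Integer::sum);
                total++;
            }
        }
    }

    public static void main(String[] args) {
        double threshold = -2.0;   // assumed; controls how readily new models are created
        List<Model> models = new ArrayList<>();
        Map<String, Integer> assignment = new LinkedHashMap<>();

        String[] words = "Food and breuvages in Edmonton are ready".split(" ");
        for (String word : words) {
            int best = -1;
            double bestScore = Double.NEGATIVE_INFINITY;
            for (int m = 0; m < models.size(); m++) {
                double s = models.get(m).score(word);
                if (s > bestScore) { bestScore = s; best = m; }
            }
            if (best < 0 || bestScore < threshold) { // no model fits: create a new one
                models.add(new Model());
                best = models.size() - 1;
            }
            models.get(best).update(word);
            assignment.put(word, best);
        }
        System.out.println(assignment);
    }
}

With a permissive threshold the loop collapses everything into one model; with a strict threshold it produces many small models, which mirrors the tendency towards too many clusters noted in the discussion.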
Figure 15: Word-Model assignment

I have made the approach parametric, with the following parameters:
• Induction iterations: number of induction iterations
• Random iterations: number of random iterations

Figure 16: Clustering
7. latin extended: is the word latin extended?
8. capitalized: is the word capitalized?
9. contains non-word: does the word contain a non-word?
10. is non-word: is the word a non-word?
11. number of latin letters: number of latin letters
12. number of non-latin letters: number of non-latin letters
13. vowel ratio: number of vowels divided by the word length
14. basic latin letter ratio: number of latin letters divided by the word length
15. max consonant cluster: the longest consonant cluster size in characters
16. is digit: is the word a digit?
17. is ideographic: is the word ideographic?
18. directionality: what directionality does the first character of the word have?
19. is BMP codepoint: does the word contain non-BMP characters?
20. general type: what is the general type of the first character of the word?
The last two features are based on the Java Character class. This class provides methods to check for specific implementation-based properties of characters.
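The sketch below extracts a few of these word-level features (capitalization, vowel ratio, Latin-letter ratio, ideographic, directionality, general type), using the Java Character class for the Unicode-based ones. The exact feature definitions and normalizations used in the experiments may differ; this only illustrates the kind of surface features the clustering operates on.

/**
 * Extracts a handful of the word-level features listed above.
 * Illustration only; the thesis feature set is larger and its exact
 * definitions are not reproduced here.
 */
public class WordFeatures {

    static double[] extract(String word) {
        int vowels = 0, latin = 0;
        for (int i = 0; i < word.length(); i++) {
            char c = Character.toLowerCase(word.charAt(i));
            if ("aeiou".indexOf(c) >= 0) vowels++;
            if (c >= 'a' && c <= 'z') latin++;
        }
        int first = word.codePointAt(0);
        return new double[] {
            Character.isUpperCase(word.charAt(0)) ? 1 : 0,   // capitalized
            (double) vowels / word.length(),                  // vowel ratio
            (double) latin / word.length(),                   // basic latin letter ratio
            Character.isIdeographic(first) ? 1 : 0,           // is ideographic
            Character.getDirectionality(first),               // directionality
            Character.getType(first)                          // general type
        };
    }

    public static void main(String[] args) {
        for (String w : new String[] {"Edmonton", "breuvages", "危机", "ولودیا"}) {
            System.out.println(w + " -> " + java.util.Arrays.toString(extract(w)));
        }
    }
}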
Figure 17: WEKA: Cluster visualization
Figure 18: ELKI: Cluster visualization
When two language models are merged, the value of an n-gram that occurs in both models is combined from the two values v1 and v2 with a selectable mode (an n-gram missing from one model contributes 0):
• MAX: use the maximum value (max(v1, v2))
• MIN: use the minimum value (min(v1, v2))
• MEAN: use the mean value ((v1 + v2) / 2)
• ADD: use the sum of the values (v1 + v2)
When two models are compared, the difference is initialized to 1 to avoid division by zero before the unigrams of both models are compared.
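A minimal sketch of merging two n-gram models under a selectable combination mode is given below. The map-based model representation and the method names are assumptions for illustration; the thesis implementation operates on full unigram, bigram, and trigram tables.

import java.util.*;
import java.util.function.DoubleBinaryOperator;

/**
 * Merge two n-gram models under a selectable mode (MAX, MIN, MEAN, ADD).
 * Models are simple n-gram -> value maps; a missing n-gram contributes 0.
 * Illustration only.
 */
public class ModelMerge {

    enum Mode {
        MAX((a, b) -> Math.max(a, b)),
        MIN((a, b) -> Math.min(a, b)),
        MEAN((a, b) -> (a + b) / 2.0),
        ADD((a, b) -> a + b);

        final DoubleBinaryOperator op;
        Mode(DoubleBinaryOperator op) { this.op = op; }
    }

    static Map<String, Double> merge(Map<String, Double> m1,
                                     Map<String, Double> m2,
                                     Mode mode) {
        Map<String, Double> merged = new HashMap<>();
        Set<String> grams = new HashSet<>(m1.keySet());
        grams.addAll(m2.keySet());
        for (String g : grams) {
            double v1 = m1.getOrDefault(g, 0.0);   // 0 if it does not exist
            double v2 = m2.getOrDefault(g, 0.0);
            merged.put(g, mode.op.applyAsDouble(v1, v2));
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Double> a = Map.of("_th", 3.0, "the", 5.0);
        Map<String, Double> b = Map.of("the", 1.0, "he_", 2.0);
        System.out.println(merge(a, b, Mode.MEAN));
    }
}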
Table 2: Unambiguous encoding: distances

It should be apparent from table 2 that the notion of "distance" is distorted. In comparison, table 3 shows the encoding achieved with algorithm 1.

      na  ma  ne  me
na     0   1   4   3
ma     1   0   5   4
ne     4   5   0   1
me     3   4   1   0

Table 3: Simplified encoding: distances

While this encoding is not unambiguous, it is considered sufficiently good for our purposes.
In the merge of the unigram entries, a unigram whose value is equal in both models is added to the merged model with that value; otherwise the unigrams u1 and u2 are added to the excluded entries. The same loop structure is then applied to the bigrams of both models.
Table 4: N-Gram language model results: Latin script
Table 5: N-Gram language model results: Mixed script
Table 6: N-Gram language model results: Pali data

Table 7: N-Gram language model results: Twitter data

              Rand    Jaccard  Fowlkes-Mallows  F1      F5
Twitter 1
  Baseline    0.4615  0.4615   0.6793           0.6315  0.4712
  Baseline 2  0.5384  0.0000   n/a              n/a     n/a
  NGLM        0.8589  0.5925   0.7542           0.7441  0.8757
Twitter 2
  Baseline    0.5555  0.5555   0.7453           0.7142  0.5652
  Baseline 2  0.4444  0.0000   n/a              n/a     n/a
  NGLM        0.7485  0.6090   0.7591           0.7570  0.8121
Twitter 3
  Baseline    0.6583  0.6583   0.8113           0.7939  0.6670
  Baseline 2  0.3416  0.0000   n/a              n/a     n/a
  NGLM        0.6750  0.4347   0.6479           0.6060  0.8996
Twitter 4
  Baseline    0.8750  0.8750   0.9354           0.9333  0.8792
  Baseline 2  0.1250  0.0000   n/a              n/a     n/a
  NGLM        0.7250  0.5822   0.7597           0.7360  0.9545
Twitter 5
  Baseline    0.4285  0.4285   0.6546           0.6000  0.4382
  Baseline 2  0.5714  0.0000   n/a              n/a     n/a
  NGLM        0.6666  0.1250   0.2672           0.2222  0.4561

5.2 Textcat
Table 8: Textcat results: Latin script

              Rand    Jaccard  Fowlkes-Mallows  F1      F5
German-English
  Baseline    0.9259  0.9259   0.9622           0.9615  0.9285
  Baseline 2  0.0000  0.0740   n/a              n/a     n/a
  Textcat     0.8632  0.8518   0.9200           0.9200  0.9200
German-Finnish-Turkish
  Baseline    0.3312  0.3312   0.5755           0.4976  0.3400
  Baseline 2  0.6721  0.0103   0.1015           0.0204  0.2132
  Textcat     0.4095  0.1903   0.3823           0.3198  0.2124
English-French
  Baseline    0.7038  0.7038   0.8389           0.8261  0.7119
  Baseline 2  0.3064  0.0145   0.1207           0.0287  0.2777
  Textcat     0.3890  0.3211   0.5476           0.4861  0.3411
English-Transliterated Greek
  Baseline    0.8809  0.8809   0.9385           0.9385  0.8850
  Baseline 2  0.1269  0.0090   0.0949           0.0178  0.1911
  Textcat     0.5202  0.4853   0.6678           0.6535  0.5492
Italian-German
  Baseline    0.5807  0.5807   0.7620           0.7347  0.5902
  Baseline 2  0.4227  0.0060   0.0776           0.0119  0.1360
  Textcat     0.4030  0.3057   0.5014           0.4682  0.3520
Table 9: Textcat results: Mixed script

              Rand    Jaccard  Fowlkes-Mallows  F1      F5
Greek-Russian
  Baseline    0.5578  0.5578   0.7468           0.7161  0.5674
  Baseline 2  0.4440  0.0034   0.0584           0.0068  0.0817
  Textcat     0.4468  0.2971   0.4769           0.4581  0.3644
English-Greek
  Baseline    0.9179  0.9179   0.9580           0.9571  0.9208
  Baseline 2  0.0946  0.0136   0.1167           0.0269  0.2643
  Textcat     0.5357  0.4933   0.6730           0.6607  0.5619
English-Spanish-Arabic
  Baseline    0.3354  0.3354   0.5791           0.5023  0.3442
  Baseline 2  0.6682  0.0109   0.1044           0.0215  0.2227
  Textcat     0.3956  0.2832   0.5042           0.4414  0.3052
English-Chinese
  Baseline    0.8474  0.8474   0.9205           0.9174  0.8524
  Baseline 2  0.1595  0.0082   0.0909           0.0164  0.1781
  Textcat     0.5018  0.4468   0.6251           0.6177  0.5408
Ukrainian-Russian
  Baseline    0.4950  0.4950   0.7035           0.6622  0.5048
  Baseline 2  0.5060  0.0022   0.0472           0.0044  0.0550
  Textcat     0.3787  0.2625   0.4472           0.4159  0.3105
Table 10: Textcat results: Pali data

              Rand    Jaccard  Fowlkes-Mallows  F1      F5
Pali 1
  Baseline    0.3131  0.3131   0.5595           0.4768  0.3216
  Baseline 2  0.6906  0.0118   0.1089           0.0234  0.2379
  Textcat     0.4531  0.2508   0.4849           0.4011  0.2641
Pali 2
  Baseline    0.3589  0.3589   0.5991           0.5283  0.3680
  Baseline 2  0.6495  0.0238   0.1543           0.0465  0.3880
  Textcat     0.4307  0.2745   0.5088           0.4307  0.2888
Pali 3
  Baseline    0.4947  0.4947   0.7033           0.6619  0.5045
  Baseline 2  0.5075  0.0045   0.0676           0.0091  0.1067
  Textcat     0.2032  0.0704   0.2502           0.1315  0.0736
Pali 4
  Baseline    0.4000  0.4000   0.6324           0.5714  0.4094
  Baseline 2  0.6000  0.0000   n/a              n/a     n/a
  Textcat     0.5000  0.1666   0.2886           0.2857  0.2524
Pali 5
  Baseline    0.5800  0.5800   0.7615           0.7341  0.5895
  Baseline 2  0.4236  0.0063   0.0798           0.0126  0.1430
  Textcat     0.5090  0.3458   0.5141           0.5140  0.5236
Table 11: Textcat results: Twitter data

              Rand    Jaccard  Fowlkes-Mallows  F1      F5
Twitter 1
  Baseline    0.4615  0.4615   0.6793           0.6315  0.4712
  Baseline 2  0.5384  0.0000   n/a              n/a     n/a
  Textcat     0.3736  0.2597   0.4460           0.4123  0.3049
Twitter 2
  Baseline    0.5555  0.5555   0.7453           0.7142  0.5652
  Baseline 2  0.4444  0.0000   n/a              n/a     n/a
  Textcat     0.4678  0.4347   0.6158           0.6060  0.5207
Twitter 3
  Baseline    0.6583  0.6583   0.8113           0.7939  0.6670
  Baseline 2  0.3416  0.0000   n/a              n/a     n/a
  Textcat     0.6838  0.6446   0.8011           0.7839  0.6586
Twitter 4
  Baseline    0.8750  0.8750   0.9354           0.9333  0.8792
  Baseline 2  0.1250  0.0000   n/a              n/a     n/a
  Textcat     0.8833  0.8666   0.9309           0.9285  0.8711
Twitter 5
  Baseline    0.4285  0.4285   0.6546           0.6000  0.4382
  Baseline 2  0.5714  0.0000   n/a              n/a     n/a
  Textcat     0.3333  0.3333   0.5773           0.5000  0.3421

5.3 Clustering

The first run indicates the value after one clustering step, and the second run indicates the value after applying the clustering algorithm to the results of the first run.
Table 12: Clustering results: Latin script

              Rand    Jaccard  Fowlkes-Mallows  F1      F5
German-English
  Baseline    0.9259  0.9259   0.9622           0.9615  0.9285
  Baseline 2  0.0000  0.0740   n/a              n/a     n/a
  First run   0.4102  0.3929   0.6069           0.5642  0.8549
  Second run  0.2336  0.1970   0.4199           0.3291  0.7712
German-Finnish-Turkish
  Baseline    0.3312  0.3312   0.5755           0.4976  0.3400
  Baseline 2  0.6721  0.0103   0.1015           0.0204  0.2132
  First run   0.4841  0.1764   0.3369           0.2998  0.2110
  Second run  0.6259  0.1611   0.2840           0.2775  0.2320
English-French
  Baseline    0.7038  0.7038   0.8389           0.8261  0.7119
  Baseline 2  0.3064  0.0145   0.1207           0.0287  0.2777
  First run   0.4051  0.2980   0.5001           0.4592  0.3362
  Second run  0.4601  0.1836   0.3116           0.3103  0.2857
English-Transliterated Greek
  Baseline    0.8809  0.8809   0.9385           0.9385  0.8850
  Baseline 2  0.1269  0.0090   0.0949           0.0178  0.1911
  First run   0.5867  0.3977   0.5725           0.5691  0.6320
  Second run  0.5423  0.3161   0.4909           0.4804  0.5934
Italian-German
  Baseline    0.5807  0.5807   0.7620           0.7347  0.5902
  Baseline 2  0.4227  0.0060   0.0776           0.0119  0.1360
  First run   0.4222  0.2838   0.4640           0.4421  0.3453
  Second run  0.4915  0.2472   0.4006           0.3964  0.3499
Table 13: Clustering results: Mixed script

              Rand    Jaccard  Fowlkes-Mallows  F1      F5
Greek-Russian
  Baseline    0.5578  0.5578   0.7468           0.7161  0.5674
  Baseline 2  0.4440  0.0034   0.0584           0.0068  0.0817
  First run   0.5787  0.3811   0.5672           0.5519  0.4549
  Second run  0.7536  0.3883   0.5899           0.4494  0.7914
English-Greek
  Baseline    0.9179  0.9179   0.9580           0.9571  0.9208
  Baseline 2  0.0946  0.0136   0.1167           0.0269  0.2643
  First run   0.4244  0.2482   0.4015           0.3977  0.4553
  Second run  0.3705  0.0855   0.1784           0.1576  0.2777
English-Spanish-Arabic
  Baseline    0.3354  0.3354   0.5791           0.5023  0.3442
  Baseline 2  0.6682  0.0109   0.1044           0.0215  0.2227
  First run   0.8016  0.5650   0.7400           0.7221  0.6008
  Second run  0.7226  0.2860   0.4495           0.4448  0.5130
English-Chinese
  Baseline    0.8474  0.8474   0.9205           0.9174  0.8524
  Baseline 2  0.1595  0.0082   0.0909           0.0164  0.1781
  First run   0.5480  0.3356   0.5087           0.5025  0.5866
  Second run  0.5138  0.2584   0.4361           0.4107  0.5957
Ukrainian-Russian
  Baseline    0.4950  0.4950   0.7035           0.6622  0.5048
  Baseline 2  0.5060  0.0022   0.0472           0.0044  0.0550
  First run   0.5867  0.1953   0.3268           0.3267  0.3305
  Second run  0.5934  0.1154   0.2178           0.2070  0.2907
Table 14: Clustering results: Pali data

              Rand    Jaccard  Fowlkes-Mallows  F1      F5
Pali 1
  Baseline    0.3131  0.3131   0.5595           0.4768  0.3216
  Baseline 2  0.6906  0.0118   0.1089           0.0234  0.2379
  First run   0.4674  0.2540   0.4898           0.4051  0.2666
  Second run  0.7168  0.2547   0.4118           0.4060  0.3516
Pali 2
  Baseline    0.3589  0.3589   0.5991           0.5283  0.3680
  Baseline 2  0.6495  0.0238   0.1543           0.0465  0.3880
  First run   0.6738  0.3026   0.4777           0.4646  0.3825
  Second run  0.6646  0.1865   0.3147           0.3144  0.3021
Pali 3
  Baseline    0.4947  0.4947   0.7033           0.6619  0.5045
  Baseline 2  0.5075  0.0045   0.0676           0.0091  0.1067
  First run   0.5686  0.0746   0.2002           0.1389  0.0831
  Second run  0.7534  0.0911   0.1962           0.1670  0.1125
Pali 4
  Baseline    0.4000  0.4000   0.6324           0.5714  0.4094
  Baseline 2  0.6000  0.0000   n/a              n/a     n/a
  First run   0.5333  0.3000   0.5477           0.4615  0.3083
  Second run  0.3000  0.3000   0.5477           0.4615  0.3083
Pali 5
  Baseline    0.5800  0.5800   0.7615           0.7341  0.5895
  Baseline 2  0.4236  0.0063   0.0798           0.0126  0.1430
  First run   0.5294  0.2472   0.4111           0.3965  0.5242
  Second run  0.4666  0.1214   0.2524           0.2166  0.4117
Table 15: Clustering results: Twitter data

              Rand    Jaccard  Fowlkes-Mallows  F1      F5
Twitter 1
  Baseline    0.4615  0.4615   0.6793           0.6315  0.4712
  Baseline 2  0.5384  0.0000   n/a              n/a     n/a
  First run   0.8681  0.7142   0.8451           0.8333  0.7222
  Second run  0.8461  0.6000   0.7745           0.7499  0.9750
Twitter 2
  Baseline    0.5555  0.5555   0.7453           0.7142  0.5652
  Baseline 2  0.4444  0.0000   n/a              n/a     n/a
  First run   0.4575  0.3941   0.5655           0.5654  0.5573
  Second run  0.4967  0.3888   0.5615           0.5600  0.6012
Twitter 3
  Baseline    0.6583  0.6583   0.8113           0.7939  0.6670
  Baseline 2  0.3416  0.0000   n/a              n/a     n/a
  First run   0.4571  0.3595   0.5525           0.5289  0.7215
  Second run  0.3523  0.2093   0.3997           0.3461  0.6428
Twitter 4
  Baseline    0.8750  0.8750   0.9354           0.9333  0.8792
  Baseline 2  0.1250  0.0000   n/a              n/a     n/a
  First run   0.9019  0.8584   0.9265           0.9238  0.8631
  Second run  0.6250  0.5000   0.6789           0.6666  0.8080
Twitter 5
  Baseline    0.4285  0.4285   0.6546           0.6000  0.4382
  Baseline 2  0.5714  0.0000   n/a              n/a     n/a
  First run   0.7142  0.4666   0.6831           0.6363  0.4764
  Second run  0.5714  0.3076   0.4780           0.4705  0.4046

5.4 Language model induction

In addition to highlighting results that outperform the baseline values, the following tables have been color coded. Results that outperform the clustering algorithm are indicated in red and results that outperform both the clustering algorithm and the n-gram language model are indicated in blue.7
Table 16: Induction results: Latin script

              Rand    Jaccard  Fowlkes-Mallows  F1      F5
German-English
  Baseline    0.9259  0.9259   0.9622           0.9615  0.9285
  Baseline 2  0.0000  0.0740   n/a              n/a     n/a
  Inducted    0.6837  0.6574   0.7988           0.7932  0.8896
German-Finnish-Turkish
  Baseline    0.3312  0.3312   0.5755           0.4976  0.3400
  Baseline 2  0.6721  0.0103   0.1015           0.0204  0.2132
  Inducted    0.6438  0.1771   0.3057           0.3009  0.2588
English-French
  Baseline    0.7038  0.7038   0.8389           0.8261  0.7119
  Baseline 2  0.3064  0.0145   0.1207           0.0287  0.2777
  Inducted    0.6171  0.2835   0.4427           0.4418  0.4692
English-Transliterated Greek
  Baseline    0.8809  0.8809   0.9385           0.9385  0.8850
  Baseline 2  0.1269  0.0090   0.0949           0.0178  0.1911
  Inducted    0.4436  0.2398   0.4277           0.3868  0.6382
Italian-German
  Baseline    0.5807  0.5807   0.7620           0.7347  0.5902
  Baseline 2  0.4227  0.0060   0.0776           0.0119  0.1360
  Inducted    0.5658  0.1536   0.2871           0.2664  0.4065

7 Results that outperform only the n-gram language model would have been indicated in green, but there is no score that outperforms only the n-gram language model.
Table 17: Induction results: Mixed script

              Rand    Jaccard  Fowlkes-Mallows  F1      F5
Greek-Russian
  Baseline    0.5578  0.5578   0.7468           0.7161  0.5674
  Baseline 2  0.4440  0.0034   0.0584           0.0068  0.0817
  Inducted    0.7142  0.4222   0.5940           0.5937  0.6125
English-Greek
  Baseline    0.9179  0.9179   0.9580           0.9571  0.9208
  Baseline 2  0.0946  0.0136   0.1167           0.0269  0.2643
  Inducted    0.4769  0.3266   0.5089           0.4924  0.6423
English-Spanish-Arabic
  Baseline    0.3354  0.3354   0.5791           0.5023  0.3442
  Baseline 2  0.6682  0.0109   0.1044           0.0215  0.2227
  Inducted    0.7783  0.5677   0.7534           0.7242  0.5773
English-Chinese
  Baseline    0.8474  0.8474   0.9205           0.9174  0.8524
  Baseline 2  0.1595  0.0082   0.0909           0.0164  0.1781
  Inducted    0.5657  0.3343   0.5258           0.5011  0.6953
Ukrainian-Russian
  Baseline    0.4950  0.4950   0.7035           0.6622  0.5048
  Baseline 2  0.5060  0.0022   0.0472           0.0044  0.0550
  Inducted    0.6289  0.1000   0.1935           0.1818  0.2659
Table 18: Induction results: Pali data

              Rand    Jaccard  Fowlkes-Mallows  F1      F5
Pali 1
  Baseline    0.3131  0.3131   0.5595           0.4768  0.3216
  Baseline 2  0.6906  0.0118   0.1089           0.0234  0.2379
  Inducted    0.7856  0.1683   0.2898           0.2882  0.3188
Pali 2
  Baseline    0.3589  0.3589   0.5991           0.5283  0.3680
  Baseline 2  0.6495  0.0238   0.1543           0.0465  0.3880
  Inducted    0.8148  0.5000   0.6686           0.6666  0.7176
Pali 3
  Baseline    0.4947  0.4947   0.7033           0.6619  0.5045
  Baseline 2  0.5075  0.0045   0.0676           0.0091  0.1067
  Inducted    0.8492  0.0569   0.1083           0.1078  0.1186
Pali 4
  Baseline    0.4000  0.4000   0.6324           0.5714  0.4094
  Baseline 2  0.6000  0.0000   n/a              n/a     n/a
  Inducted    0.6000  0.0000   0.0000           n/a     n/a
Pali 5
  Baseline    0.5800  0.5800   0.7615           0.7341  0.5895
  Baseline 2  0.4236  0.0063   0.0798           0.0126  0.1430
  Inducted    0.4033  0.2082   0.3504           0.3446  0.4134
Table 19: Induction results: Twitter data

              Rand    Jaccard  Fowlkes-Mallows  F1      F5
Twitter 1
  Baseline    0.4615  0.4615   0.6793           0.6315  0.4712
  Baseline 2  0.5384  0.0000   n/a              n/a     n/a
  Inducted    0.6282  0.3695   0.5515           0.5396  0.4533
Twitter 2
  Baseline    0.5555  0.5555   0.7453           0.7142  0.5652
  Baseline 2  0.4444  0.0000   n/a              n/a     n/a
  Inducted    0.7719  0.6020   0.7687           0.7515  0.9325
Twitter 3
  Baseline    0.6583  0.6583   0.8113           0.7939  0.6670
  Baseline 2  0.3416  0.0000   n/a              n/a     n/a
  Inducted    0.5916  0.3000   0.5236           0.4615  0.8185
Twitter 4
  Baseline    0.8750  0.8750   0.9354           0.9333  0.8792
  Baseline 2  0.1250  0.0000   n/a              n/a     n/a
  Inducted    0.5250  0.3736   0.5615           0.5439  0.7055
Twitter 5
  Baseline    0.4285  0.4285   0.6546           0.6000  0.4382
  Baseline 2  0.5714  0.0000   n/a              n/a     n/a
  Inducted    1.0000  1.0000   1.0000           1.0000  1.0000
Table 21: 'Twitter 4': Textcat versus gold clustering

Cluster 1   Textcat: strawberries
            Gold standard: żubrówka
Cluster 2   Textcat: my, dad, comes, back, from, poland, with, two, crates, of, żubrówka, and, adidas, jackets, omg
            Gold standard: my, dad, comes, back, from, poland, with, two, crates, of, strawberries, and, adidas, jackets, omg
8 Appendix

8.1 Development data

8.1.1 Latin script data

Karl Marx anses som en af de fire klassiske sociologer. Marx er epokegørende for den historiske videnskab. Og Marx spillede en vigtig rolle for den samtidige og efterfølgende arbejderbevægelse.
1891, nach einer Tuberkuloseerkrankung Hopes, eröffnete das Ehepaar ein mod-
ernes Lungensanatorium in Nordrach im Schwarzwald, das sie bis 1893 gemeinsam
ührten. 1895 wurde die Ehe geschieden.
Sources:
https://da.wikipedia.org/wiki/Karl_Marx
https://de.wikipedia.org/wiki/Hope_Bridges_Adams_Lehmann
شد. مشهور لنین اسم به دنیا در ولی بود لیانوف او ایلیچ ولادمیر او اصلی نام است ولادمیر مخفف که کردند می خطاب ولودیا ا ر او سیمبریسک در مرفه اده خانو یک در پاریس، کمون از قبل سال یک یعنی ۱۸۷۰ سال در که بود لیانوف او اده خانو فرزند شش از فرزند سومین یک پدرش گردید. متولد ٓامد در نوفسک اولیاء نام به بزرگی شهر صورت به بعدها ولی نبود بیش شهرکی زمان ٓان در که ولگا رود ساحل در تفکر طرز و المانیها به عمر مدت تمام در لنین جهت همین وبه بود المانی پزشک یک دختر مادرش و ریاضی معلم و ال لیبر ای بورژو خرده در ولی داشت درخشانی استدلال قوه و بود خوبی شاگرد دبیرستان در ولودیا نگریست. می اغماض دیده به بود ٓان لود مو مارکس که المانی بود. موذی ای بچه حال عین Sources: hps://en.wikipedia.org/wiki/Capitalism hps://fa.wikipedia.org/wiki/ولادیمیر_لنین8.1.3 Twitter data
Twitter 1 »Fallo ergo sum«: On being wrong.
Source:
Roland Hieber (daniel_bohrer). "»Fallo ergo sum«: On being wrong. ". 26 July 2015,
16:47. Tweet.
Der Sommer ist die wärmste der vier Jahreszeiten in der gemäßigten und arktischen Klimazone. Je nachdem, ob er gerade auf der Nord-oder Südhalbkugel herrscht, spricht man vom Nord-oder Südsommer. Der Nordsommer findet gleichzeitig mit dem Südwinter sta.Kesä eli suvi on vuodenaika kevään ja syksyn välissä. Kesä on vuodenajoista lämpimin, koska maapallo on silloin kallistunut niin, eä aurinko säteilee maan pinnalle jyrkemmässä kulmassa kuin muina vuodenaikoina. Pohjoisella pallonpuoliskolla kesäkuukausiksi lasketaan tavallisesti kesä-. heinä-ja elokuu, eteläisellä pallonpuoliskolla joulu-, tammi-ja helmikuu.Yaz, en sıcak mevsimdir. Kuzey Yarım Küre'de en uzun günler yazda gerçekleşir. Dünya ısıyı depo eiği için en sıcak günler genellikle yaklaşık iki ay sonra ortaya çıkar. Sıcak günler Kuzey Yarım Küre'de 21 Haziran ile 22 Eylül arasında, Güney Yarım Küre'de ise 22 Aralık ile 21 Mart arasındadır.Source: hps://fi.wikipedia.org/wiki/Kesä hps://de.wikipedia.org/wiki/Sommer hps://tr.wikipedia.org/wiki/Yaz
و كلمات لاية الحاجة دون احدة و بنظرة رسالتها تنقل ٔان ينبغي العلامة ٔان ف وعموما معين شيء عن يعبر الذي الرسم يعني الرمز هم العلامات استخدم من ٔاكثر ولكن العلامات ا ٔاستخدمو اغريق ٔ ال و المصريين قدماء ٔان المعروف Source: hps://es.wikipedia.org/?title=Lazo_negro hps://en.wikipedia.org/wiki/Black_ribbon hps://ar.wikipedia.org/wiki/رمز English -Chinese -(Pinyin) e Chinese word for "crisis" (simplified Chinese: 危 机; traditional Chinese: 危機; pinyin: wēijī) is frequently invoked in Western motivational speaking because the word is composed of two Chinese characters that can represent "danger" and "opportunity". Some linguists have criticized this usage because the component pronounced jī (simplified Chinese: 机; traditional Chinese: 機) has other meanings besides "opportunity". In Chinese tradition, certain numbers are believed by some to be auspicious (吉利) or inauspicious (不利) based on the Chinese word that the number name sounds similar to. e numbers 0, 6, 8, and 9 are believed to have auspicious meanings because their names sound similar to words that have positive meanings. Source: hps://en.wikipedia.org/w/index.php?title=Chinese_word_for_"crisis" Ukrainian -Russian Віддавна на території України існували держави скіфів, сарматів, готів та інших народів, але відправним пунктом української державності й культури вважається Київська Русь 9-13 століття. На юге омывается водами Чёрного и Азовского морей. Имеет сухопутную границу с Россией, Белоруссией, Польшей, Словакией, Венгрией, Румынией и Молдавией. Source: hps://uk.wikipedia.org/wiki/Україна Surgut-safari.ru, (2015): "Страны -Safari Tour". Tweet 1: Greek -English Μόλις ψήφισα αυτή τη λύση Internet of ings, στο διαγωνισμό BUSINESS IT EXCELLENCE. Source: GaloTyri. "Μόλις ψήφισα αυτή τη λύση Internet of ings, στο διαγωνισμό BUSINESS IT EXCELLENCE. ". 19 June 2015, 12:06. Tweet Tweet 2: English -Fren Demain #dhiha6 Keynote 18h @dhiparis "e collective dynamics of science-publish or perish; is it all that counts?" par David @chavalarias Source: Claudine Moulin (ClaudineMoulin). "Demain #dhiha6 Keynote 18h @dhiparis "e collective dynamics of science-publish or perish; is it all that counts?" par David @chavalarias". 10 June 2015, 17:35. Tweet. Tweet 3: English -Fren Food and breuvages in Edmonton are ready to go, just waiting for the fans #FWWC2015 #bilingualism Source: HBS (HBS_Tweets). "Food and breuvages in Edmonton are ready to go, just waiting for the fans #FWWC2015 #bilingualism". 6 June 2015, 23:29. Tweet. Tweet 4: English -Polish my dad comes back from poland with two crates of strawberries, żubrówka and adidas jackets omg Source: katarzyne (wifeyriddim). "my dad comes back from poland with two crates of strawberries, żubrówka and adidas jackets omg". 8 June 2015, 08:49. Tweet. Tweet 5: Transliterated Amharic -English Buna dabo naw (coffee is our bread). Source: eCodeswitcher. "Buna dabo naw (coffee is our bread). ". 9 June 2015, 02:12. Tweet. 8.2.4 Pali dictionary data All entries have been taken from the Pali Text Society's Pali-English dictionary (T. W. Rhys Davids, William Stede, editors, e Pali Text Society's Pali-English dictionary. Chipstead: Pali Text Society, 1921-5. 8 parts [738 pp.].) abbha (nt.) [Vedic abhra nt. & later Sk. abhra m. "dark cloud"; Idg. *m̊bhro, cp. Gr. <at>a)fro\\s</at> scum, froth, Lat. imber rain; also Sk. ambha water, Gr. <at>o)/mbros</at> rain, Oir ambu water]. 
A (dense & dark) cloud, a cloudy mass A <smallcaps>ii.</smallcaps> 53 = Vin <smallcaps>ii.</smallcaps> 295 = Miln 273 in list of to things that obscure moon-& sunshine, viz. <b>abbhaŋ mahikā</b> (mahiyā A) <b>dhū-marajo</b> (megho Miln), <b>Rāhu</b> . is list is referred to at SnA 487 & VvA 134. S <smallcaps>i.</smallcaps> 101 (°sama pabbata a mountain like a thunder-cloud); J <smallcaps>vi.</smallcaps> 581 (abbhaŋ rajo acchādesi); Pv <smallcaps>iv.</smallcaps> 3 <superscript>9</superscript> (nīl°= nīla-megha PvA 251). As f. <b>abbhā</b> at Dhs 617 & DhsA 317 (used in sense of adj. "dull"; DhsA expl <superscript>s.</superscript> by valāhaka); perhaps also in <b>abbhāmaa</b> . <br /><b>-kūṭa</b> the point or summit of a storm-cloud 1, 1064; J <smallcaps>vi.</smallcaps> 249, 250; Vv 1 <superscript>1</superscript> (= valāhaka-sikhara VvA 12). <b>-ghana</b> a mass of clouds, a thick cloud It 64; Sn 348 (cp. SnA 348). <b>-paṭala</b> a mass of clouds DhsA 239. <b>-mua</b> free from clouds Sn 687 (also as abbhāmua Dh 382). <b>-saŋvilāpa</b> thundering S <smallcaps>iv.</smallcaps> 289. abhijjhitar [n. ag. fr. abhijjhita in med. function] one who covets M <smallcaps>i.</smallcaps> 287 (T. abhijjhātar, v. l.°itar) = A <smallcaps>v.</smallcaps> 265 (T.°itar, v. l.°ātar). ajja Ajja,& Ajjā (adv.)[Vedic adya & adyā,a + dyā,a°being base of demonstr. pron. (see a3)and dyā an old Loc. of dyaus (see diva) ,thus "on this day"] to-day,now Sn.75,153,158,970,998;Dh.326;J.I,279;III,425 (read bahutaṁ ajjā;not with Kern,Toev. s. v. as "food" ) ;Pv.I,117 (= idāni PvA.59) ;PvA.6, 23;Mhvs 15,64. ‹-› Freq. in phrase ajjatagge (= ajjato + agge(?)or ajja-tagge, see agga3)from this day onward,henceforth Vin.I,18;D.I,85;DA.I,235. -kālaṁ (adv.)this morning J.VI,180;-divasa the present day Mhvs 32,23. (Page 10) gūhanā Gūhanā, (f.)[abstr.fr.gūhati]=gūhanā (q.v.)Pug.19.Cp. pari°. (Page 253) pacati Pacati,[Ved.pacati,Idg.*peqǔō,Av.pac-;Obulg.peka to fry,roast, Lith,kepū bake,Gr.pέssw cook,pέpwn ripe] to cook,boil,roast Vin.IV,264; fig.torment in purgatory (trs.and intrs. ) :Niraye pacitvā aer roasting in N.S. II,225,PvA.10,14.-ppr.pacanto tormenting,Gen.pacato (+Caus. pācayato)D.I,52 (expld at DA.I,159,where read pacato for paccato,by pare daṇḍena pīḷentassa) .-pp.pakka (q.v. ) .‹-› Caus.pacāpeti & pāceti (q.v. ) . -Pass.paccati to be roasted or tormented (q.v. ) . (Page 382)8.2.3 Twitter data
Data:Latin script: German -English• (EN) own., belly, refer, buon, But, it, or, your, at, in, "staring, anyone, doesn't, else's, word, this • (FI) -• (FR) case, just, means, navel". Data: Latin script: German -Finnish -Turkish • (DE) ob, oder, Sommer, und, Nord-, arktischen, der, Der, dem, gemäßigten, mit, er, Südsommer., spricht, Jahreszeiten, Südwinter, herrscht, wärmste, vom, die, sta., nachdem, auf • (EN) ist, Nordsommer, Mart, in • (ES) en, depo • (FI) joulu-, kevään, suvi, on, eli, vuodenajoista, syksyn, koska, kesä-., kuin, Pohjoisella, man, helmikuu., tammi-, lämpimin, heinä-, niin, maapallo, maan, pinnalle, Kesä, säteilee, tavallisesti, vuodenaika, kallistunut, lasketaan, muina, eiği, jyrkemmässä, elokuu, välissä., eä, eteläisellä, silloin, ja, kulmassa • (TR) yaklaşık, ortaya, genellikle, Eylül, Sıcak, çıkar., Yaz, sonra, arasında, Kuzey, Güney, Aralık, gerade, ısıyı, gerçekleşir., Küre'de, günler, için, findet, mevsimdir., arasındadır., Haziran, iki, yazda, uzun, ise, ay, sıcak, ile, Yarım, Dünya • (TrAM) Der • (other) Klimazone., gleichzeitig,kesäkuukausiksi, vuodenaikoina., pallonpuoliskolla,Südhalbkugel Data: Latin script: English -French • (EL) "coarse"• (EN) but, both, for, while, wines, almost, sweet, of, although, only, is, "rough", used)., or, as, meaning, the, in, translate, "hard"., their, English, also, different., veryAbbreviation
Language
AR
Arabic
DE
German
EL
Greek
EN
English
ES
Spanish
FI
Finnish
FR
French
IT
Italian
PL
Polish
RU
Russian
UK
Ukrainian
TR
Turkish
TrAM
Transliterated Amharic
TrEL
Transliterated Greek
ZH
Chinese
• (TrAM) e
• (TrEL) to, German
• (other) Nabelschau, "navel-gazing"
• (FR) vier, Je
• (PL) aurinko
• (RU) 22, 21
• (ES) can
• (FI) mean
• (FR) opposite, Doux, doux, sucré, :
• (RU) "so"
• (TrEL) mou
• (other) (otherwise, (rugueux)
Abbreviation  Language
DA            Danish
DE            German
EL            Greek
EN            English
ES            Spanish
FI            Finnish
FR            French
HU            Hungarian
ID            Indonesian
IT            Italian
LT            Lithuanian
LV            Latvian
NL            Dutch
PT            Portuguese
RU            Russian
TH            Thai
ZH            Chinese

Data: Latin script: German - English
• (HU) "navel-gazing"
• (ZH) Nabelschau
• (unknown) rest
Data: Latin script: German -Finnish -Turkish
• (DA) Südsommer., genellikle,
• (DE) Jahreszeiten, arktischen,
• (FI) vuodenajoista, kallistunut, tavallisesti,
Data: Twitter 4 (English-Polish)
• (LV) strawberries,
• (unknown) rest
Data: Twitter 5 (Transliterated Amharic-English)
• (unknown) rest
., vuodenajoista, yazda Second run • Südhalbkugel, Südsommer., Südwinter, arasında, gemäßigten, kesäkuukausiksi, lämpimin, säteilee, wärmste • Dünya, Güney, Küre'de, Sıcak, günler, için, sıcak, çıkar., Der • arasındadır., eteläisellä, eiği, eä, gerçekleşir., heinä-, jyrkemmässä, kesä-., kevään, välissä., yaklaşık, ısıyı • Aralık, Eylül, Yarım • Der, Haziran, Jahreszeiten, Klimazone., Kuzey, Mart, Nord-, Nordsommer, Pohjoisella, Sommer, Yaz,• Kesä
• 22
• 21
)., very, while, wines Data: Latin script: English -Transliterated Greek• or
• is
• in
• of
• as
First run
• e
• agápe, philía, storgē., éros,
The authors do not explicitly list the languages clustered, except for two-letter abbreviations which seem to correspond to ISO 639-1. The languages under investigation could have been Vietnamese, German, Farsi, French, Japanese, Spanish, Korean, English, Tamil, and 'ma', though it is impossible to tell.
2 http://www.un.org/en/documents/udhr/
http://medialab.di.unipi.it/wiki/Wikipedia_Extractor 4 https://www.mediawiki.org/wiki/Help:Formatting
The full list can be found in the documentation of the Java Character class: http://docs.oracle.com/javase/7/docs/api/java/lang/Character.html
The figures shown are used for illustration purposes only and do not necessarily reflect real language models.
http://web-corpora.net/GreekCorpus/ 9 http://corpus.byu.edu/coca/
The comma has the Unicode codepoint U+FF0C (FULLWIDTH COMMA) and the dot has the Unicode codepoint U+FF0E (FULLWIDTH FULL STOP).
The question of what is to be considered 'enough' or 'adequate' is another point of contention; the data always influences the resulting models.
Data: Mixed script: English - Spanish - Arabic
• (EN) for, used, been, displaying, of, ribbon, black, or, mourning., statement., tragedies, is, political, a, Wearing, as, mourning
• (ES) por, has, crespón, sociedades, personas, sentimiento, representando, estados, de, El, señal, lazo, símbolo, en, utilizado, y
Data: Latin script: English - Transliterated Greek
• (EN) for, meanings, least, used, been, distinct, love, of, were, are, when, agápe, these, how, and, Greek, word, used., outside, ways, different, other, follows., words, respective, generally, However, is, with, it, at, as, historically, the, in, which, their
• (ES) has, separate
• (FR) language, senses, Ancient, languages, difficult, four
• (IT) contexts.
• (TrAM) éros, e, love:
• (TrEL) to, storgē., philía
• (other) Nonetheless, distinguishes
Data: Latin script: Italian - German
• (DE) drohe., geben., allgemeiner, Studie, jüngst, ür, Ergebnis, keine, kam, drohenden, oder, und, letzter, neue, Mythos?, Deutschland, Ist, sich, der, vergeht, studierà, Dabei, Studie, den, dem, auch, Entwarnung, dass, nur, eher, nicht, gibt., Umfrage, Woche, eine, Kaum, Jahren, bei, mehren, Stimmen, Deutsche, das, zum, mehr", angemahnte, "ein, Zeit, ein, So, vom, zu, die, seit, Warnung, Wissenscha
• (EL) affrescò,
• (EN) moto, aento, a, in, ad, also
• (ES) custodisce, cura, subito, Certo, Giuda, lo, del, difesa, con, definire, restauro, se, modo., la, arginato., recente, vada, movimento, Leonardo, Szenario, quel, cominciò
• (FI) va, si, Baista, ema
• (FR) l'esempio, non, des, acque, perché, un, es, le, sui, condanna
• (IT) solo, faceva, caurata, chiave, peccato), periture, (il, delicatezza, cancro, privato, bellissima, anni, bacini, ovvero, delle, sogno, di, barbaglio, ma, qualche, e, amore, ricerche, Come, per, richiamano, ne, intuizioni, punte, occhio, struggente:, nelle, vita, riccioli, solo, che, volare?, sono, alla, alle, anche, Cenacolo, quello, cosa, ali, viene, il, psicologia, vinciano, Venezia
• (TrEL) poi, idee, stessa
• (other) MINT-Berufen, Fachkräemangel, dell'angelo:, consapevole, anti-Turchi., Annunciazione, lunghissimo, consapevolezza, ossessionava, dell'aureola, approfonditamente, autodistruggersi, rivoluzionaria, Stierverbands, all'insù, Naturwissenschalern, Ingenieuren
Data: Mixed script: Greek - Russian
• (EL) κείμενα, βαλκανικό, από, το, αιώνα, Αποτελεί, ελληνική, μία, επίσης, στον, γλωσσικό, γλωσσών., είναι, Στην, έχουμε, μέλος, ανεξάρτητου, τις, γλώσσες., 15ο, Ανήκει, γραπτά, π.Χ., σήμερα., γλώσσα, γλώσσα, κλάδου, οικογένειας, τον, της, δεσμό., μέχρι, μοναδικό, ενός
• (RU) слов., с, богатейшая, образованного, человека., этапах, значительное, знание, научных, лексика)., называемая, технических, источником, стал, латинских, существования, слова, греческом, всех, -, В, романских, новых, Римской, и, проникали, в, греческие, терминов, присутствует, греческих, новое, русский, империи, латинском, литература., создана, создания, путями, основном, язык., язык, (так, его, количество, считалось, обязательным, время, двумя, была, греческого, большое, языке, языка
• (TrAM) Η
• (UK) лексику, (наряду, через, всякого, а, На, для, на
• (other) ινδοευρωπαϊκές, ινδοευρωπαϊκής, латинским), международную, международная, церковнославянский, заимствований, древнегреческий
Data: Mixed script: English - Greek
• (DE) Symposium, Modern, being, felt
• (EL) "Form"
• (EN) sensually, platonic, for, holding, existence;, refined, its, explained, araction, of, (even, are, spiritual, given, refer, Agape, beauty, or, araction. ", like, without, not, further, will, own, love, knowledge, will, one's, most, use, express, is, another. ", e, leads, truth, suggesting, dating, relationships, inspired, "love, mostly, hence, definition:, regard., appreciation, a, ideal, us, helps, seek, Agápe, plane, recall, feeling, within, returned, chapter, ", based, described, apply, physical, Although, good, by, used, love, God. ", children., his, any, charity;, Socrates, be, work, throughout, and, that, Greek, even, word, agápē), love. ", known, biblical, feelings, does, famous, In, subject, becomes, one, understanding, children, "love, through, beauty, well, It, was, initially, feast., finding, itself., 13, all, "without, feel, with, is, it, thus, New, as, the, brotherly, in, is, an, there, God, youthful, necessary, high, Lovers, also, Whether
• (ES) person, Aquinas, esp., continues, has, omas, truth, can, erotic, sexual, desire
• (FI) on, -, man, mean
• (FR) (ἀγάπη, spouse, not, ancient, marriage., soul, person, content, Christians, Testament, Éros, just, part, type, passage, means, humans, passion. ", aspires, contemplation, contributes, argue, affection
• (TrAM) érōs), "love:
• (TrEL) "erotas", denote, eros., to, eros
• (other) non-corporeal, Corinthians, self-benefit)., benevolence., unconditional, philosophers, transcendence.
• (FR) remembrance, remembrance, un, es
• (IT) negro, duelo., POW/MIA
• (other) político-social, organizaciones
Data: Mixed script: English - Chinese
• (DE) 机;, Chinese:, Western
• (EL) 機)
• (EN) Some, for, meanings, by, of, are, 8, positive, speaking, be, composed, or, meanings., tradition, number, and, that, sound, linguists, word, some, this, other, In, have, invoked, criticized, 6, because, e, believed, words, numbers, sounds, frequently, is, pronounced, besides, traditional, the, in, represent, two, motivational, usage, their, based
• (ES) 危機;, has, 危机;, can, Chinese, "crisis", similar
• (FI) on
• (FR) (吉利), component, "danger", characters, (不利), certain, jī
• (PL) pinyin:
• (TrEL) to, to., names, name
• (other) inauspicious, "opportunity"., (simplified, auspicious
Data: Mixed script: Ukrainian - Russian
• (RU) Польшей, Румынией, Венгрией, юге, границу, с, омывается, Имеет, 9 -13, Молдавией., Азовского, водами, Россией, Чёрного, Русь, и, пунктом, Словакией
• (TrAM) й
• (UK) держави, скіфів, України, народів, На, державності, вважається, відправним, території, української, готів, культури, але, сарматів, існували, століття., Київська, на, Віддавна, інших, та, морей.
• (other) сухопутную, Белоруссией
Data: Pali: abbha
• (AR) ., 134., 289.
• (DE) Miln), imber, dark), Miln
• (EL) (=, (abbhaŋ
• (EN) water, mountain, of, free, (used, or, like, referred, (also, A, is, cloudy, clouds, later, a, froth, 1, summit, thundering, by, mass, Pv, Oir, obscure, scum, that, water]., thick, As, from, It, is, at, as, the, in, clouds, things, also
• (ES) (dense, f., sense, expl, rajo
• (FI) 239., rain;, Lat., Vin, perhaps, SnA
• (FR) cloud, Dh, adj., point, cloud, Dhs, A), rain, VvA, DhsA, list
• (IT) \"dark, &, ambha, 3, 1, 317, J, sunshine, cp., abhra, [Vedic, (megho
• (PL) 487, =, S, 295, <br, moon-, 249
• (RU) 348, 53
• (TR) viz., ambu, Vv
• (TrAM) 687, PvA, (°sama, 101, (nīl°, (cp., 64;, (nt.), 581, m., Sn, 1064;
• (TrEL) , Gr., Sk., Idg., to, pabbata, nt.
• (UK) 12)., 273, 617, 348)., 250;, 251)., 382).
• (other) <b> -saŋvilāpa </b>, <b> -mua </b>, <smallcaps> vi. </smallcaps>, (mahiyā, <smallcaps> iv. </smallcaps>, cloud\";, <b> Rāhu </b>, <b> abbhā </b>, <b> abbhaŋ, <superscript> 9 </superscript>, marajo </b>, abbhāmua, valāhaka);, <smallcaps> i. </smallcaps>, <b> abbhāmaa </b>, valāhaka-sikhara, <superscript> s. </superscript>, <smallcaps> ii. </smallcaps>, <b> dhū-, storm-cloud, /><b> -kūṭa </b>, thunder-cloud);, <at>a)fro\\s</at>, <b>-paṭala</b>, <at>o)/mbros</at>, nīla-megha, <superscript>1</superscript>, *m̊bhro, \"dull\";, acchādesi);, mahikā</b>, <b> -ghana </b>
Data: Pali: abhijjhitar
• (DE) v.
• (EN) A, one, in, who, covets, med., function]
• (IT) ag., M, fr.
• (PL) 287, =
• (RU) 265
• (TrAM) l., [n.
• (TrEL) (T.
• (other) <smallcaps> v. </smallcaps>, abhijjhātar, abhijjhita,°ātar)., <smallcaps> i. </smallcaps>,°itar,°itar)
Data: Pali: ajja
• (DE) (see, v., being, Ajjā
• (EN) of, or, and, not, present, Freq., day, this, "on, from, adyā,a, with, as, the, morning, in, day"], an
• (ES) bahutaṁ,
• (FI) 32,23., ajjato
• (FR) Loc., dyaus, 15,64., dyā, pron.
• (RU) III,425, agge(?)
• (TR) old, adya, 10), idāni
• (other) onward,henceforth, ajjā;, DA.I,235., (adv.), J.I,279;, D.I,85;, ajja-tagge,see, Sn.75,153,158,970,998;, J.VI,180;, PvA.6,23;, -kālaṁ, diva) ,thus, PvA.59) ;, agga3), Kern,Toev., Pv.I,117, Dh.326;, ajjatagge, (read, (Page, Vin.I,18;, dyā,a°, Ajja,&, to-day,now, "food" ) ;
Data: Pali: gūhanā
• (ES) 253)
Data: Pali: pacati
• (EL) 382)
• (EN) for, aer, roasting, read, roasted, be, or, at, tormented, in
• (FR) pare, D.I,52
• (IT) &, pacato, purgatory
• (TrAM) pāceti, ripe]
• (TrEL) to, daṇḍena
• (other) bake, Gr. pέssw, (+Caus. pācayato) , (q. v. ) . (Page, DA. I, 159, where, Caus.pacāpeti, intrs. ) :Niraye, pacitvā, Pass.paccati,(trs.and, tormenting, Gen.pacato, pīḷentassa) .-, fig.torment, cook,pέpwn, Pacati,[Ved.pacati, Idg.*peqǔō,Av.pac-;, paccato,by, ppr.pacanto, cook,boil,roast, fry, roast,Lith,kepū, (q.v. ) .-, (expld, Vin.IV,264;, Obulg.peka, pp. pakka, (q.v. ) .‹-›, N.S.II,225,PvA.10,14.-
Data: Twitter 1 (Greek-English)
Internet • (EL) στο, τη, αυτή, διαγωνισμό, λύση, ψήφισα • (EN) of, IT, ings • (ES) BUSINESS. • ( De, • (DE) Internet • (EL) στο, τη, αυτή, διαγωνισμό, λύση, ψήφισα • (EN) of, IT, ings • (ES) BUSINESS
• (TrAM) Μόλις • (other) EXCELLENCE. • (TrAM) Μόλις • (other) EXCELLENCE.
. Data: Twier. 2Data: Twier 2 (French-English)
. • (en) David, e, is, it, perish;, or, collective, Demain, counts?", that, of, dynamics• (EN) David, "e, is, it, perish;, or, collective, Demain, counts?", that, of, dynam- ics, all
. • ( Fi, 18• (FI) 18h
• (FR) par, Keynote • (other) #dhiha6, @dhiparis, science-publish Data: Twier. 3French-English• (FR) par, Keynote • (other) #dhiha6, @dhiparis, science-publish Data: Twier 3 (French-English)
• (EN) for, Food, waiting, the, in, ready, and, are • (ES) go • (FI) Edmonton • (FR) just, breuvages, fans • (TrEL) to • (other) #bilingualism. 2015• (EN) for, Food, waiting, the, in, ready, and, are • (ES) go • (FI) Edmonton • (FR) just, breuvages, fans • (TrEL) to • (other) #bilingualism, #FWWC2015
. Data: Twier. 4English-PolishData: Twier 4 (English-Polish)
ES) dad, adidas • (TrAM) my • (TrEL) omg • (other) żubrówka, strawberries Data: Twier 5. • , Transliterated Amharic-English• (EN) with, back, from, comes, crates, and, poland, two, of, jackets • (ES) dad, adidas • (TrAM) my • (TrEL) omg • (other) żubrówka, strawberries Data: Twier 5 (Transliterated Amharic-English)
• (EN) is, bread). • (EN) is, bread).
• (FR) our
• (IT) (coffee
• (PL) naw
• (TrAM) Buna, dabo
• (ZH) gemäßigten, Klimazone., Südhalbkugel, Nordsommer, gleichzeitig, vuodenaika, jyrkemmässä, vuodenaikoina., Pohjoisella, pallonpuoliskolla, kesäkuukausiksi, eteläisellä, mevsimdir., gerçekleşir., arasındadır.,
• (unknown) rest
Data: Latin script: English - French
• (HU) different.,
• (ZH) (rugueux),(otherwise,
• (unknown) rest
Data: Latin script: English - Transliterated Greek
• (EN) historically, respective,
• (LT) languages,
• (ZH) distinguishes, Nonetheless,
• (unknown) rest
Data: Latin script: Italian - German
• (DE) allgemeiner, angemahnte,
• (ES) delicatezza,
• (HU) bellissima,
• (IT) dell'aureola, consapevole, richiamano, anti-Turchi., ossessionava,
• (NL) Ingenieuren,
• (PT) approfonditamente,
• (ZH) custodisce, struggente:, rivoluzionaria, psicologia, consapevolezza, autodistruggersi, lunghissimo, Fachkräemangel, Deutschland, intuizioni, Entwarnung, Stierverbands, Wissenscha, MINT-Berufen, Annunciazione, dell'angelo:, Naturwissenschalern,
• (unknown) rest
Data: Mixed script: Greek - Russian
• (EL) ανεξάρτητου, οικογένειας,
• (RU) существования, богатейшая, литература., греческого, обязательным, образованного, присутствует, количество, заимствований, значительное, источником, технических, называемая, международная,
• (TH) латинским),
• (ZH) ινδοευρωπαϊκές, ινδοευρωπαϊκής, древнегреческий, международную, церковнославянский,
• (unknown) rest
Data: Mixed script: English - Greek
• (DA) definition:, understanding,
• (EN) affection, unconditional, suggesting,
• (FR) relationships, contemplation, appreciation, araction, araction. ", transcendence.,
• (HU) benevolence., self-benefit).,
• (IT) non-corporeal,
• (PT) contributes,
• (ZH) Corinthians, throughout, Christians, Symposium, existence;, philosophers,
• (unknown) rest
Data: Mixed script: English - Spanish - Arabic
• (ES) sociedades, organizaciones, sentimiento, político-social,
• (FR) remembrance, remembrance, statement.,
• (ID) displaying,
• (PT) representando,
• (unknown) rest
Data: Mixed script: English - Chinese
• (EN) traditional, motivational, pronounced, tradition"
• (FR) characters,
• (ZH) simplified, frequently, "opportunity"., criticized, auspicious, inauspicious,
• (unknown) rest
Data: Mixed script: Ukrainian - Russian
• (RU) державності, Словакией, Молдавией.,
• (TH) вважається,
• (ZH) відправним, української, сухопутную, Белоруссией,
• (unknown) rest
Data: Pali: abbha
• (DA) storm-cloud, thundering,
• (HU) marajo</b>, nīla-megha, valāhaka-sikhara,
• (ZH) <at> a)fro\\</at>, <at> o)/mbros </at>, <smallcaps> ii. </smallcaps>, mahikā</b>, <b> Rāhu </b>, <smallcaps> i. </smallcaps>, thunder-cloud);, <smallcaps> vi.
</smallcaps>, acchādesi);, <smallcaps> iv. </smallcaps>, <superscript> 9 </superscript>, <b> abbhā </b>, <superscript> s. </superscript>, valāhaka);, <b> abbhāmaa </b>, /><b> -kūṭa </b>, <superscript> 1 </superscript>, <b> -ghana </b>, <b> -paṭala </b>, <b> -mua </b>, abbhāmua, <b> -saŋvilāpa </b>
• (unknown) rest
Data: Pali: abhijjhitar
• (ZH) abhijjhita, <smallcaps> i. </smallcaps>, abhijjhātar, <smallcaps> v. </smallcaps>,
• (unknown) rest
Data: Pali: ajja
• (ZH) diva) ,thus, to-day,now, Sn.75,153,158,970,998;, Kern,Toev., ajja-tagge,see, onward,henceforth,
• (unknown) rest
Data: Pali: gūhanā
• (ZH) Gūhanā, (f.), [abstr.fr.gūhati]hanā, Pug.19.Cp.pari°. (Page,
• (unknown) rest
Data: Pali: pacati
• (ZH) fig.torment, Pacati,[Ved.pacati,Idg.*peqǔō,Av.pac-;, Obulg.peka, fry,roast,Lith,kepū, bake,Gr.pέssw, cook,pέpwn, cook,boil,roast, Vin. IV,264;, intrs. ) :Niraye, N.S.II,225,PvA.10,14.-, ppr.pacanto, tormenting, Gen. pacato,(+Caus. pācayato), DA. I, 159, where, paccato, by, pīḷentassa) .-, (q.v. ) .‹-›, Caus.pacāpeti, Pass.paccati, (q.v. ) . (Page,
• (unknown) rest
Data: Twitter 1 (Greek-English)
• (ZH) διαγωνισμό, EXCELLENCE.,
• (unknown) rest
Data: Twitter 2 (French-English)
• (IT) collective,
• (ZH) science-publish,
• (unknown) rest
Data: Twitter 3 (French-English)
• (ZH) #bilingualism,
• (unknown) rest
• "ein , Annunciazione, Baista, Cenacolo, Certo, Come, Dabei, Deutsche, Deutschland, Entwarnung, Ergebnis, Giuda, Ingenieuren, Ist, Jahren, Kaum, Leonardo, MINT-Berufen, Mythos?, Naturwissenschalern, Stierverbands, Stimmen, Studie, Studie, Szenario, ema, Umfrage, Venezia, Warnung, Wissenscha, Woche, Zeit, acque, ali, alla, alle, allgemeiner, also, amore, anche, angemahnte, anni, anti-Turchi., approfonditamente, arginato., aento, auch, autodistruggersi, bacini, barbaglio, bei, bellissima, cancro, caurata, che, chiave, con, condanna, consapevole, consapevolezza, cosa, cura, custodisce, das, dass, definire, del, delicatezza, delle, dem, den, der, des, die, difesa, drohe., drohenden, eher, ein, eine, faceva, geben., gibt., idee, intuizioni, kam, keine, letzter, lunghissimo, mehr", mehren, modo., moto, movimento, nelle, neue, nicht, non, nur, occhio, oder, ossessionava, ovvero, peccato), per, periture, poi, privato, psicologia, punte, qualche, quel, quello, recente, restauro, riccioli, ricerche, richiamano, rivoluzionaria, seit, sich, sogno, solo, solo, sono, stessa, struggente:, subito, sui, und, vada, vergeht, viene, vinciano, vita, volare?, vom, zum
• all'insù, dell'angelo:, dell'aureola, l'esempio, Milano
• Fachkräemangel, affrescò, cominciò, ür, jüngst, perché, più, studierà
Second run
• a, e, i
• E
• So
• (il, ad, da, di, es, ha, il, in, la, le, lo, ma, ne, se, si, un, va, zu
• Annunciazione, Baista, Cenacolo, Certo, Come, Dabei, Deutsche, Deutschland, Entwarnung, Ergebnis, Giuda, Ingenieuren, Ist, Jahren, Kaum, Leonardo, MINT-Berufen, Mythos?, Naturwissenschalern, Stierverbands, Stimmen, Studie, Studie, Szenario, ema, Umfrage, Venezia, Warnung, Wissenscha, Woche, Zeit
• "ein, acque, ali, alla, alle, allgemeiner, also, amore, anche, angemahnte, anni, anti-Turchi., approfonditamente, arginato., aento, auch, autodistruggersi, bacini, barbaglio, bei, bellissima, cancro, caurata, che, chiave, con, condanna, consapevole, consapevolezza, cosa, cura, custodisce, das, dass, definire, del, delicatezza, delle, dem, den, der, des, die, difesa, drohe., drohenden, eher, ein, eine, faceva, geben., gibt., idee, intuizioni, kam, keine, letzter, lunghissimo, mehr", mehren, modo., moto, movimento, nelle, neue, nicht, non, nur, occhio, oder, ossessionava, ovvero, peccato), per, periture, poi, privato, psicologia, punte, qualche, quel, quello, recente, restauro, riccioli, ricerche, richiamano, rivoluzionaria, seit, sich, sogno, solo, solo, sono, stessa, struggente:, subito, sui, und, vada, vergeht, viene, vinciano, vita, volare?, vom, zum
• all'insù, dell'angelo:, dell'aureola, l'esempio, Milano
• Fachkräemangel
• affrescò, cominciò, jüngst, perché, studierà
• ür
• più
Data: Mixed script: Greek - Russian
First run
• 15ο, -, Η
• το, В, На, а, в, и, на, с
• (наряду, (так, γλωσσών., γλώσσα, γλώσσες., δεσμό., π.Χ., σήμερα., заимствований, латинским), лексика)., литература., слов., человека., язык.
• Ανήκει, Αποτελεί, Στην, έχουμε, αιώνα, ανεξάρτητου, από, βαλκανικό, γλωσσικό, γλώσσα, γραπτά, είναι, ελληνική, ενός, επίσης, ινδοευρωπαϊκές, ινδοευρωπαϊκής, κείμενα, κλάδου, μέλος, μέχρι, μία, μοναδικό, οικογένειας, στον, της, τις, τον, Римской, богатейшая, большое, была, время, всех, всякого, греческие, греческих, греческого, греческом, двумя, для, древнегреческий, его, знание, значительное, империи, источником, количество, латинских, латинском, лексику, международная, международную, называемая, научных, новое, новых, образованного, обязательным, основном, присутствует, проникали, путями, романских, русский, слова, создана, создания, стал, существования, считалось, терминов, технических, церковнославянский, через, этапах, язык, языка, языке
Second run
• а, в, и, с
• В
• το, На, на
• (наряду, (так
• γλωσσών., γλώσσα, γλώσσες., δεσμό., π.Χ., σήμερα., заимствований, латинским), лексика)., литература., слов., человека., язык.
• Ανήκει, Αποτελεί, Στην
• богатейшая, греческие, греческих, греческого, греческом, древнегреческий, значительное, источником, количество, латинских, латинском, международная, международную, называемая, образованного, обязательным, основном, присутствует, проникали, романских, создания, существования, считалось, терминов, технических, церковнославянский
• Римской, большое, была, время, всех, всякого, двумя, для, его, знание, империи, лексику, научных, новое, новых, путями, русский, слова, создана, стал, через, этапах, язык, языка, языке
Data: Mixed script: English - Greek
First run
• "intimate, "without, Although, Aquinas, Christians, Corinthians, Socrates, Symposium, Testament, Whether, affection, ancient, another. ", appreciation, aspires, araction, araction. ", becomes, benevolence., biblical, brotherly, chapter, ", charity;, children, children., contemplation, content, continues, contributes, definition:, described, existence;, explained, express, feeling, feelings, finding, further, holding, initially, inspired, knowledge, marriage., necessary, non-corporeal, passage, passion. ", philosophers, physical, platonic, refined, relationships, returned, self-benefit)., sensually, spiritual, subject, suggesting, through, throughout, transcendence., unconditional, understanding, without, youthful
• (ἀγάπη, (ἔρως, Agápe, agápē), Éros, érōs), -
• "Form", "erotas", "love, "love, "love:, (even, Agape, Greek, Lovers, Modern, Plato, is, omas, also, apply, argue, based, beauty, beauty, being, dating, denote, desire, does, eros, eros., erotic, even, famous, feast., feel, felt, given, good, helps, hence, high, humans, ideal, itself., just, known, leads, like, love, love, love. ", mean, means, most, mostly, one's, part, person, person, plane, recall, refer, regard., seek, sexual, soul, spouse, talk, texts, that, there, thus, truth, truth, type, used, well, will, will, with, within, word, work
• "to, 1, 13, God, God. ", In, It, New, e, a, all, an, and, any, are, as, be, by, can, esp., for, has, his, in, is, is, it, its, man, not, not, of, on, one, or, own, the, to, us, use, was
Second run
• affection, ancient, another. ", aspires, becomes, biblical, chapter, ", charity;, children, children., content, definition:, feeling, feelings, finding, holding, marriage., necessary, passage, passion. ", platonic, refined, returned, subject, through, without
• Although, Aquinas, Christians, Corinthians, Socrates, Symposium, Testament, Whether
• "intimate, appreciation, araction, araction. ", benevolence., brotherly, contemplation, continues, contributes, described, existence;, explained, express, further, initially, inspired, knowledge, non-corporeal, philosophers, physical, relationships, self-benefit)., sensually, spiritual, suggesting, throughout, transcendence., unconditional, understanding, youthful
• Agápe, agápē), Éros, érōs)
• (ἀγάπη, (ἔρως
• -
• "erotas", beauty, beauty, dating, denote, desire, erotic, famous, humans, itself., mostly, person, person, recall, regard., sexual, spouse, within
• "Form", Agape, Greek, Lovers, Modern, Plato, is, omas, based, being, feast., hence, ideal, leads, means, plane, refer, there
• apply, felt, helps, high, just, known, most, part, talk, texts, that, thus, truth, truth, type, well, will, will, with, word, work
• "love, "love, "love:, (even, also, argue, does, eros, eros., even, feel, given, good, like, love, love, love. ", mean, one's, seek, soul, used
• 1, 13, In, It
• "to, a, an, as, be, by, in, is, is, it, of, on, or, to, us
• God, God. ", New, e, all, and, any, esp., its, own, the
• are, can, for, has, his, man, not, not, one, use, was
Data: Mixed script: English - Spanish - Arabic
First run
• El, POW/MIA, Wearing, a, as, been, black, de, displaying, duelo., en, es, estados, for, has, is, lazo, mourning, mourning., negro, o, of, or, organizaciones, personas, political, por, remembrance, remembrance, representando, ribbon, sentimiento, sociedades, statement., symbol, tragedies, un, used, utilizado, y
• crespón, político-social, señal, símbolo
• رسالتها، دون، تنقل، بنظرة، المعروف، المصريين، العلامة، العلامات، الرمز، الرسم، الذي، الحاجة، استخدم، ٔان، ٔاكثر، ا، ٔاستخدمو ينبغي يعني، يعبر، ولكن، وعموما، اغريق، ٔ ال و احدة، و و، هم، من، معين، لاية، كلمات، قدماء، ٔان، ف عن، شيء،
Second run
• a, o, y
• El, as, de, en, es, is, of, or, un
• Wearing, been, black, displaying, duelo., estados, for, has, lazo, mourning, mourning., negro, organizaciones, personas, political, por, remembrance, remembrance, representando, ribbon, sentimiento, sociedades, statement., symbol, tragedies, used, utilizado
• وعموما اغريق، ٔ ال و رسالتها، المعروف، المصريين، العلامة، العلامات، الحاجة، استخدم، ا، ٔاستخدمو
Mixed script: English -Chinese First run • "crisis. • ينبغي يعني، يعبر، ولكن، احدة، و معين، لاية، كلمات، قدماء، ٔان، ف شيء، دون، تنقل، بنظرة، الرمز، الرسم، الذي، ٔاكثر، • هم من، عن، ٔان، • و Data, danger", "opportunity"., (simplified, Chinese, Chinese:, Western, auspicious, because, believed, besides, certain, characters, component, composed, criticized, frequently, inauspicious, invoked, linguists, meanings, meanings., motivational, number, numbers, pinyin:, positive, pronounced, represent, similar, sounds, speaking, tradition, traditional, wēijī• ينبغي يعني، يعبر، ولكن، احدة، و معين، لاية، كلمات، قدماء، ٔان، ف شيء، دون، تنقل، بنظرة، الرمز، الرسم، الذي، ٔاكثر، • هم من، عن، ٔان، • و Data: Mixed script: English -Chinese First run • "crisis", "danger", "opportunity"., (simplified, Chinese, Chinese:, Western, aus- picious, because, believed, besides, certain, characters, component, composed, criticized, frequently, inauspicious, invoked, linguists, meanings, meanings., mo- tivational, number, numbers, pinyin:, positive, pronounced, represent, similar, sounds, speaking, tradition, traditional, wēijī)
. • (不利) , ( 吉利), 危机; 危機;, 机; , 機 , • (不利), (吉利), 危机;, 危機;, 机;, 機)
. • In, Some, in, is, jī, name, names, of, on, or, other, some, sound, that, the, their, this, to, to., two, usage, word, words Second run • Chinese, Chinese: • Western • "crisis", "danger", "opportunity"., (simplified, auspicious, because, believed, besides, certain, characters, component, composed, criticized, frequently, inauspicious, invoked, linguists, meanings, meanings., motivational, number, numbers, pinyin:, positive, pronounced, represent, similar, sounds, speaking, tradition, traditional, wēijī• In, Some, e, and, are, based, be, by, can, for, has, have, in, is, jī, name, names, of, on, or, other, some, sound, that, the, their, this, to, to., two, usage, word, words Second run • Chinese, Chinese: • Western • "crisis", "danger", "opportunity"., (simplified, auspicious, because, believed, be- sides, certain, characters, component, composed, criticized, frequently, inaus- picious, invoked, linguists, meanings, meanings., motivational, number, num- bers, pinyin:, positive, pronounced, represent, similar, sounds, speaking, tradi- tion, traditional, wēijī)
• (不利), (吉利). • (不利), (吉利)
. • 危机;, 危機;, • 危机;, 危機;
. • 机; , 機 , • 机;, 機)
. • Some, based, can, for, has, have, name, names, other, some, sound, that, the, their, this, two, usage, word, words • In, be, by, in, is, of, on, or, to, to. • jī Data: Mixed script: Ukrainian -Russian• Some, e, and, are, based, can, for, has, have, name, names, other, some, sound, that, the, their, this, two, usage, word, words • In, be, by, in, is, of, on, or, to, to. • jī Data: Mixed script: Ukrainian -Russian
. • Белоруссией, Молдавией Венгрией, Польшей, Россией, Словакией, • Белоруссией, Венгрией, Молдавией., Польшей, Россией, Словакией, мо- рей., народів, сарматів, скіфів, століття.
. • Азовского, Віддавна, Київська, Румынией, України, Чёрного, Имеет, На, Русь, Second, • Азовского, Віддавна, Київська, Румынией, України, , Чёрного, вважаєть- ся, відправним, , границу, держави, державності, , культури, омывается, пунктом, сухопутную, території, української, існували, • Имеет, На, Русь, але, водами, готів, и, й, на, с, та, юге, інших Second run • 9-13
Pali: abbha First run • (also, (cp., (dense, (megho, (used, (°sama, 1, 1, 101, 1064;, 12)., 134., 239., 249, 250;, 251)., 273, 289., 295, 3, 317, 348, 348)., 382)., 487, 53, 581, 617, 64;, 687, <at> a)fro\\s </at>, <at> o)/mbros </at>, <smallcaps> i. </smallcaps>, <smallcaps> ii. </smallcaps>, <smallcaps> iv. </smallcaps>, <smallcaps> vi. </smallcaps>, <superscript> 1 </superscript>, <superscript> 9 </superscript>, <superscript> s. </superscript>. • Белоруссией, Молдавией Венгрией, Польшей, Россией, Словакией, Азовского, Віддавна, Київська, Румынией, України, Чёрного, На, Русь Имеет, ; A Data, A ) As, Dh, Dhs, Dhsa, Gr, Idg, J It, Lat Miln, ) Miln, Oir, Pv, S Pva, Sk Sn, Sna, Vin, Vv, Vva, Vedic, a, abhra, adj., also, ambha, ambu, as, at, by, cloud, cloud, cloud\";, clouds, clouds, cloudy, cp., dark), expl, f., free, from, froth, imber, in, is, later, like, list, m., marajo</b>, mass, moon-, mountain, nt., obscure, of, or, pabbata, perhaps, point, rain, rain;, rajo, referred, scum, sense, storm-cloud, summit, sunshine, that, the, thick, things, thunder-cloud);, thundering, to, viz., water, water• Белоруссией, Венгрией, Молдавией., Польшей, Россией, Словакией, • Азовского, Віддавна, Київська, Румынией, України, Чёрного, границу, держави, культури, пунктом, існували • вважається, відправним, державності, омывается, сухопутную, території, української • и, й, с • На, на, та • але, водами, готів, юге, інших • Имеет, Русь Data: Pali: abbha First run • (also, (cp., (dense, (megho, (used, (°sama, 1, 1, 101, 1064;, 12)., 134., 239., 249, 250;, 251)., 273, 289., 295, 3, 317, 348, 348)., 382)., 487, 53, 581, 617, 64;, 687, <at> a)fro\\s </at>, <at> o)/mbros </at>, <smallcaps> i. </smallcaps>, <smallcaps> ii. </smallcaps>, <smallcaps> iv. </smallcaps>, <smallcaps> vi. </smallcaps>, <superscript> 1 </superscript>, <superscript> 9 </superscript>, <superscript> s. </superscript>, A, A), As, Dh, Dhs, DhsA, Gr., Idg., It, J, Lat., Miln, Miln), Oir, Pv, PvA, S, Sk., Sn, SnA, , is, Vin, Vv, VvA, [Vedic, a, abhra, adj., also, ambha, ambu, as, at, by, cloud, cloud, cloud\";, clouds, clouds, cloudy, cp., dark), expl, f., free, from, froth, imber, in, is, later, like, list, m., marajo</b>, mass, moon-, mountain, nt., obscure, of, or, pabbata, perhaps, point, rain, rain;, rajo, referred, scum, sense, storm-cloud, summit, sunshine, that, the, thick, things, thunder-cloud);, thundering, to, viz., water, water].
<b>-ghana</b>, <b>-mua</b>, <br, =, \"dark, \"dull\. • & , • &, (=, <b>-ghana</b>, <b>-mua</b>, <br, =, \"dark, \"dull\";
<b> -saŋvilāpa </b>, <b> Rāhu </b>, <b> abbhā </b>, <b> abbhāmaa </b>, abbhāmua, acchādesi);, mahikā </b>, nīla-megha, valāhaka). • (abbhaŋ, (mahiyā, (nīl°• (abbhaŋ, (mahiyā, (nīl°, <b> -saŋvilāpa </b>, <b> Rāhu </b>, <b> abbhā </b>, <b> abbhāmaa </b>, abbhāmua, acchādesi);, mahikā </b>, nīla-megha, valā- haka);, valāhaka-sikhara
<b>-paṭala</b>, <b>abbhaŋ, <b>dhū-, (nt.) Second run • (cp. • *m̊bhrocite /><b>-Kūṭa</B> ; Dhs, Dhsa, Idg, Lat, Miln, ) Miln, Oir, Pva, Sna, Vin, Vva, Vedic, as, at, by, cp., in, is, nt., of, or, to• *m̊bhrocite /><b>-kūṭa</b>, <b>-paṭala</b>, <b>abbhaŋ, <b>dhū-, (nt.) Second run • (cp., Dhs, DhsA, Idg., Lat., Miln, Miln), Oir, PvA, SnA, is, Vin, VvA, [Vedic, as, at, by, cp., in, is, nt., of, or, to
(megho, (used, (°sama, <at> a)fro\\s </at>, <at> o)/mbros </at>, <smallcaps> ii. </smallcaps>, <smallcaps> iv. </smallcaps>. • (also, (dense. <smallcaps> vi• (also, (dense, (megho, (used, (°sama, <at> a)fro\\s </at>, <at> o)/mbros </at>, <smallcaps> ii. </smallcaps>, <smallcaps> iv. </smallcaps>, <smallcaps> vi.
cloud\";, clouds, clouds, cloudy, dark), expl, free, from, froth, imber, later, like, list, marajo </b>, mass, moon-, mountain, obscure, pabbata, perhaps, point, rain, rain;, rajo, referred, scum, sense, storm-cloud. summit, sunshine, that, the, thick, things, thundercloud);, thundering, viz., water, water</smallcaps>, abhra, adj., also, ambha, ambu, cloud, cloud, cloud\";, clouds, clouds, cloudy, dark), expl, free, from, froth, imber, later, like, list, marajo </b>, mass, moon-, mountain, obscure, pabbata, perhaps, point, rain, rain;, rajo, referred, scum, sense, storm-cloud, summit, sunshine, that, the, thick, things, thunder- cloud);, thundering, viz., water, water].
295, 3, 317, 348, 348)., 382)., 487, 53, 581, 617, 64;, 687, <superscript> 1 </superscript>, <superscript> 9 </superscript> • <smallcaps> i. </smallcaps>, <superscript> s. </superscript>, A, A). Gr Dh, J It, S Pv, Sk, 273289As• 1, 1, 101, 1064;, 12)., 134., 239., 249, 250;, 251. Sn, , Vv, a, f., m• 1, 1, 101, 1064;, 12)., 134., 239., 249, 250;, 251)., 273, 289., 295, 3, 317, 348, 348)., 382)., 487, 53, 581, 617, 64;, 687, <superscript> 1 </superscript>, <superscript> 9 </superscript> • <smallcaps> i. </smallcaps>, <superscript> s. </superscript>, A, A), As, Dh, Gr., It, J, Pv, S, Sk., Sn, , Vv, a, f., m.
• <b> -ghana </b>, <b> -mua </b>, <br, \"dark, \"dull\. • <b> -ghana </b>, <b> -mua </b>, <br, \"dark, \"dull\";
. • & , (= , = , • &, (=, =
<b> Rāhu </b>, <b> abbhā </b>, nīla-megha • <b> -saŋvilāpa </b>, <b> abbhāmaa </b>, abbhāmua, acchādesi);, mahikā </b>, valāhaka). • (abbhaŋ, (mahiyā, (nīl°• (abbhaŋ, (mahiyā, (nīl°, <b> Rāhu </b>, <b> abbhā </b>, nīla-megha • <b> -saŋvilāpa </b>, <b> abbhāmaa </b>, abbhāmua, acchādesi);, mahikā </b>, valāhaka);, valāhaka-sikhara
/><b> -kūṭa </b>, <b> -paṭala </b>, <b> abbhaŋ, <b> dhū-• (nt.) Data: Pali: abhijjhitar First run • abhijjhita, abhijjhātar, covets, function. • *m̊bhro, med., one, who,°itar),°itar,°ātar• *m̊bhro, /><b> -kūṭa </b>, <b> -paṭala </b>, <b> abbhaŋ, <b> dhū- • (nt.) Data: Pali: abhijjhitar First run • abhijjhita, abhijjhātar, covets, function], med., one, who,°itar),°itar,°ātar).
. • , T <smallcaps> I. </Smallcaps>, = <smallcaps> V. </Smallcaps>, A , M , 265287• (T., <smallcaps> i. </smallcaps>, <smallcaps> v. </smallcaps>, =, A, M, ag., fr., in, l., v. • 265, 287
Second run • abhijjhita, abhijjhātar, covets, function], med., one, who,°itar),°itar,°ātar). • [n. Second run • abhijjhita, abhijjhātar, covets, function], med., one, who,°itar),°itar,°ātar).
. • , T , A , • (T., A, M
. • =, <smallcaps> I. </Smallcaps>, <smallcaps> V. </Smallcaps>, 265287• =, l., v. • <smallcaps> i. </smallcaps>, <smallcaps> v. </smallcaps>, ag., fr., in • 265, 287
• [n. Data ; Freq, Loc, Pali: ajja First run • -divasa. Vedic, adya, ajjatagge, ajjato, an, and, as, base, being, day, demonstr., dyaus, from, in, morning, not, of, old, or, phrase, present, pron., the, this, with• [n. Data: Pali: ajja First run • -divasa, Freq., Loc., [Vedic, adya, ajjatagge, ajjato, an, and, as, base, being, day, demonstr., dyaus, from, in, morning, not, of, old, or, phrase, present, pron., the, this, with
. • & , + Mhvs, • &, +, Mhvs, s., v.
• -Kālaṁ ; 15,64 32,23, D Ajjā, Da I,235 I,85;, Dh, 326;, J Iii,425, J 279;, Kern,toev Vi,180;, Pv.I,117, PvA.59) ;, PvA.6,23;, Sn.75,153,. • -kālaṁ, 10), 15,64., 32,23., Ajjā, D.I,85;, DA.I,235., Dh.326;, III,425, J.I, 279;, J.VI,180;, Kern,Toev., Pv.I,117, PvA.59) ;, PvA.6,23;, Sn.75,153,
agge(?), ajja-tagge,see, ajjā;, bahutaṁ, day"], diva) ,thus, dyā, dyā,a°, idāni, onward,henceforth, to-day,now. ; , " On, ‹-› , Ajja,& , 158,970,998;, Vin.I,18;, a3), adyā,a, agga3). =, (Page, (adv.), (readsee Second run • an, as, in, of, or • Freq., Loc., [Vedic • -divasa, adya, ajjatagge, ajjato, and, base, being, day, demonstr., dyaus, from, morning, not, old, phrase, present, pron., the, this, with158,970,998;, Vin.I,18;, a3), adyā,a, agga3), agge(?), ajja-tagge,see, ajjā;, bahutaṁ, day"], diva) ,thus, dyā, dyā,a°, idāni, onward,henceforth, to-day,now, "food" ) ;, "on, ‹-›, Ajja,&, (=, (Page, (adv.), (read, (see Second run • an, as, in, of, or • Freq., Loc., [Vedic • -divasa, adya, ajjatagge, ajjato, and, base, being, day, demonstr., dyaus, from, morning, not, old, phrase, present, pron., the, this, with
. ‹-› •"on, Ajja,& , =, (Page, (adv.), (read, (see • -kālaṁ, 10), 15,64., 32,23., Ajjā, D.I,85;, DA.I,235., Dh.326;, III,425, J.I,279;, J.VI,180;, Kern,Toev., Pv.I,117, PvA.6,23;, Sn.75,153,158,•"on, ‹-›, Ajja,&, (=, (Page, (adv.), (read, (see • -kālaṁ, 10), 15,64., 32,23., Ajjā, D.I,85;, DA.I,235., Dh.326;, III,425, J.I,279;, J.VI,180;, Kern,Toev., Pv.I,117, PvA.6,23;, Sn.75,153,158,
ajja-tagge,see, ajjā;, bahutaṁ, day" ], diva) ,thus, dyā, idāni, onward,henceforth, to-day,now • PvA.59) ;, adyā,a, agge(?), dyā,a°. Data: Pali: gūhanā First run • 253). 970,998;, Vin.I,18;, a3), agga3). Pug.19.Cp.pari°. (Page, [abstr.fr.gūhati]=gūhanā, Gūhanā, (f.) , (q.v.) Second run • 253), Pug.19.Cp.pari°. (Page, [abstr.fr.gūhati]=gūhanā, Gūhanā, (f.) , (q.v.970,998;, Vin.I,18;, a3), agga3), ajja-tagge,see, ajjā;, bahutaṁ, day" ], diva) ,thus, dyā, idāni, onward,henceforth, to-day,now • PvA.59) ;, adyā,a, agge(?), dyā,a°, "food" ) ; Data: Pali: gūhanā First run • 253), Pug.19.Cp.pari°. (Page, [abstr.fr.gūhati]=gūhanā, Gūhanā, (f.) , (q.v.) Second run • 253), Pug.19.Cp.pari°. (Page, [abstr.fr.gūhati]=gūhanā, Gūhanā, (f.) , (q.v.)
Data: Pali: pacati First run • 382), Caus.pacāpeti, DA.I,159,where, Obulg.peka, Pass.paccati, Vin. IV,264;, bake,Gr.pέssw, cook,boil,roast, cook,pέpwn, daṇḍena, fig. torment, fry,roast,Lith,kepū, intrs. ) :Niraye, paccato,by, ppr.pacanto, pp.pakka, pīḷentassa) .-, tormenting,Gen.pacato. Data: Pali: pacati First run • 382), Caus.pacāpeti, DA.I,159,where, Obulg.peka, Pass.paccati, Vin. IV,264;, bake,Gr.pέssw, cook,boil,roast, cook,pέpwn, daṇḍena, fig. torment, fry,roast,Lith,kepū, intrs. ) :Niraye, paccato,by, ppr.pacanto, pp.pakka, pīḷentassa) .-, tormenting,Gen.pacato
. • D.i,52, - N.s.ii,225,pva.10,14., Pacati, , Ved.pacati,Idg.*peqǔō, Av.pac-;, (+Caus.pācayato), (expld, (q.v. ) .-, (q.v. ) .‹-›, (q. v. ) . (Pagetrs.and • aer, at, be, for, in, or, pacato, pare, purgatory, read, ripe. roasted, roasting, to, tormented• D.I,52, N.S.II,225,PvA.10,14.-, Pacati,[Ved.pacati,Idg.*peqǔō, Av.pac-;, (+Caus.pācayato), (expld, (q.v. ) .-, (q.v. ) .‹-›, (q. v. ) . (Page, (trs.and • aer, at, be, for, in, or, pacato, pare, purgatory, read, ripe], roasted, roasting, to, tormented
. • & Second Run • Caus.pacāpeti, Da.i,159,where, Obulg.peka, Pass.paccati, Vin.iv,264;, Roast,lith,kepū, intrs. ) :Niraye, paccato,by, ppr.pacanto, pīḷentassa) .-, tormenting,Gen.pacato• &, pacitvā, pāceti Second run • Caus.pacāpeti, DA.I,159,where, Obulg.peka, Pass.paccati, Vin.IV,264; , bake,Gr.pέssw, cook,boil,roast, cook,pέpwn, daṇḍena, fig.torment, fry, roast,Lith,kepū, intrs. ) :Niraye, paccato,by, ppr.pacanto, pīḷentassa) .-, tormenting,Gen.pacato
• 382), pp.pakka • D.I,52, N.S.II,225,PvA.10,14.-, (q.v. ) .. • 382), pp.pakka • D.I,52, N.S.II,225,PvA.10,14.-, (q.v. ) .-
. • Pacati,, Ved.pacati,Idg.*peqǔō,Av.pac-;,(+Caus.pācayato),(expld, (q.v. ) .‹-›, (q.v. ) . (Pagetrs.and • for, pacato, pare, read, ripe• Pacati,[Ved.pacati,Idg.*peqǔō,Av.pac-;,(+Caus.pācayato),(expld, (q.v. ) .‹-›, (q.v. ) . (Page, (trs.and • for, pacato, pare, read, ripe]
• aer, purgatory, roasted, roasting. • aer, purgatory, roasted, roasting, tormented
. • Μόλις, Excellence Business, It, Internet, • Second Run • Μόλις, • It, Internet, Business, Excellence, Data: Twier. 2First run • "e, 18h, @dhiparis, David, Demain, Keynote, all, collective, counts?", dynamics, par, perish;, science-publish, that • is, it, of, or Second run • "e, @dhiparis, David, Demain, Keynote, all, collective, counts?", dynamics, par, perish;, science-publish. that• αυτή, διαγωνισμό, λύση, στο, τη, ψήφισα, Μόλις • BUSINESS, EXCELLENCE., IT, Internet, ings, of Second run • Μόλις • αυτή, διαγωνισμό, λύση, στο, τη, ψήφισα • IT, of • Internet, ings, • BUSINESS, EXCELLENCE. Data: Twier 2 (French-English) First run • "e, 18h, @dhiparis, David, Demain, Keynote, all, collective, counts?", dynam- ics, par, perish;, science-publish, that • is, it, of, or Second run • "e, @dhiparis, David, Demain, Keynote, all, collective, counts?", dynamics, par, perish;, science-publish, that
Food • to • go, in • for, just • and, are, breuvages, fans, ready, the, waiting Data: Twier 4 (English-Polish) First run • żubrówka, my • adidas, and, back, comes, crates, dad, from, jackets, of, omg, poland, strawberries, two, with Second run • żubrówka. • Edmonton, Second Run • Edmonton, my • adidas, comes, dad, of • and, back, crates, from, jackets, omg, poland, strawberries, two, with Data: Twier 5 (Transliterated Amharic-English• Edmonton, Food • go, in, to • and, are, breuvages, fans, for, just, ready, the, waiting Second run • Edmonton, Food • to • go, in • for, just • and, are, breuvages, fans, ready, the, waiting Data: Twier 4 (English-Polish) First run • żubrówka, my • adidas, and, back, comes, crates, dad, from, jackets, of, omg, poland, strawberries, two, with Second run • żubrówka, my • adidas, comes, dad, of • and, back, crates, from, jackets, omg, poland, strawberries, two, with Data: Twier 5 (Transliterated Amharic-English)
• (coffee, bread)., dabo, is, naw, our. • (coffee, bread)., dabo, is, naw, our
• (coffee, bread)., dabo, is. • (coffee, bread)., dabo, is, naw
staring, at, your, But, in, this, it, doesn't, refer, to, anyone, else's, buon, just, your, own., • -• "navel-gazing", navel"., case, belly Data: Latin script: German-Finnish-Turkish • die. German-English • e, German, Klimazone Nabelschau, Je, Pohjoisella, Data: Latin. Data: Latin script: German-English • e, German, word, Nabelschau, means, or, "staring, at, your, But, in, this, it, doesn't, refer, to, anyone, else's, buon, just, your, own., • - • "navel-gazing", navel"., case, belly Data: Latin script: German-Finnish-Turkish • die, in, und, Klimazone., Je, ob, auf, Südhalbkugel, vom, eli, on, vuodenaika, ja, on, vuodenajoista, koska, maapallo, on, silloin, kallistunut, aurinko, maan, pinnalle, kulmassa, muina, vuodenaikoina., Pohjoisella, pallonpuoliskolla, lasketaan, ta- vallisesti, ja, elokuu, eteläisellä, pallonpuoliskolla, joulu-, ja, helmikuu., en, sı- cak, en, yazda, Dünya, depo, en, sıcak, yaklaşık, ay, sonra, ortaya, Sıcak, Haziran, Eylül, ise, Aralık, arasındadır.
. • Der, - Nord, - Nord, Der, - Südwinter, Yaz, 22Küre'de, Küre'de, 21arasında, Küre'de, 22, 21, Mart • gemäßigten, gerade, gleichzeitig, kuin, Kuzey, uzun, günler, gerçekleşir., eiği, için, günler, genellikle, iki, günler, Kuzey, ile, ile• Der, ist, wärmste, der, vier, Jahreszeiten, der, arktischen, nachdem, er, der, Nord-, oder, herrscht, spricht, Nord-, oder, Der, findet, mit, Südwinter, sta., suvi, läm- pimin, niin, eä, säteilee, heinä-, Yaz, mevsimdir., Küre'de, Küre'de, 21, 22, ara- sında, Küre'de, 22, 21, Mart • gemäßigten, gerade, gleichzeitig, kuin, Kuzey, uzun, günler, gerçekleşir., eiği, için, günler, genellikle, iki, günler, Kuzey, ile, ile
Doux • while • "hard"., used)., • translate, as, meaning, very. • Sommer, Südsommer, Nordsommer, - Kesä, Yarım, Yarım, Yarım Güney, ; Data, Mou, Doux, Latin script: English-French • both, "so. different., "coarse", can, also, mean, almost,sucré, Data: Latin script: English-Transliterated Greek • at, least, ways, as, to, is, has, philía, and, storgē., as, has, historically, difficult, to, which, generally, as • e, language, distinguishes, different, the, Ancient, distinct, with, languages, it, been, separate, the, meanings, these, used, outside, their, respective, the, senses, in, these, used• Sommer, man, Südsommer., Nordsommer, dem, Kesä, kevään, syksyn, välissä., Kesä, jyrkemmässä, kesäkuukausiksi, kesä-., tammi-, Yarım, ısıyı, çıkar., Yarım, Güney, Yarım Data: Latin script: English-French • both, "so", in, English, although, their, is, is, the, opposite, of, "rough", or, is, the, opposite, of, sweet, only, for, wines, (otherwise, is • mou, :, mou, but • doux, • Doux, (rugueux), Doux • while • "hard"., used)., • translate, as, meaning, very, different., "coarse", can, also, mean, almost,sucré, Data: Latin script: English-Transliterated Greek • at, least, ways, as, to, is, has, philía, and, storgē., as, has, historically, difficult, to, which, generally, as • e, language, distinguishes, different, the, Ancient, distinct, with, languages, it, been, separate, the, meanings, these, used, outside, their, respective, the, senses, in, these, used
Fachkräe-mangel, Ingenieuren, ein • Milano, l'esempio, psicologia, (il, subito, autodistruggersi, solo, lunghissimo, So, il, movimento, moto, sui, si, bellissima, occhio, all'insù, sono, sogno. • Greek, However, Italian-German, Studie, Stierverbands, Wissenscha, Szenario, Come, Ist, Cenacolo, Dabei, Ergebnis, Umfrage, Warnung, Deutschland, Certo, Zeit, Stimmen, Entwarnung, Deutsche, Baista, , Venezia, • Mint-Berufen, Turchi, Nur Data, Nonetheless, words, follows. Data: Latin script. Mixed script: Greek-Russian • ελληνική. γλώσσα, είναι, μία, από, τις, ινδοευρωπαϊκές, γλώσσες., Αποτελεί, το, μοναδικό, μέλος, ενός, ανεξάρτητου, κλάδου, της, ινδοευρωπαϊκής, οικογένειας, γλωσσών., Ανήκει, επίσης, στον, βαλκανικό, γλωσσικό, δεσμό., Στην, ελληνική, γλώσσα, έχουμε, γραπτά, κείμενα, από, τον, 15ο, αιώνα, μέχρι, σήμερα• Greek, how, word, Greek, agápe, éros, However, other, when, were, are • four, love, used., four, words, for, love:, of, words, of, contexts., Nonetheless, words, follows. Data: Latin script: Italian-German • affrescò, privato, Studie, definire, periture,Stierverbands, Wissenscha,studierà, difesa, ovvero, Szenario, Naturwissenschalern • dell'aureola, da, del, di, der, zum, modo., dem, den, drohe., Come, vom • custodisce, quel, es, oder, per, le, idee, stessa, des, dass, delle, E, se, Ist, das, seit • più, Cenacolo, vinciano, rivoluzionaria, Giuda, condanna, con, peccato), comin- ciò, con, cancro, faceva, intuizioni, vita, va, Dabei, Ergebnis, in, i, riccioli, poi, più, bacini, in, Annunciazione, con, ali, la, cosa, barbaglio, anni, bei, • ne, struggente:, che, amore, e, non, viene, ma, consapevolezza, ad, che, ha, re- cente, Kaum, eine, Woche, vergeht, keine, neue, Umfrage, Warnung, ema, Fachkräemangel, Deutschland, Certo, ma, anche, consapevole, che, qualche, mehren, letzter, Zeit, Stimmen, Entwarnung, geben., kam, jüngst, eine, Deutsche, "ein, allgemeiner, Fachkräemangel, eher, mehr", anche, Baista, che, Leonardo, approfonditamente, a, Venezia, nelle, vada, alla, aento, alle, dell'angelo:, deli- catezza, punte, che, non, che, volare?, Jahren, angemahnte, drohenden, Fachkräe- mangel, Ingenieuren, ein • Milano, l'esempio, psicologia, (il, subito, autodistruggersi, solo, lunghissimo, So, il, movimento, moto, sui, si, bellissima, occhio, all'insù, sono, sogno, lo, ossessionava, quello, und, also, Mythos? • un, ür, MINT-Berufen • cura, restauro, arginato., gibt., perché, caurata, sich, auch, zu, nicht, richia- mano, acque, ricerche, chiave, anti-Turchi., nur Data: Mixed script: Greek-Russian • ελληνική, γλώσσα, είναι, μία, από, τις, ινδοευρωπαϊκές, γλώσσες., Αποτελεί, το, μοναδικό, μέλος, ενός, ανεξάρτητου, κλάδου, της, ινδοευρωπαϊκής, οικογένειας, γλωσσών., Ανήκει, επίσης, στον, βαλκανικό, γλωσσικό, δεσμό., Στην, ελληνική, γλώσσα, έχουμε, γραπτά, κείμενα, από, τον, 15ο, αιώνα, μέχρι, σήμερα.
. • На, а, в, греческом, новое, время, (наряду, новых, научных, терминов, называемая, международная, слова, в, основном, двумя, через• На, греческом, на, всех, его, существования, была, создана, богатейшая, греческого, обязательным, всякого, образованного, большое, заимствова- ний, а, в, греческом, новое, время, (наряду, новых, научных, терминов, на- зываемая, международная, слова, в, основном, двумя, через
Data: Mixed script: English-Greek • is, biblical, is, will, is, without, self-benefit). . Χ • Η, ) , is, feelings, feelings, it, be, feeling, being, high, is, by, his, is, by, will, mostly, sexual, "intimate, well, refined, his, definition:, is, initially, felt, with, it, beauty, within, beauty, itself., use, "without, helps, soul, beauty, spiritual, youthful, beauty, feel, suggesting, sensually, spiritual, finding, its, like, finding, all, seek• Η, π.Χ.,языке, этапах, литература., В, Римской, империи, знание, языка, считалось, для, человека., В, латинском, языке, присутствует, количество, греческих, -, значительное, количество, латинских, и, романских, слов., В, древнегреческий, язык, стал, с, латинским), источником, создания, и, тех- нических, (так, лексика)., В, русский, язык, греческие, проникали, путями, -, международную, лексику, и, церковнославянский, язык. Data: Mixed script: English-Greek • is, biblical, is, will, is, without, self-benefit)., is, feelings, feelings, it, be, feeling, being, high, is, by, his, is, by, will, mostly, sexual, "intimate, well, refined, his, definition:, is, initially, felt, with, it, beauty, within, beauty, itself., use, "with- out, helps, soul, beauty, spiritual, youthful, beauty, feel, suggesting, sensually, spiritual, finding, its, like, finding, all, seek
Corinthians, and, described, there, and, the, Testament, as, and, benevolence., Whether, the, returned, the, to, any, Agape, also, used, ancient, texts, to, denote, children, and, the, a, and, was, also, used, to, to, a, feast. • (ἀγάπη, Éros, Modern, Greek, God, English-Spanish-ArabicAgape, used, the, passage, as, the, chapter, own, Although, eros, for, person, contemplation, becomes, of, person, or, even, becomes, of, not, of, of, love, of, word, mean, In, Symposium, work, on, subject, eros, knowledge, of, of. It, can, also, described, as, the, regard., Agape, used, Christians, to, express, the, children., type, was, further, explained, omas, Aquinas, as, the, another. ", érōs), means, the, passion. ", "erotas", means, It, can, also, apply, to, dating, relationships, as, as, marriage., Plato, a, an, appreciation, the, that, appreciation, Plato, does, talk, physical, araction, as, a, necessary, part, hence, the, the, platonic, to, physical, araction. ", the, the, most, famous, ancient, the, Plato, has, Socrates, argue, that, the, recall, and, contributes, to, an, understanding, truth, the, ideal, that, leads, us, humans, to, desire, thus, that, that, based, aspires, to, the, plane, existence;, that, truth, just, any, truth, leads, to, transcendence., and, are, inspired, to, truth, the, means, eros. Data: Mixed script• (ἀγάπη, (ἔρως • Agápe, "love:, brotherly, love, love, of, God, for, of, for, in, known, "love, 1, 13, throughout, New, brotherly, love, affection, good, love, love, given, or, not, per- son, continues, love, (even, in, for, one's, for, spouse, refer, love, of, content, or, holding, one, in, unconditional, love, of, God, for, of, love, "to, good, of, Éros, "love, of, e, Modern, Greek, word, love. ", own, Although, eros, for, person, contemplation, becomes, of, person, or, even, becomes, of, not, of, of, love, of, word, mean, In, Symposium, work, on, subject, eros, knowledge, of, of, "Form", of, erotic, -, even, love, non-corporeal, of, is, Lovers, philosophers, through, of, • agápē), means, esp., charity;, the, man, and, man, God. ", Agape, used, the, pas- sage, as, the, chapter, ", Corinthians, and, described, there, and, the, Testament, as, and, benevolence., Whether, the, returned, the, to, any, Agape, also, used, ancient, texts, to, denote, children, and, the, a, and, was, also, used, to, to, a, feast., It, can, also, described, as, the, regard., Agape, used, Christians, to, express, the, children., type, was, further, explained, omas, Aquinas, as, the, another. ", érōs), means, the, passion. ", "erotas", means, It, can, also, apply, to, dating, re- lationships, as, as, marriage., Plato, a, an, appreciation, the, that, appreciation, Plato, does, talk, physical, araction, as, a, necessary, part, hence, the, the, pla- tonic, to, physical, araction. ", the, the, most, famous, ancient, the, Plato, has, Socrates, argue, that, the, recall, and, contributes, to, an, understanding, truth, the, ideal, that, leads, us, humans, to, desire, thus, that, that, based, aspires, to, the, plane, existence;, that, truth, just, any, truth, leads, to, transcendence., and, are, inspired, to, truth, the, means, eros. Data: Mixed script: English-Spanish-Arabic
. • دون، احدة، و بنظرة، رسالتها، تنقل، ٔان، ينبغي، العلامة، ٔان، ف وعموما، معين، شيء، عن، يعبر، الذي، الرسم، يعني، الرمز، استخدم، من، ٔاكثر، ولكن، العلامات، ا، ٔاستخدمو اغريق، ٔ ال و المصريين، قدماء، ٔان، المعروف، من، و، كلمات، لاية، الحاجة، هم العلامات، • Ribbon, ribbon, mourning, El, un, y, un, en • black, is, a, of, remembrance, or, Wearing, ordisplaying, a, black, has, been, used, for, remembrance, tragedies, or, as, a, political, statement., crespón, negro, o, lazo, negro, es, símbolo, utilizado, por, personas, estados, sociedades, organizaciones, representando, sentimiento, político-social, señal, de, duelo• دون، احدة، و بنظرة، رسالتها، تنقل، ٔان، ينبغي، العلامة، ٔان، ف وعموما، معين، شيء، عن، يعبر، الذي، الرسم، يعني، الرمز، استخدم، من، ٔاكثر، ولكن، العلامات، ا، ٔاستخدمو اغريق، ٔ ال و المصريين، قدماء، ٔان، المعروف، من، و، كلمات، لاية، الحاجة، هم العلامات، • ribbon, symbol, mourning., ribbon, mourning, El, un, y, un, en • black, is, a, of, remembrance, or, Wearing, or, displaying, a, black, has, been, used, for, remembrance, tragedies, or, as, a, political, statement., crespón, negro, o, lazo, negro, es, símbolo, utilizado, por, personas, estados, sociedades, organizaciones, representando, sentimiento, político-social, señal, de, duelo.
• A , Pow/Mia Data, Mixed script: English-Chinese • e, Chinese, (simplified, traditional. invoked, motivational, speaking, because, the, composed, characters, that, represent, linguists, have, criticized, this, usage, because, the, component, (simplified, Chinese:, traditional, Chinese:, has, other, besides, Chinese, certain, some, be, based, the, Chinese, that, the, e, numbers, believed, have, because, their, similar, words, that, have, positive• A, POW/MIA Data: Mixed script: English-Chinese • e, Chinese, (simplified, traditional, Chinese:, invoked, motivational, speaking, because, the, composed, characters, that, represent, linguists, have, criticized, this, usage, because, the, component, (simplified, Chinese:, traditional, Chinese:, has, other, besides, Chinese, certain, some, be, based, the, Chinese, that, the, e, numbers, believed, have, because, their, similar, words, that, have, positive
. • ( 不利, • (不利)
. • Western, meanings, In, are, number, name, and, are, meanings, names, meanings• Western, can, and, Some, meanings, In, are, number, name, and, are, meanings, names, meanings.
• 0. 69• 0, 6, 8, 9
(吉利) Data: Mixed script: Ukrainian-Russian • й. 危機; 危机;, ) Jī, 机 ; , 機 Русь, Морей Россией, Белоруссией, Польшей, Словакией, Венгрией, Молдавией Имеет, Чёрного, Азовского, України, Gr, Lat, Sk, На, водами • та, вважається, Київська, століття., омывается, с Data: Pali: abbha • (nt.), nt., Sk., \"dark, Idg. SnA, Sn, S• "crisis", is, auspicious, inauspicious, sounds, sound • for, pinyin:, frequently, in, word, of, two, "danger", "opportunity"., pronounced, tradition. на, держави, скіфів, сарматів, готів, народів, відправним, державності. water, Gr., water]., dark), at, SnA, S, at, It, Sn, (cp.• "crisis", is, auspicious, inauspicious, sounds, sound • for, pinyin:, frequently, in, word, of, two, "danger", "opportunity"., pronounced, tradition, by, or, on, word, to., to • 危机;, 危機;, wēijī), jī, 机;, 機), (吉利) Data: Mixed script: Ukrainian-Russian • й, Русь, морей., Россией, Белоруссией, Польшей, Словакией, Венгрией, Ру- мынией • але, 9-13, юге, Имеет, Молдавией. • існували, інших • Чёрного, Азовского, границу • культури • території, України, пунктом, української, и, сухопутную, и • Віддавна, на, держави, скіфів, сарматів, готів, народів, відправним, держав- ності, На, водами • та, вважається, Київська, століття., омывается, с Data: Pali: abbha • (nt.), nt., Sk., \"dark, Idg., cp., Gr., Lat., Sk., water, Gr., water]., dark), at, SnA, S, at, It, Sn, (cp., SnA, Sn, S
. • & , A , A ) , 3822511, 1064;, 249, 250;, 12)., 64;, 348• &, A, A), ., J, 251)., 1, 1064;, 249, 250;, 12)., 64;, 348)., 382).
</smallcaps>, =, list, is, <smallcaps> i. </smallcaps>, (°sama, <smallcaps> vi. </smallcaps>, (abbhaŋ, <smallcaps> iv. • , " Ii, • cloud\";, also, cloud, cloudy, <smallcaps> ii. </smallcaps>, =, list, is, <smallcaps> i. </smallcaps>, (°sama, <smallcaps> vi. </smallcaps>, (abbhaŋ, <smallcaps> iv.
VvA, acchādesi);, Pv, PvA, \"dull\";, valāhaka);, Vv, valāhakasikhara • <at>a)fro\\s</at>, froth, of, <superscript> 9 </superscript>, <superscript> s. </su-perscript>, <superscript> 1 </superscript> • later, scum, rain;, ambha, rain, a, Miln. </Smallcaps>, nīl°, As, Dhs, DhsA, (used, (=, clouds, cloud, (also, as • m., adj. • abhra, (mahiyānīla-megha, sense, expl, , Dh</smallcaps>, (nīl°, As, Dhs, DhsA, (used, (=, clouds, cloud, (also, as • m., adj. • abhra, (mahiyā, VvA, acchādesi);, Pv, PvA, \"dull\";, valāhaka);, Vv, valāhaka- sikhara • <at>a)fro\\s</at>, froth, of, <superscript> 9 </superscript>, <superscript> s. </su- perscript>, <superscript> 1 </superscript> • later, scum, rain;, ambha, rain, a, Miln, (megho, Miln), nīla-megha, sense, expl, , Dh
marajo</b>, <b>Rāhu</b>, pabbata, rajo, <b>abbhā</b>, by, perhaps, <b>abbhāmaa</b>, <br, /><b>-kūṭa</b>, or, summit, storm-cloud, <b>-ghana</b>, <b>-paṭala</b>, mass, <b>-mua</b>, from, abbhāmua, <b> -saŋvilāpa</b>. • , Vedic, imber, Oir, (dense, Vin, in, things, that, sunshine, is, referred, mountain, like, thunder-cloud);, the, point, thick, free, thundering Data: Pali: abhijjhitar • <smallcaps>i.</smallcaps>, v., l., <smallcaps>v.</smallcaps> • abhijjhita, abhijjhātar,°itar),°itar,°ātar• *m̊bhro, <at>o)/mbros</at>, ambu, mass, to, obscure, moon-, <b>abbhaŋ, mahikā </b>, <b>dhū-, marajo</b>, <b>Rāhu</b>, pabbata, rajo, <b>abbhā</b>, by, perhaps, <b>abbhāmaa</b>, <br, /><b>-kūṭa</b>, or, summit, storm-cloud, <b>-ghana</b>, <b>-paṭala</b>, mass, <b>-mua</b>, from, abbhāmua, <b> -saŋvilāpa</b>, • [Vedic, imber, Oir, (dense, Vin, in, things, that, sunshine, is, referred, moun- tain, like, thunder-cloud);, the, point, thick, free, thundering Data: Pali: abhijjhitar • <smallcaps>i.</smallcaps>, v., l., <smallcaps>v.</smallcaps> • abhijjhita, abhijjhātar,°itar),°itar,°ātar).,
. • [n, 265• [n., ag., fr., med., M, 287, (T., =, A, 265
PvA.59) ;, PvA.6,23;, phrase, ajjatagge, ajjato, agge(?), ajja-tagge,see, agga3),(adv. Dh ,thus, 326;, base, a3), diva). the, 32,23., (Page • ‹-›, -kālaṁ •Vedic, &, +, being, (see, (see, (read, as, (=, Mhvs, (=, +, Mhvs • of• in, function], one, who, covets Data. of, "on, "food" ) ; • and, an, old, not • adya, adyā,a, dyā,a°, dyā, dyaus, day"], to-day,now, bahutaṁ, with, day, -divasa, day • demonstr., pron., Loc., this, Kern,Toev., s., Freq., or, from, this, onward,hence-forth, this, morning, present• in, function], one, who, covets Data: Pali: ajja • Ajja,&, Ajjā, (adv.), base, a3), diva) ,thus, Dh.326;, ajjā;, v., PvA.59) ;, PvA.6,23;, phrase, ajjatagge, ajjato, agge(?), ajja-tagge,see, agga3),(adv.) , the, 32,23., (Page • ‹-›, -kālaṁ • [Vedic, &, +, being, (see, (see, (read, as, (=, Mhvs, (=, +, Mhvs • of, of, "on, "food" ) ; • and, an, old, not • adya, adyā,a, dyā,a°, dyā, dyaus, day"], to-day,now, bahutaṁ, with, day, -divasa, day • demonstr., pron., Loc., this, Kern,Toev., s., Freq., or, from, this, onward,hence- forth, this, morning, present
. • Sn, J 75,153,158,970,998;, I,279;, Pv Iii,425, I,117, Vin.I,18;, D.I,85;, DA.I,235., J.VI,180;, 10) Data: Pali: gūhanā • Pug.19.Cp.pari°. (Page • Gūhanā,f.), [abstr.fr.gūhati]=gūhanā • 253),(q.v.• Sn.75,153,158,970,998;, J.I,279;, III,425, Pv.I,117, idāni, 15,64., in, Vin.I,18;, D.I,85;, DA.I,235., J.VI,180;, 10) Data: Pali: gūhanā • Pug.19.Cp.pari°. (Page • Gūhanā, (f.), [abstr.fr.gūhati]=gūhanā • 253),(q.v.)
Data: Pali: pacati • Vin.IV,264;. 52Data: Pali: pacati • Vin.IV,264;, N.S.II,225,PvA.10,14.-, D.I,52
ripe], to, fig.torment, purgatory,(trs.and, pacitvā, aer, roasting, ppr.pacanto, tormenting,Gen.pacato,(+Caus.pācayato), read, pacato, for, paccato,by, pare, pp.pakka, Caus.pacāpeti, pāceti, Pass.paccati, to, roasted, or, tormented • bake,Gr.pέssw, intrs. ) :Niraye, (expld, daṇḍena, pīḷentassa) .-, (q.v. ) . ‹-›, (q.v. ) .-, be, (q.v. ) . (Page Normalized data • pacati, peka, pέssw. ; • Da.i,159,where, ; Pacati,, Ved.pacati,idg.*peqǔō,av.pac-;, Obulg.peka, Lith,kepū, 382pέpwn, pacitvā, ppr., pacanto, Gen., pacato, (+Caus., pācayato), pacato, paccato, pare, pīḷentassa)., pp., pakka, Caus., pacāpeti, Pass., paccati• DA.I,159,where, 382) • in • at, & • cook,pέpwn, cook,boil,roast • Pacati,[Ved.pacati,Idg.*peqǔō,Av.pac-;, Obulg.peka, to, fry,roast, Lith,kepū, ripe], to, fig.torment, purgatory,(trs.and, pacitvā, aer, roasting, ppr.pacanto, tormenting,Gen.pacato,(+Caus.pācayato), read, pacato, for, paccato,by, pare, pp.pakka, Caus.pacāpeti, pāceti, Pass.paccati, to, roasted, or, tormented • bake,Gr.pέssw, intrs. ) :Niraye, (expld, daṇḍena, pīḷentassa) .-, (q.v. ) . ‹-›, (q.v. ) .-, be, (q.v. ) . (Page Normalized data • pacati, peka, pέssw, pέpwn, pacitvā, ppr., pacanto, Gen., pacato, (+Caus., pācay- ato), pacato, paccato, pare, pīḷentassa)., pp., pakka, Caus., pacāpeti, Pass., paccati
. I • *peqǔō ; <->, -• Fry, Niraye, I Av, Obulg, Gr, 159bake • pac-;, 264;, 52, &, 382) • 10,14.-. (trs., D., DA., (q.v.)., (q.v.)., (q.v.• *peqǔō, bake • pac-;, 264;, 52, &, 382) • 10,14.-, 159, -, <->, - • fry, Niraye, I, I, by • Av., Obulg., Gr., (trs., D., DA., (q.v.)., (q.v.)., (q.v.).
to, cook, roast, torment, purgatory, and, aer, roasting, tormenting, (expld, at, where, read, for, daṇḍena, pāceti, to, be, roasted, or, tormented. Lith, boil, Vin.IV, fig., in, intrs.):, in, N.S.II. Greek-EnglishPage • Pacati225Ved., to, roast, kepū, cook, ripe• [Ved., to, roast, kepū, cook, ripe], to, cook, roast, torment, purgatory, and, aer, roasting, tormenting, (expld, at, where, read, for, daṇḍena, pāceti, to, be, roasted, or, tormented, (Page • Pacati, Idg., Lith, boil, Vin.IV, fig., in, intrs.):, in, N.S.II,225,PvA. Data: Twier 1 (Greek-English)
. • Business, Excellence Μόλις, IT Data: Twier. 2• BUSINESS, EXCELLENCE. • Μόλις, ψήφισα, αυτή, τη, λύση, Internet, of, στο, διαγωνισμό • ings, IT Data: Twier 2 (French-English)
e, collective, of, science-publish, or, perish;, it, all, that, counts?" • Demain, 18h, par • #dhiha6, David • @dhiparis. • Keynote, dynamics, is Data: Twier. 32015• Keynote, "e, collective, of, science-publish, or, perish;, it, all, that, counts?" • Demain, 18h, par • #dhiha6, David • @dhiparis, dynamics, is Data: Twier 3 (French-English) • #FWWC2015
Edmonton, to, for, the • in, waiting, #bilingualism • and, are, ready, just, fans Data: Twier. Food, English-Polish4• breuvages, go, • Food, Edmonton, to, for, the • in, waiting, #bilingualism • and, are, ready, just, fans Data: Twier 4 (English-Polish)
• comes, from, with, two, crates, of, strawberries, jackets, omg • my, dad, poland, and, adidas • back, żubrówka Data: Twier 5. Transliterated Amharic-English• comes, from, with, two, crates, of, strawberries, jackets, omg • my, dad, poland, and, adidas • back, żubrówka Data: Twier 5 (Transliterated Amharic-English)
• (coffee • bread). is, our • Buna, dabo. • (coffee • bread). is, our • Buna, dabo, naw
| [] |
[
"Sequential Attention: A Context-Aware Alignment Function for Machine Reading",
"Sequential Attention: A Context-Aware Alignment Function for Machine Reading"
] | [
"Sebastian Brarda \nCenter for Data Science\nCenter for Data Science\nCenter for Data Science and Department of Linguistics\nNew York University\nNew York University\nNew York University\n\n",
"Philip Yeres yeres@nyu.edu \nCenter for Data Science\nCenter for Data Science\nCenter for Data Science and Department of Linguistics\nNew York University\nNew York University\nNew York University\n\n",
"Samuel R Bowman bowman@nyu.edu \nCenter for Data Science\nCenter for Data Science\nCenter for Data Science and Department of Linguistics\nNew York University\nNew York University\nNew York University\n\n"
] | [
"Center for Data Science\nCenter for Data Science\nCenter for Data Science and Department of Linguistics\nNew York University\nNew York University\nNew York University\n",
"Center for Data Science\nCenter for Data Science\nCenter for Data Science and Department of Linguistics\nNew York University\nNew York University\nNew York University\n",
"Center for Data Science\nCenter for Data Science\nCenter for Data Science and Department of Linguistics\nNew York University\nNew York University\nNew York University\n"
] | [
"Proceedings of the 2nd Workshop on Representation Learning for NLP"
] | In this paper we propose a neural network model with a novel Sequential Attention layer that extends soft attention by assigning weights to words in an input sequence in a way that takes into account not just how well that word matches a query, but how well surrounding words match. We evaluate this approach on the task of reading comprehension (on the Who did What and CNN datasets) and show that it dramatically improves a strong baseline, the Stanford Reader, and is competitive with the state of the art. | 10.18653/v1/w17-2610 | [
"https://www.aclweb.org/anthology/W17-2610.pdf"
] | 2,202,801 | 1705.02269 | 4211f97691dc29627d17be500be0c43a04688171 |
Sequential Attention: A Context-Aware Alignment Function for Machine Reading
Sebastian Brarda, Philip Yeres (yeres@nyu.edu), Samuel R. Bowman (bowman@nyu.edu)
Center for Data Science and Department of Linguistics, New York University

Proceedings of the 2nd Workshop on Representation Learning for NLP, Vancouver, Canada. Association for Computational Linguistics, August 3, 2017.
In this paper we propose a neural network model with a novel Sequential Attention layer that extends soft attention by assigning weights to words in an input sequence in a way that takes into account not just how well that word matches a query, but how well surrounding words match. We evaluate this approach on the task of reading comprehension (on the Who did What and CNN datasets) and show that it dramatically improves a strong baseline, the Stanford Reader, and is competitive with the state of the art.
Introduction
Soft attention (Bahdanau et al., 2014), a differentiable method for selecting the inputs for a component of a model from a set of possibilities, has been crucial to the success of artificial neural network models for natural language understanding tasks like reading comprehension that take short passages as inputs. However, standard approaches to attention in NLP select words with only very indirect consideration of their context, limiting their effectiveness. This paper presents a method to address this by adding explicit context sensitivity into the soft attention scoring function.
We demonstrate the effectiveness of this approach on the task of cloze-style reading comprehension. A problem in the cloze style consists of a passage p, a question q, and an answer a drawn from among the entities mentioned in the passage. In particular, we use the CNN dataset (Hermann et al., 2015), which introduced the task into widespread use in evaluating neural networks for language understanding, and the newer and more carefully quality-controlled Who did What dataset (Onishi et al., 2016).

* These authors contributed equally to this work.

Figure 1: The Sequential Attention Model. RNNs first encode the question into a vector $j$ and the document into a sequence of vectors $H$. For each word index $i$ in the document, a scoring vector $\gamma_i$ is then computed from $j$ and $h_i$ using a function like the partial bilinear function shown here. These vectors are then used as inputs to another RNN layer, the outputs of which ($\eta_i$) are summed elementwise and used as attention scores ($\alpha_i$) in answer selection.
In standard approaches to soft attention over passages, a scoring function is first applied to every word in the source text to evaluate how closely that word matches a query vector (here, a function of the question). The resulting scores are then normalized and used as the weights in a weighted sum which produces an output or context vector summarizing the most salient words of the input, which is then used in a downstream model (here, to select an answer).
In this work we propose a novel scoring function for soft attention that we call Sequential Attention (SA), shown in Figure 1. In an SA model, a multiplicative interaction scoring function is used to produce a scoring vector for each word in the source text. A newly-added bidirectional RNN then consumes those vectors and uses them to produce a context-aware scalar score for each word. We evaluate this scoring function within the context of the Stanford Reader (Chen et al., 2016), and show that it yields dramatic improvements in performance. On both datasets, it is outperformed only by the Gated Attention Reader (Dhingra et al., 2016), which in some cases has access to features not explicitly seen by our model.
Related Work
In addition to Chen et al. (2016)'s Stanford Reader model, there have been several other modeling approaches developed to address these reading comprehension tasks. Seo et al. (2016) introduced the Bi-Directional Attention Flow model, which consists of a multi-stage hierarchical process to represent context at different levels of granularity; it uses the concatenation of the passage word representation, the question word representation, and the element-wise product of these vectors in its attention flow layer. This is a more complex variant of the classic bilinear term that multiplies this concatenated vector with a vector of weights, producing attention scalars. Dhingra et al. (2016)'s Gated-Attention Reader integrates a multi-hop structure with a novel attention mechanism, essentially building query-specific representations of the tokens in the document to improve prediction. This model conducts a classic dot-product soft attention to weight the query representations, which are then multiplied element-wise with the context representations and fed into the next RNN layer. After several hidden layers that repeat the same process, the dot product between the context representation and the query is used to compute a classic soft attention.
Outside the task of reading comprehension there has been other work on soft attention over text, largely focusing on the problem of attending over single sentences. Luong et al. (2015) study several issues in the design of soft attention models in the context of translation, and introduce the bilinear scoring function. They also propose the idea of attention input-feeding where the original attention vectors are concatenated with the hidden representations of the words and fed into the next RNN step. The goal is to make the model fully aware of the previous alignment choices.
In work largely concurrent to our own, Kim et al. (2017) explore the use of conditional random fields (CRFs) to impose a variety of constraints on attention distributions achieving strong results on several sentence level tasks.
Modeling
Given the tuple (passage, question, answer), our goal is to predict $\Pr(a \mid d, q)$, where $a$ refers to the answer, $d$ to the passage, and $q$ to the question. We define the words of each passage and question as $d = d_1, \ldots, d_m$ and $q = q_1, \ldots, q_l$, respectively, where exactly one $q_i$ contains the token @blank, representing a blank that can be correctly filled in by the answer. With calibrated probabilities $\Pr(a \mid d, q)$, we take $\arg\max_a \Pr(a \mid d, q)$, where possible $a$'s are restricted to the subset of anonymized entity symbols present in $d$. In this section, we present two models for this reading comprehension task: Chen et al. (2016)'s Stanford Reader, and our version with a novel attention mechanism which we call the Sequential Attention model.
Stanford Reader
Encoding  Each word or entity symbol is mapped to a $d$-dimensional vector via an embedding matrix $E \in \mathbb{R}^{d \times |V|}$. For simplicity, we denote the vectors of the passage and question as $d = d_1, \ldots, d_m$ and $q = q_1, \ldots, q_l$, respectively. The Stanford Reader (Chen et al., 2016) uses bidirectional GRUs to encode the passage and questions. For the passage, the hidden state is defined as $h_i = \mathrm{concat}(\overrightarrow{h}_i, \overleftarrow{h}_i)$, where the contextual embeddings $d_i$ of each word in the passage are encoded in both directions:

$$\overleftarrow{h}_i = \mathrm{GRU}(\overleftarrow{h}_{i+1}, d_i) \quad (1)$$
$$\overrightarrow{h}_i = \mathrm{GRU}(\overrightarrow{h}_{i-1}, d_i) \quad (2)$$

And for the question, the last hidden representation of each direction is concatenated:

$$j = \mathrm{concat}(\overrightarrow{j}_l, \overleftarrow{j}_1) \quad (3)$$

Attention and answer selection  The Stanford Reader uses bilinear attention (Luong et al., 2015):

$$\alpha_i = \mathrm{softmax}_i(j W h_i) \quad (4)$$

where $W$ is a learned parameter matrix of the bilinear term that computes the similarity between $j$ and $h_i$ with greater flexibility than a dot product. The output vector is then computed as a linear combination of the hidden representations of the passage, weighted by the attention coefficients:

$$o = \sum_i \alpha_i h_i \quad (5)$$

The prediction is the answer, $a$, with highest probability from among the anonymized entities:

$$a = \operatorname*{argmax}_{a \in p \,\cap\, \mathrm{entities}} M_a^{\top} o \quad (6)$$

Here, $M$ is the weight matrix that maps the output to the entities, and $M_a$ represents the column of a certain entity. Finally, a softmax layer is added on top of $M_a^{\top} o$ with a negative log-likelihood objective for training.
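The authors' implementation was written in Theano and Lasagne; purely as an illustration, the pipeline of eqs. (1)-(6) could be sketched in PyTorch as below. The class and argument names (StanfordReader, entity_mask, num_entities) are ours rather than the original code's, and details such as padding and the construction of the entity mask are simplified.

```python
import torch
import torch.nn as nn

class StanfordReader(nn.Module):
    """Minimal sketch of the Stanford Reader pipeline (eqs. 1-6); names are illustrative."""
    def __init__(self, vocab_size, emb_dim=100, hidden=128, num_entities=600):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.doc_rnn = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.q_rnn = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.W = nn.Linear(2 * hidden, 2 * hidden, bias=False)    # bilinear term, eq. (4)
        self.M = nn.Linear(2 * hidden, num_entities, bias=False)  # output-to-entity map, eq. (6)

    def forward(self, doc, question, entity_mask):
        # doc: (B, m) token ids; question: (B, l) token ids; entity_mask: (B, num_entities) bool
        H, _ = self.doc_rnn(self.emb(doc))                  # h_i = [fwd; bwd], eqs. (1)-(2)
        _, q_state = self.q_rnn(self.emb(question))         # final hidden states, both directions
        j = torch.cat([q_state[0], q_state[1]], dim=-1)     # eq. (3)
        scores = torch.einsum('bd,bmd->bm', self.W(j), H)   # j W h_i, eq. (4)
        alpha = torch.softmax(scores, dim=1)
        o = torch.einsum('bm,bmd->bd', alpha, H)             # eq. (5)
        logits = self.M(o)                                    # M^T o, eq. (6)
        # restrict predictions to entity symbols that actually appear in the passage
        return logits.masked_fill(~entity_mask, float('-inf'))
```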
Sequential Attention
In the Sequential Attention model, instead of producing a single scalar value $\alpha_i$ for each word in the passage by using a bilinear term, we define the vectors $\gamma_i$ with a partial-bilinear term. 1 Instead of taking the dot product as in the bilinear term, we conduct an element-wise multiplication to produce a vector instead of a scalar:

$$\gamma_i = j \circ W h_i \quad (7)$$

where $W$ is a matrix of learned parameters. It is also possible to use a plain element-wise multiplication, thus dispensing with the parameters $W$:

$$\gamma_i = j \circ h_i \quad (8)$$

We then feed the $\gamma_i$ vectors into a new bidirectional GRU layer to get the hidden attention vector representations $\eta_i$:

$$\overleftarrow{\eta}_i = \mathrm{GRU}(\overleftarrow{\eta}_{i+1}, \gamma_i) \quad (9)$$
$$\overrightarrow{\eta}_i = \mathrm{GRU}(\overrightarrow{\eta}_{i-1}, \gamma_i) \quad (10)$$

We concatenate the directional $\eta$ vectors to be consistent with the structure of previous layers:

$$\eta_i = \mathrm{concat}(\overrightarrow{\eta}_i, \overleftarrow{\eta}_i) \quad (11)$$

Finally, we compute the $\alpha$ weights as below, and proceed as before:

$$\alpha_i = \mathrm{softmax}_i(\mathbf{1}^{\top} \eta_i) \quad (12)$$
$$o = \sum_i \alpha_i h_i \quad (13)$$
$$a = \operatorname*{argmax}_{a \in p \,\cap\, \mathrm{entities}} M_a^{\top} o \quad (14)$$
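Continuing the same illustrative PyTorch sketch (again, not the authors' Theano/Lasagne code; class names and hidden sizes are ours), the Sequential Attention layer of eqs. (7)-(13) replaces the scalar bilinear score with a scoring vector $\gamma_i$ that is read by a second bidirectional GRU:

```python
import torch
import torch.nn as nn

class SequentialAttention(nn.Module):
    """Sketch of eqs. (7)-(13). H: (B, m, d) passage states, j: (B, d) question vector."""
    def __init__(self, d, partial_bilinear=True):
        super().__init__()
        # eq. (7) uses a learned W; eq. (8) drops it and multiplies element-wise directly
        self.W = nn.Linear(d, d, bias=False) if partial_bilinear else None
        # hidden size here is illustrative; the paper sets the attention GRU so that eta_i lies in R^256
        self.att_rnn = nn.GRU(d, d, bidirectional=True, batch_first=True)

    def forward(self, H, j):
        Hw = self.W(H) if self.W is not None else H
        gamma = j.unsqueeze(1) * Hw                  # gamma_i = j o W h_i, eqs. (7)/(8)
        eta, _ = self.att_rnn(gamma)                 # bidirectional GRU over the gamma_i, eqs. (9)-(11)
        scores = eta.sum(dim=-1)                     # 1^T eta_i, eq. (12)
        alpha = torch.softmax(scores, dim=1)
        o = torch.einsum('bm,bmd->bd', alpha, H)     # eq. (13)
        return o, alpha
```

Answer selection then proceeds exactly as in eq. (14), reusing the entity projection $M$ from the reader sketch above.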
Experiments and Results
We evaluate our model on two tasks, CNN and Who did What (WDW; we used the strict version of WDW). For CNN, we used the anonymized version of the dataset released by Hermann et al. (2015), containing 380,298 training, 3,924 dev, and 3,198 test examples. For WDW we used Onishi et al. (2016)'s data generation script to reproduce their WDW data, yielding 127,786 training, 10,000 dev, and 10,000 test examples. 2

Training  We implemented all our models in Theano (Theano Development Team, 2016) and Lasagne (Dieleman et al., 2015) and used the Stanford Reader (Chen et al., 2016) open source implementation as a reference. We largely used the same hyperparameters as Chen et al. (2016) in the Stanford Reader: |V| = 50K, embedding size d = 100, GloVe (Pennington et al., 2014) word embeddings 3 for initialization, hidden size h = 128. The size of the hidden layer of the bidirectional RNN used to encode the attention vectors is double the size of the one that encodes the words, since it receives vectors that result from the concatenation of GRUs that go in both directions, η ∈ R^256. Attention and output parameters were initialized from a U(−0.01, 0.01) distribution while GRU weights were initialized from N(0, 0.1).
Learning was carried out with SGD with a learning rate of 0.1, batch size of 32, gradient clipping of norm 10 and dropout of 0.2 in all the vertical layers 4 (including the Sequential Attention layer). Also, all the anonymized entities were relabeled according to the order of occurrence, as in the Stanford Reader. We trained all models for 30 epochs.
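As a rough sketch of this optimization setup (same caveats as above: PyTorch rather than the original Theano/Lasagne code; model, train_loader, and the batch tensor names are assumptions), the training loop would look roughly like:

```python
import torch
import torch.nn.functional as F

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # SGD, learning rate 0.1

for epoch in range(30):                                   # 30 training epochs
    for doc, question, entity_mask, answer in train_loader:   # batches of size 32 assumed in the loader
        optimizer.zero_grad()
        logits = model(doc, question, entity_mask)
        loss = F.cross_entropy(logits, answer)             # softmax + negative log-likelihood
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)  # gradient clipping, norm 10
        optimizer.step()

# dropout of 0.2 would be applied inside the model's vertical layers, e.g. with nn.Dropout(0.2)
```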
Results
Who did What  In our experiments the Stanford Reader (SR) achieved an accuracy of 65.6% on the strict WDW dataset, compared to the 64% that Onishi et al. (2016) reported. The Sequential Attention model (SA) with the partial-bilinear scoring function reached 67.21%, which is the second best performance on the WDW leaderboard, surpassed only by the 71.2% from the Gated Attention Reader (GA) with qe-comm (Li et al., 2016) features and fixed GloVe embeddings. However, the GA model without qe-comm features and fixed embeddings performs significantly worse at 67%. We did not use these features in our SA models, and it is likely that adding them could further improve SA model performance. We also experimented with fixed embeddings in SA models, but fixed embeddings reduced SA performance. Another experiment we conducted was to add 100K training samples from CNN to the WDW data. This increase in the training data size boosted accuracy by 1.4% with the SR and 1.8% with the Sequential Attention model, reaching 69% accuracy. This improvement strongly suggests that the gap in performance/difficulty between the CNN and WDW datasets is partially related to the difference in training set sizes, which results in overfitting.
CNN  For a final sanity check and a fair comparison against a well-known benchmark, we ran our Sequential Attention model on exactly the same CNN data used by Chen et al. (2016).
The Sequential Attention model with the partial-bilinear attention scoring function took on average 2X more time per epoch to train than the Stanford Reader. However, our model converged in only 17 epochs vs. 30 for the SR. The results of training the SR on CNN were slightly lower than the 73.6% reported by Chen et al. (2016). The Sequential Attention model achieved 77.1% accuracy, a 3.7% gain with respect to the SR.
Model comparison on CNN
After achieving good performance with SA we wanted to understand what was driving the increase in accuracy. It is clear that SA has more trainable parameters compared to SR. However, it was not clear if the additional computation required to learn those parameters should be allocated in the attention mechanism, or used to compute richer hidden representations of the passage and questions. Additionally, the bilinear parameters increase the computational requirements, but their impact on performance was not clear. To answer these questions we compared the following models: i) SR with dot-product attention; ii) SR with bilinear attention; iii) SR with two layers (to compute the hidden question and passage representations) and dot-product attention; iv) SR with two layers and bilinear attention; v) SA with elementwise multiplication scoring function; vi) SA with partial-bilinear scoring function.
Surprisingly, the element-wise version of SA performed better than the partial-bilinear version, with an accuracy of 77.3% which, to our knowledge, has only been surpassed by Dhingra et al. (2016) with their Gated-Attention Reader model.
Additionally, 1-layer SR with dot-product attention got 0.3% lower accuracy than the 1-layer SR with bilinear attention. These results suggest that the bilinear parameters do not significantly improve performance over dot-product attention.
Adding an additional GRU layer to encode the passage and question in the SR model increased performance over the original 1-layer model. With dot-product attention the increase was 1.1%, whereas with bilinear attention the increase was 1.3%. However, these performance increases were considerably less than the lift from using an SA model (and SA has fewer parameters).

Table 2: Accuracy on CNN test sets and number of trainable parameters for various Stanford Reader (SR) and Sequential Attention (SA) models.

Model                         CNN     Params
SR, dot prod. att.            73.1%   5.44 × 10^6
SR, bilinear att.             73.4%   5.50 × 10^6
SR, 2-layer, dot prod. att.   74.2%   5.83 × 10^6
SR, 2-layer, bilinear att.    74.7%   5.90 × 10^6
SA, element-wise att.         77.3%   5.73 × 10^6
SA, partial-bilinear att.     77.1%   5.80 × 10^6
Discussion
The difference between our Sequential Attention and standard approaches to attention is that we conserve the distributed representation of similarity for each token and use that contextual information when computing attention over other words. In other words, when the bilinear attention layer computes $\alpha_i = \mathrm{softmax}_i(j W h_i)$, it only cares about the magnitude of the resulting $\alpha_i$ (the amount of attention that it gives to that word), whereas if we keep the vector $\gamma_i$ we also know which dimensions of the distributed representation of the attention weighed in on that decision. Furthermore, if we use that information to feed a new GRU, it helps the model learn how to assign attention to surrounding words. Compared to Sequential Attention, Bi-Directional Attention Flow uses a considerably more complex architecture with a query representation for each word in the question. Unlike the Gated Attention Reader, SA does not require intermediate soft attention, and it uses only one additional RNN layer. Furthermore, in SA no dot product is required to compute attention, only the sum of the elements of the $\eta$ vector. SA's simpler architecture performs close to the state of the art. Figure 2 shows some sample model behavior. In this example and elsewhere, SA results in less sparse attention vectors compared to SR, and this helps the model assign attention not only to potential target strings (anonymized entities) but also to relevant contextual words that are related to those entities. This ultimately leads to richer semantic representations $o = \sum_i \alpha_i h_i$ of the passage.
Finally, we found: i) bilinear attention does not yield dramatically higher performance compared to dot-product attention; ii) bilinear parameters do not improve SA performance; iii) increasing the number of layers in the attention mechanism yields considerably greater performance gains with fewer parameters compared to increasing the number of layers used to compute the hidden representations of the question and passage.
Conclusion and Discussion
In this paper we created a novel and simple model with a Sequential Attention mechanism that performs near the state of the art on the CNN and WDW datasets by improving the bilinear and dot-product attention mechanisms with an additional bi-directional RNN layer. This additional layer allows local alignment information to be used when computing the attention score for each token. Furthermore, it provides higher performance gains with fewer parameters compared to adding an additional layer to compute the question and passage hidden representations. For future work we would like to try other machine reading datasets such as SQuAD and MS MARCO. We also think that some elements of the SA model could be combined with ideas from recent research by Dhingra et al. (2016) and Seo et al. (2016). We believe that the SA mechanism may benefit other tasks as well, such as machine translation.
Figure 2: Representative sample output for the Stanford Reader and our model.
We used the strict version of WDW. For CNN, we used the anonymized version of the dataset released by Hermann et al. (2015), containing 380,298 training, 3,924 dev, and 3,198 test examples. For WDW we used Onishi et al. (2016)'s data generation script to reproduce their WDW data, yielding 127,786 training, 10,000 dev, and 10,000 test examples.²

Training
We implemented all our models in Theano (Theano Development Team, 2016) and Lasagne (Dieleman et al., 2015) and used the Stanford Reader (Chen et al., 2016) open source implementation as a reference. We largely used the same hyperparameters as Chen et al. (2016) in the Stanford Reader: |V| = 50K, embedding size d = 100, GloVe (Pennington et al., 2014) word embeddings³ for initialization, hidden size h = 128. The size of the hidden layer of the bidirectional RNN used to encode the attention vectors is double the size of the one that encodes the words, since it receives vectors that result from the concatenation of GRUs that go in both directions, η ∈ R^256. Attention and output parameters were
Model                   WDW Strict   CNN
Attentive Reader        53%          63%
Stanford Reader         65.6%        73.4%
+ SA partial-bilinear   67.2%        77.1%
Gated Att. Reader       71.2%        77.9%

Table 1: Accuracy on WDW and CNN test sets
Table 2: Accuracy on CNN test sets and number of trainable parameters for various Stanford Reader (SR) and Sequential Attention (SA) models.
in a way that takes into account not just how well that word matches a query, but how well surrounding words match. We evaluate this approach on the task of reading comprehension (on the Who did What and CNN datasets) and show that it dramatically improves a strong baseline, the Stanford Reader, and is competitive with the state of the art.
Note that doing softmax over the sum of the terms of the γ_i vectors would lead to the same α_i as the Stanford Reader.

2 In the WDW data we found 340 examples in the strict training set, 545 examples in the relaxed training set, 20 examples in the test set, and 30 examples in the validation set that were not answerable because the anonymized answer entity did not exist in the passage. We removed these examples, reducing the size of the WDW test set by 0.2%, to 9,980. We believe this difference is not significant and did not bias the comparison between models.

3 The GloVe word vectors used were pretrained with 6 billion tokens with an uncased vocab of 400K words, and were obtained from Wikipedia 2014 and Gigaword 5.

We also tried increasing the hidden size to 200, using 200d GloVe word representations, and increasing the dropout rate to 0.3. Finally, we increased the number of hidden encoding layers to two. None of these changes resulted in significant performance improvements, in accordance with Chen et al. (2016).
Acknowledgements
This paper was the result of a term project for the NYU Course DS-GA 3001, Natural Language Understanding with Distributed Representations. Bowman acknowledges support from a Google Faculty Research Award and gifts from Tencent Holdings and the NVIDIA Corporation.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473. http://arxiv.org/abs/1409.0473.

Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. CoRR abs/1606.02858. http://arxiv.org/abs/1606.02858.

Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR abs/1406.1078. http://arxiv.org/abs/1406.1078.

Bhuwan Dhingra, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. 2016. Gated-attention readers for text comprehension. CoRR abs/1606.01549. http://arxiv.org/abs/1606.01549.

Sander Dieleman, Jan Schlüter, Colin Raffel, Eben Olson, Søren Kaae Sønderby, Daniel Nouri, Daniel Maturana, Martin Thoma, Eric Battenberg, Jack Kelly, Jeffrey De Fauw, Michael Heilman, Diogo Moitinho de Almeida, Brian McFee, Hendrik Weideman, Gábor Takács, Peter de Rivaz, Jon Crall, Gregory Sanders, Kashif Rasul, Cong Liu, Geoffrey French, and Jonas Degrave. 2015. Lasagne: First release. https://doi.org/10.5281/zenodo.27878.

Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. CoRR abs/1506.03340. http://arxiv.org/abs/1506.03340.

Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. 2017. Structured attention networks. CoRR abs/1702.00887.

Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, and Wei Xu. 2016. Dataset and neural recurrent sequence labeling model for open-domain factoid question answering. CoRR abs/1607.06275. http://arxiv.org/abs/1607.06275.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. CoRR abs/1508.04025. http://arxiv.org/abs/1508.04025.

Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David A. McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. CoRR abs/1608.05457. http://arxiv.org/abs/1608.05457.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543. http://www.aclweb.org/anthology/D14-1162.

Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. CoRR abs/1611.01603. http://arxiv.org/abs/1611.01603.

Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical expressions. CoRR abs/1605.02688.
| [] |
[
"Reconciliation of Pre-trained Models and Prototypical Neural Networks in Few-shot Named Entity Recognition",
"Reconciliation of Pre-trained Models and Prototypical Neural Networks in Few-shot Named Entity Recognition"
] | [
"Youcheng Huang \nCollege of Computer Science\nSichuan University\n\n",
"♠ ♥ ",
"Wenqiang Lei \nCollege of Computer Science\nSichuan University\n\n",
"♠ † ",
"Jie Fu \nBeijing Academy of Artificial Intelligence\n\n",
"♣ ",
"Jiancheng Lv \nCollege of Computer Science\nSichuan University\n\n"
] | [
"College of Computer Science\nSichuan University\n",
"College of Computer Science\nSichuan University\n",
"Beijing Academy of Artificial Intelligence\n",
"College of Computer Science\nSichuan University\n"
] | [] | Incorporating large-scale pre-trained models with the prototypical neural networks is a de-facto paradigm in few-shot named entity recognition. Existing methods, unfortunately, are not aware of the fact that embeddings from pre-trained models contain a prominently large amount of information regarding word frequencies, biasing prototypical neural networks against learning word entities. This discrepancy constrains the two models' synergy. Thus, we propose a one-line-code normalization method to reconcile such a mismatch with empirical and theoretical grounds. Our experiments based on nine benchmark datasets show the superiority of our method over the counterpart models and are comparable to the stateof-the-art methods. In addition to the model enhancement, our work also provides an analytical viewpoint for addressing the general problems in few-shot name entity recognition or other tasks that rely on pre-trained models or prototypical neural networks. 1 | 10.48550/arxiv.2211.03270 | [
"https://export.arxiv.org/pdf/2211.03270v1.pdf"
] | 253,384,038 | 2211.03270 | 5c4070cb73a3c2045d99773bc36ba6520c501662 |
Reconciliation of Pre-trained Models and Prototypical Neural Networks in Few-shot Named Entity Recognition
Youcheng Huang
College of Computer Science
Sichuan University
♠ ♥
Wenqiang Lei
College of Computer Science
Sichuan University
♠ †
Jie Fu
Beijing Academy of Artificial Intelligence
♣
Jiancheng Lv
College of Computer Science
Sichuan University
Reconciliation of Pre-trained Models and Prototypical Neural Networks in Few-shot Named Entity Recognition
Incorporating large-scale pre-trained models with prototypical neural networks is a de-facto paradigm in few-shot named entity recognition. Existing methods, unfortunately, are not aware of the fact that embeddings from pre-trained models contain a prominently large amount of information regarding word frequencies, biasing prototypical neural networks against learning word entities. This discrepancy constrains the two models' synergy. Thus, we propose a one-line-code normalization method to reconcile such a mismatch, with empirical and theoretical grounds. Our experiments based on nine benchmark datasets show the superiority of our method over the counterpart models and demonstrate results comparable to the state-of-the-art methods. In addition to the model enhancement, our work also provides an analytical viewpoint for addressing general problems in few-shot named entity recognition or other tasks that rely on pre-trained models or prototypical neural networks.¹
Introduction
Named entity recognition (NER) is a classical task in natural language processing (NLP) which aims to automatically identify entities in plain text by classifying each word into a set of pre-defined entity types, e.g. "person/location", or into "others" (no entity) (Yadav and Bethard, 2019). As a crucial sub-component of many language understanding tasks, NER has been widely adopted in different applications, e.g. news (Sang and De Meulder, 2003) and the medical domain (Stubbs and Uzuner, 2015).
Neural networks (NNs) have achieved great success in NER (Lample et al., 2016). However, NNs face the adaptation challenge (Wilson and Cook, 2020), as words in different entities can change to a great extent (Yang and Katiyar, 2020), e.g. "Mr. Bush" in the "person" entity vs. "budgets" in the "money" entity, and obtaining sufficient annotations of new entities can be expensive (Ding et al., 2021). Few-shot NER, a cost-efficient solution, aims at training a model to be aware of unseen entities given few labeled examples (Huang et al., 2021). Few-shot NER has received rising interest in the NLP community, where new datasets (Ding et al., 2021) and methods (Das et al., 2022; Yang and Katiyar, 2020; Tong et al., 2021) have been constantly proposed.

† Correspondence to Wenqiang Lei.
1 Our code is available at https://github.com/HamLaertes/EMNLP_2022_Reconciliation
A low-dimensional manifold encodes more adaptive information (Wang et al., 2018). Prototypical neural networks (PNNs) (Snell et al., 2017) learn an embedding space where same-entity datapoints are clustered around a center, called the prototype, and the distances between the query data and all prototypes represent its entity probabilities. In addition to using an embedding network, PNNs calculate the prototypes and distances via a non-parametric algorithm, gaining popularity for their flexibility and low computational cost (Wang et al., 2020). A complementary enhancement is to use embeddings from large-scale pre-trained models (PTMs), like BERT (Devlin et al., 2019), to provide extra knowledge that helps PNNs' learning of entities. As such, incorporating PTMs with PNNs has become a de-facto paradigm for few-shot NER that achieves competitive results compared to the state of the art (Ding et al., 2021; Huang et al., 2021; Bao et al., 2020). Related works consider NER-specific properties (Tong et al., 2021) or new learning algorithms (Das et al., 2022; Yang and Katiyar, 2020) to enhance the model, but they tend not to examine the coordinating effects between PTMs and PNNs in terms of the information contained in the embeddings.
It should be noted that PNNs calculate distances between word embeddings and prototypes to represent entity probabilities. However, PTM embeddings may not effectively provide entity information, as they prominently contain information on word frequencies (Mu and Viswanath, 2018; Li et al., 2020b), and we find frequencies are shallow statistics that can cause a loss of in-depth and useful entity-denoting information. By probing into PNNs, we find that words tend to be classified to the entity whose prototype is centered on higher-frequency words. Therefore, the distance measure is biased towards focusing on frequencies. Such a bias can cause the over-fitting of PNNs and unreliability when classifying new entities. As a consequence, when frequencies change on a new corpus, the distances can no longer effectively represent the entity probabilities.
From a mathematical view, the biased distance is mainly caused by the varying prototype 2-norms. However, we argue that those 2-norms contribute little to, and actually undermine, the correct classification. We propose to normalize all prototypes to unit vectors as a simple yet effective remedy to reconcile PNNs and PTMs for few-shot NER. Our experiments on nine few-shot NER datasets (Huang et al., 2021; Ding et al., 2021) demonstrate the effectiveness of our one-line-code remedy. The normalized PNNs achieve competitive results compared to the state-of-the-art methods while retaining all the PNNs' advantages, such as easy implementation and low computation cost. We also demonstrate that normalization makes PNNs learn more effectively about correctly classifying entities, and conduct ablation studies on different normalization strategies.
Our study on reconciling PTMs and PNNs, and the promising performance of the simple normalization method, may inspire new research directions for few-shot NER, as well as for other fields that involve the use of PTMs and/or PNNs.

Named entity recognition can be formalized as word classification (Figure 1). For few-shot classification (FSC), "K-way N-shot" describes the task setting: after training, the model needs to classify the query data into K training-unseen classes, given N labeled examples per class. The core issue in FSC is unreliable empirical loss minimization: as the labeled data is extremely limited during testing, the loss defined on new classes will result in a sub-optimal solution that may lead to undesired performance (Wang et al., 2020).
To tackle this issue, researchers seek solutions with embedding-based methods (Wang et al., 2020; Koch et al., 2015; Vinyals et al., 2016; Sung et al., 2018; Snell et al., 2017). Specifically, an embedding network projects datapoints onto a low-dimensional manifold that contains general features shared among the training and testing classes. In the embedding space, training only a small classifier for new classes consumes fewer data and can achieve equivalently good results. Recent embedding-based classifiers with meta-learning (Hochreiter et al., 2001) divide the training data into several "episodes" mimicking the "K-way N-shot" testing format. Such methods are popularly known for their effectiveness in FSC.
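As a concrete illustration of the episodic format, a K-way N-shot episode can be assembled as in the sketch below; the data layout (a dict from class label to labeled examples) and the query-set size are assumptions made for illustration only.

import random

def sample_episode(data_by_class, k_way=5, n_shot=1, n_query=10):
    # data_by_class: {class_label: [example, ...]}; each sampled class must contain
    # at least n_shot + n_query examples.
    classes = random.sample(list(data_by_class.keys()), k_way)
    support, query = [], []
    for label in classes:
        examples = random.sample(data_by_class[label], n_shot + n_query)
        support += [(x, label) for x in examples[:n_shot]]   # the N labeled examples per class
        query += [(x, label) for x in examples[n_shot:]]     # used to compute the episode loss
    return support, query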
Prototypical Neural Network
PNNs (Snell et al., 2017) assume that, in the embedding space, same-class datapoints are clustered around class centers, called the prototypes, and that the distances between datapoints and prototypes represent their class probabilities. Based on this assumption, PNNs need only calculate: 1) the prototypes, using the embedded labeled data, and 2) the distances between the embedded query data and the prototypes, to conduct the classification. Detailed discussions of PNNs are presented in Sections 3 and 5. Utilizing large-scale PTMs as the embedding networks, PNNs can achieve competitive results on various natural language FSC tasks (Ding et al., 2021; Holla et al., 2020; Huang et al., 2021; Bao et al., 2020).
In NER, recent works consider a range of methods to enhance the coordinated use of PTMs and PNNs, including in-domain pre-training (Huang et al., 2021), NER-specific properties (Tong et al., 2021), and sophisticated learning algorithms (Das et al., 2022; Yang and Katiyar, 2020). However, to the best of our knowledge, little has been explored regarding the correct combination of PTMs and PNNs. There have been works finding that both small-scale (Mikolov et al., 2013; Pennington et al., 2014) and recent large-scale (Devlin et al., 2019; Liu et al., 2019) PTMs have limitations in representing diverse language semantics (Mu and Viswanath, 2018; Gao et al., 2018; Li et al., 2020b). Such limitations may prevent PNNs from correctly adopting entity information, reducing the possibility of obtaining optimal results.
Distance in Prototypical Neural Networks
In this section, we describe PNNs' feed-forward propagation from a mathematical viewpoint, focusing on the PNNs' distance function. In K-way N-shot, let S_k denote the small support set containing N labeled examples of class k. PNNs calculate the prototype of each class by mean-aggregating the embedded support examples:

c_k = (1 / |S_k|) Σ_{x_i ∈ S_k} f_φ(x_i)    (1)

where f_φ is the embedding network. The class probabilities of a query datapoint x are given by a distance function d followed by a softmax operation:

p_φ(y = k | x) = exp(−d(f_φ(x), c_k)) / Σ_{k'} exp(−d(f_φ(x), c_{k'}))    (2)
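A minimal PyTorch sketch of Equations (1) and (2) is given below, assuming the support embeddings are already grouped by class and using the squared Euclidean distance motivated in the following paragraphs; the function name and tensor shapes are our own assumptions.

import torch

def prototypical_probabilities(support_emb, query_emb):
    # support_emb: (K, N, h) embeddings of the N support examples of each of the K classes
    # query_emb:   (Q, h) embeddings of the query words
    prototypes = support_emb.mean(dim=1)               # Eq. (1): c_k is the mean of the class-k support embeddings
    dists = torch.cdist(query_emb, prototypes) ** 2    # squared Euclidean distance d(f_phi(x), c_k)
    return torch.softmax(-dists, dim=-1)               # Eq. (2): softmax over negative distances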
Theorem 1. Assume the data embeddings of the support and query sets are independent and identically distributed. Let c_k be the class prototype calculated by an aggregation function proto(·) : ∏_{i=1}^{N} H_i → h ∈ H. The problem min_{proto(·)} J, where J is the classification loss, achieves its minimum when proto(·) is the arithmetic mean.

Corollary 1.1. Based on the support set, PNNs estimate a Gaussian distribution N_k(c_k, σ²) for the embeddings in class k (σ is a constant vector). The corresponding choice of the Bregman divergence d should be the squared Euclidean distance.
Proofs are provided in Appendix B. While d is proposed to be any Bregman divergence (Banerjee et al., 2005; Snell et al., 2017), we prove that the optimal distance function should be the squared Euclidean distance ‖z − z'‖².² PNNs consider that the distances between embeddings and prototypes represent the entity probabilities. Therefore, we count on the distance to capture the entity information shared between the word and the prototypes. Factorizing the distance −‖f_φ(x) − c_k‖² into −‖f_φ(x)‖² + 2 f_φ(x)^T c_k − ‖c_k‖², the entity probabilities are not only proportional to the dot product but are also inversely proportional to the two 2-norms. While ‖f_φ(x)‖² represents query data information, a varying ‖c_k‖² implies that part of the probabilities are priorly determined, and the word is more likely to be classified to the entity that has the smaller prototype 2-norm. Unfortunately, because of the representation degeneration of PTMs, these priorly determined probabilities tend to introduce non-entity information and bias the PNNs' distance towards frequencies.

In this section, we introduce the concept of representation degeneration in PTMs and explain its associated effects on PNNs. Small-scale PTMs, like GloVe (Pennington et al., 2014) and Word2Vec (Mikolov et al., 2013), are argued by researchers to be low-capacity models for representing the richness in semantics of natural languages (Zhao et al., 2018). Both theoretical (Gao et al., 2018) and empirical (Mu and Viswanath, 2018) results in the literature have shown that the learned word embeddings contain substantial non-semantic statistical information, i.e. the frequencies of the words, causing lower performance on various downstream tasks, like word classification (Mu and Viswanath, 2018). Recent Transformer (Vaswani et al., 2017)-based large-scale PTMs (Devlin et al., 2019; Liu et al., 2019) are groundbreaking in modeling natural language. However, we are concerned that the learned embeddings might also contain information regarding the non-semantic word frequencies. In line with (Mu and Viswanath, 2018), we use the online statistics data³, get the word embeddings from BERT and RoBERTa, do principal component analysis (PCA) to extract the first two coefficients, and plot them on point diagrams. Figure 2 displays the result. The results show that both models' embeddings are correlated with frequencies.
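The PCA probe behind Figure 2 can be reproduced along the lines of the sketch below. The choice of BERT's static input-embedding matrix (rather than contextualized vectors) and the external frequency table used for coloring the points are our assumptions for illustration.

import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
emb = model.get_input_embeddings().weight.detach()      # (vocab_size, hidden) word embedding matrix

# Project the embeddings onto their first two principal components.
centered = emb - emb.mean(dim=0, keepdim=True)
_, _, v = torch.pca_lowrank(centered, q=2)
coords = centered @ v[:, :2]                            # one 2-D point per vocabulary entry
# Plotting coords colored by a word-frequency table reproduces the frequency clusters of Figure 2.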
In addition, Li et al. (2020b) find that, in the embedding space, word embedding 2-norms are inversely proportional to word frequencies. In PNNs, the prototypes are the mean-aggregation of the words in the support set. Therefore, the prototype 2-norms are also correlated with word frequencies, as are the priorly determined probabilities discussed in Section 3. However, we hypothesize that word frequencies are shallow statistics that are irrelevant to word entities, and that the priorly determined probabilities represent little entity information. We empirically demonstrate the irrelevance between entities and frequencies in this section, and we demonstrate the irrelevance between prototype 2-norms and entities in the next section. In a few-shot NER dataset (Ding et al., 2021), we count the mean word frequencies of different entities and the frequencies of each word in two randomly sampled entities.⁴ Figure 3 and Figure 4 display the results: frequencies can be similar among different entities yet distinct within the same entity. As in the analysis in the next section, we suppose that this irrelevance introduces non-entity information into PNNs' probabilities and biases the PNNs' distance towards focusing on frequencies.
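The statistics behind Figures 3 and 4 can be computed with a simple counting procedure such as the sketch below; the input format (word, entity-label pairs) and the background corpus counts are illustrative assumptions.

from collections import defaultdict
from statistics import mean

def mean_entity_frequencies(tagged_tokens, corpus_counts):
    # tagged_tokens: iterable of (word, entity_label) pairs from the labeled NER data
    # corpus_counts: dict mapping word -> frequency counted on a large raw corpus
    freqs = defaultdict(list)
    for word, label in tagged_tokens:
        freqs[label].append(corpus_counts.get(word.lower(), 0))
    return {label: mean(values) for label, values in freqs.items()}   # mean word frequency per entity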
Distance Bias of Prototypical Neural Networks
In Section 3, we showed that PNNs place a prior on the distances between word embeddings and different entities: embeddings are more likely to be close to the entity that has a smaller prototype 2-norm, and the word is more likely to be classified to that entity. However, in Section 4 we argued that this prior introduces non-entity information that confuses the calculation of probabilities in PNNs.
We have shown that frequencies and entities are unrelated. In the following two figures, we further show that the prototype 2-norms vary in a manner that is also unrelated to entities. Figure 5 displays the average prototype 2-norms of all classes. The 2-norms vary greatly among different classes (min=7.25, max=17.13, coefficient of variation=0.202). In Figure 6, the blue column represents the largest class-prototype 2-norm, the orange one the smallest, and the green one the average. Even within the same class, the prototype 2-norms demonstrate large variance due to the contrasting differences among episodes. Distances between prototypes and word embeddings should represent entity probabilities. Unfortunately, because of the above problem, the distances in the original PNNs are biased towards frequencies instead of being entity-oriented. As a result, PNNs tend to overfit the training data and are trained with unreliable loss minimization.
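Recalling the factorized distance from Section 3, the toy example below (with made-up embedding values) shows the prior at work: the two prototypes have the same dot product with the query and differ only in their 2-norms, yet the smaller-norm prototype receives the higher probability.

import torch

x = torch.tensor([1.0, 1.0])             # a query embedding
c1 = torch.tensor([1.0, 0.0])            # prototype 1: dot(x, c1) = 1, ||c1||^2 = 1.0
c2 = torch.tensor([0.5, 0.5])            # prototype 2: dot(x, c2) = 1, ||c2||^2 = 0.5
scores = torch.stack([-(x - c1).pow(2).sum(), -(x - c2).pow(2).sum()])
print(torch.softmax(scores, dim=0))      # c2 wins purely because of its smaller 2-norm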
The Overfitting Problem
In this section, we aim to account for the overfitting problem caused by the biased distance. Let S_u be the embeddings of the few-labeled (support) data set and Q_u be the embeddings of the query data set.

Theorem 2. PNNs learn on a Markov chain S_u → Q_u, and maximize an information bound on the mutual information between S_u and Q_u.

Corollary 2.1. Let S_u^g be unknown embeddings for which the Markov chain S_u^g → Q_u holds according to entity information. The integrated Markov chain becomes S_u^g → S_u → Q_u, and PNNs will overfit the word-frequency information in S_u.

Proofs are provided in Appendices A and B. PNNs learn to maximize an information bound on the mutual information between the support and query data, where the bound is modeled by the frequency-related distances. However, frequencies are irrelevant to entities. Thus, frequency-related distances will confuse PNNs with incorrect evidence, i.e. word frequencies, when connecting labeled and query data, preventing PNNs from learning meaningful entity information. As the frequencies can change randomly on new classes, the distances can no longer correctly model the entity probabilities on new testing data.
Unreliable Empirical Loss Minimization
In this section, we provide a further explanation of the problem of unreliable empirical loss minimization when training PNNs with biased distances. Given a hypothesis space H and an element h ∈ H,⁵ we aim to minimize the expected loss to find the optimal solution for a given task:

R(h) = ∫ ℓ(h(x_i, y_i)) dp(x, y)    (3)

Note that p(x, y) is unknown, so in practice we use the empirical loss as a proxy for R(h):

R_I(h) = (1 / I) Σ_{i=1}^{I} ℓ(h(x_i, y_i))    (4)

Let h* = argmin_{h ∈ H} R(h) be the hypothesis that minimizes the expected loss, and let h_I = argmin_{h ∈ H} R_I(h) be the hypothesis that minimizes the empirical loss. The approximation error [R(h_I) − R(h*)] quantifies how close h_I is to the optimal solution. Note that the frequency information guides the loss minimization during the training of PNNs, as analyzed in Section 5.1. Due to the uncertainty of word frequencies, a good approximation on the training data can have a large approximation error at test time, which can jeopardize PNNs' testing performance. Moreover, the labeled examples for each episode are limited to N-shot, so the data in each episode are not likely to cover many words. As such, the frequencies of the words and the prototype 2-norms can vary among episodes, resulting in unstable training, low efficiency in model learning, and lower testing performance.
Normalizing the Prototypes
In this section, we provide a solution to the above-mentioned problems through a normalization method. Varying prototype 2-norms are the main cause of the frequency-biased distances and of the above two problems. As a result, we consider normalizing the prototypes to 2-norm-invariant vectors. Earlier works in computer vision find that normalizing both the prototypes and the query data embeddings can achieve better and more stable results (Gidaris and Komodakis, 2018). However, we do not normalize the query data embeddings, because word embeddings represent more detailed and other useful information that may be eliminated by the normalization.
Representing high-level entity information, prototypes should not be priorly distinguished from each other. Furthermore, observing the following evidence, we argue that prototype 2-norms contribute little to the correct classification. In both BERT's pre-training (1) and the original PNNs (2), we find that the 2-norms of class features play limited roles in the correct classification.

(1) In BERT's pre-training, which predicts a word from its context, the 2-norms of the word features, i.e. the rows of the classifier, show only subtle variance. Figure 7 presents the 2-norms of the classifier rows: min=0.766, max=2.045, coefficient of variation=0.138.

(2) Without any intervention in the original PNNs, after training, the prototype 2-norms vary much less than before training: (min=14.00, max=15.25, coefficient of variation=0.014) after training, compared to (min=7.25, max=17.13, coefficient of variation=0.202) originally, and Figure 8 compared to Figure 5.
Based on the above analysis, we propose to normalize the prototypes to unit vectors before calculating the class probabilities.
Algorithm 1 Normalizing the Prototypes

*** Pseudo-code in PyTorch ***
import torch.nn.functional as F
C = calculate_prototypes(S)   # C ∈ R^{K×h}
*** The Normalization ***
C = F.normalize(C, dim=-1)
... the same as the original PNNs ...
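Putting the one-line change in context, the sketch below computes a full episode loss; only the prototypes are normalized, while the query embeddings keep their original 2-norms. Names and tensor shapes are our own assumptions, not those of the released code.

import torch
import torch.nn.functional as F

def episode_loss(support_emb, query_emb, query_labels, normalize_prototypes=True):
    # support_emb: (K, N, h), query_emb: (Q, h), query_labels: (Q,) with integer values in [0, K)
    prototypes = support_emb.mean(dim=1)               # Eq. (1)
    if normalize_prototypes:
        prototypes = F.normalize(prototypes, dim=-1)   # the proposed one-line remedy
    logits = -torch.cdist(query_emb, prototypes) ** 2  # negative squared Euclidean distances
    return F.cross_entropy(logits, query_labels)       # softmax of Eq. (2) plus negative log-likelihood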
Connection to the Adaptive Loss: Different datapoints may be associated with different classification difficulties. The adaptive loss is proposed to change dynamically in magnitude so as to capture the difficult ones (Han et al., 2021; Oreshkin et al., 2018; Li et al., 2020a). Humans are prone to processing high-frequency words, as reported in psychological studies (Brysbaert et al., 2018). Applying this psychological finding to named entity recognition in natural language processing, we postulate that if a word appears more frequently, its entity should be easier to classify. To this end, PNNs adapt well to task difficulty through the frequency-related embedding 2-norms of the query data, which we leave un-normalized.
Experiments & Results
To demonstrate the effectiveness of our normalized PNNs, we conduct experiments on nine few-shot named entity recognition datasets proposed by (Huang et al., 2021) and (Ding et al., 2021).

Datasets: Being a classical and basic natural language understanding task, NER has dozens of supervised datasets, including WikiGold (Balasuriya et al., 2009), CoNLL 2003 (Sang and De Meulder, 2003), WNUT 2017 (Derczynski et al., 2017), MIT Movie (Liu et al., 2013b), MIT Restaurant (Liu et al., 2013a), SNIPS (Coucke et al., 2018), ATIS (Hakkani-Tür et al., 2016), and Multiwoz (Budzianowski et al., 2018). Based on these datasets, researchers (Huang et al., 2021) restructure them into the "K-way N-shot" few-shot setting, forming a comprehensive few-shot NER benchmark. However, apart from the formatting change of the data, the simple and direct restructuring loses track of some critical NER properties, such as the task-difficulty differences between fine-grained and coarse-grained entities (Ding et al., 2021). Therefore, a new expert-annotated and challenging dataset has been proposed as a benchmark in few-shot NER (Ding et al., 2021).

Experimental Settings: Unless otherwise noted, we follow the original implementations in the two open sources⁶,⁷, including models, training/testing pipelines, hyper-parameters, and sampled episodes. We report results using the standard evaluation metric: micro-averaged F1 score. We re-run all the experiments of the original PNNs to examine the performance improvements of our normalization method on the same hardware device. We add early-stop constraints when reproducing the results of (Huang et al., 2021) and relocate the comparable results from the peer models (Das et al., 2022; Ding et al., 2021).⁸ All experiments are conducted on a single 3090Ti GPU.
Comparison to the State-Of-The-Art Methods:
We compare the normalized PNN (Proto_ours) to four advanced methods on Few-NERD. "Struct" and "NNShot" are proposed by (Yang and Katiyar, 2020): "NNShot" classifies a query token according to its nearest labeled token in the embedding space, and "Struct" further leverages Viterbi decoding (Forney, 1973) to produce the final results. "CONTaiNER", as well as its Viterbi-enhanced version, is proposed by (Das et al., 2022); it utilizes contrastive learning to differentiate word entities. Unlike PNN, "NNShot", and "Struct", "CONTaiNER" is fine-tuned on the new entities using the limited labeled examples.
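For reference, the nearest-neighbor rule of NNShot amounts to the following simplified token-level sketch; the actual implementation differs in details such as the distance function and the Viterbi post-processing used by the Struct variant.

import torch

def nnshot_predict(support_emb, support_labels, query_emb):
    # support_emb: (S, h) labeled token embeddings, support_labels: (S,) entity labels, query_emb: (Q, h)
    dists = torch.cdist(query_emb, support_emb)   # distance of each query token to every labeled token
    nearest = dists.argmin(dim=-1)                # index of the closest labeled token
    return support_labels[nearest]                # copy its entity label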
We briefly introduce the main characteristic of Few-NERD: it defines entity types from two perspectives, called fine-grained (INTER) and coarse-grained (INTRA). Under the fine-grained definition, different entities can share more abstract similarities. For example, the entities "Island" and "Mountain" are both "Location", and the entities "Director" and "Athlete" are both "Person". Under the coarse-grained definition, entities have more differences, such as "Location" vs. "Person" and "Event" vs. "Organization". If the training classes contain "Island", the model can easily identify the entity "Mountain" at test time because they share the same "Location" information. Therefore, training on the fine-grained set is less challenging for NER on new testing entities.

Table 1 reports our normalized PNNs on Few-NERD, as well as the results of state-of-the-art models and the original PNNs for comparison. Compared with the original PNNs, the normalization achieves at least an 8.14% performance gain (largest: 16.07%, average: 12.82%). The sophisticated contrastive-learning-based CONTaiNER outperforms our method in certain settings; on average, our model is slightly superior (49.84% (Proto_ours) vs. 49.80%). Besides, CONTaiNER needs to be fine-tuned on the testing data in order to alleviate the differences between training and testing entities, which can account for its superior performance on the coarse-grained (INTRA) set. It should be noted that our normalization method shows competitive performance yet maintains the PNNs' advantages, i.e. low computation cost and easy implementation. In addition, our model achieves the highest average F1 score (57.52%, Proto_ours) on the fine-grained (INTER) set, demonstrating its superiority in a more practical setting (Ding et al., 2021).
Incorporation with the Data-Driven Pretraining: (Huang et al., 2021) proposes two pre-training techniques called noisy supervised pre-training (NSP) and self-training (ST). NSP utilizes the large-scale noisy labeled entities on Wikipedia to pre-train the models, while ST utilizes an NER system (teacher model) to label large-scale unlabeled datasets to pre-train the student models. Both the techniques seek extra supervisions to help the model tackle the challenges of the few-shot classification. (Huang et al., 2021) chooses two baselines: the linear classification (LC) and PNNs. And on ten re-structured few-shot NER datasets, they compare the performances of the two baselines as well as the two baselines plus the two pre-training techniques. They report the best performance is achieved by the combination of "LC+NSP+ST". * For meaningful comparison and to calculate the performance gains, we re-run the baseline models "Proto" and "Protoours" with the same setting. To reduce the time cost, we add the early stop constraints, i.e. stop the training if a continual 5 epochs training does not improve the dev-set F1 scores. ** The replicated results in [*] are lower than the reported results in the original paper. Therefore, we directly copy the results in the original paper as a comparison for demonstrating our method's effectiveness.
Because the processed datasets "I2B2" and "Onto" are not open-sourced by (Huang et al., 2021), we conduct the experiments on the other eight datasets. For more details of the datasets, please refer to (Huang et al., 2021). Table 2 reports the results on the eight datasets. Results vary among the datasets, but the normalized PNNs consistently outperform the original PNNs (min: 0.48%, max: 20.07%, average: 10.20%). On certain datasets, the normalized PNNs achieve results extremely close to, or even higher than, the original PNNs plus a pre-training method that is expensive in time cost (Proto+NSP). Furthermore, higher performance gains are obtained when incorporating the normalized PNNs with the NSP technique (Proto+NSP: +6.48% vs. Proto_ours+NSP: +7.88%). Our results show that the classical PNNs combined with the simple normalization and NSP achieve the best results on the eight few-shot NER datasets (the open sources do not provide the ST checkpoints for PNN). This finding is new compared to the results in (Huang et al., 2021).

Effective Learning: Figure 9 (in Appendix C) visualizes the training and dev F1 scores on two settings of Few-NERD for the original and the normalized PNNs (the * mark denotes that we set the learning rate to 1e−5, the same as in our experimental settings). Comparing the red with the blue lines (same learning rate), normalized PNNs fit the training data faster yet achieve higher dev F1 scores. Comparing the red with the green lines, with a learning rate of 1e−4 and without normalization, PNNs learn unstably and overfit the training data more significantly (in INTER 5 way 5~10 shot, dev F1 scores decrease before increasing, and in INTRA 10 way 1~2 shot, the increase of training F1 scores results in a decrease of dev F1 scores).

Ablation Studies: Based on our analysis in Section 6, we only normalize the prototypes and leave the query data embeddings unchanged. We conduct ablation studies on the normalization strategies on Few-NERD, as shown in Table 3 in Appendix C. Proto_AB1 means we normalize only the query data embeddings and leave the prototypes unchanged, and Proto_AB2 means we normalize both the prototypes and the query data embeddings. We provide four sub-cases for the ablation studies. All cases report a substantial performance decrease.
Conclusion
We examine the synergistic effects of large-scale PTMs and classical PNNs in few-shot NER. Our theoretical analysis of PNNs shows that the PNN distances that represent a query datapoint's entity probabilities are partly priorly determined by the prototype 2-norms. However, for the embeddings of PTMs, we empirically verify that embedding 2-norms contain little entity information, a type of representation degeneration in PTMs. Furthermore, we show that such representation degeneration makes the PNN distance biased towards frequencies instead of entity-denoting information. This distance bias prevents PNNs from learning useful entity information and causes PNNs to overfit the training corpus and become unreliable on new entities. Therefore, we propose a one-line-code normalization remedy to reconcile PTMs and PNNs for few-shot NER. The experimental results on nine datasets suggest that the normalized PNNs proposed in this work achieve significant performance improvements over the original PNNs and obtain competitive results compared with the latest sophisticated methods, while maintaining all the PNNs' advantages, such as easy implementation and low computation cost. Considering the promising results and the simplicity of normalizing existing models, our results and analysis may serve as a reference for researchers and practitioners working on few-shot NER or other relevant tasks that involve the use of PTMs or PNNs.
Limitations
There are certain limitations in this paper. While our theoretical analysis of PNNs and the concept of PTMs' representation degeneration are not limited to few-shot named entity recognition, our focused problem, e.g. that the PNN distance is biased towards frequencies, is based on the fact that the greatly varied word frequencies represent limited entity information. It is possible that in other tasks the corpus frequencies can represent semantic features, or that the frequencies change much less. Our normalization remedy, therefore, cannot be directly applied to those tasks. Also, representation degeneration is a crucial intrinsic problem of large-scale PTMs. Our focused aspects, e.g. frequencies and entity information, are one type of practical issue. We argue that such intrinsic problems can result in different practical issues affecting other NLP tasks beyond the scope of the current work.
A Bregman Divergence
Definition 1 (Bregman (1967); Censor and Zenios (1998)). Let φ : S → R, S = dom(φ), be a strictly convex function defined on a convex set S ⊆ R^d such that φ is differentiable on ri(S), assumed to be nonempty. The Bregman divergence d_φ : S × ri(S) → [0, ∞) is defined as

d_φ(x, y) = φ(x) − φ(y) − ⟨x − y, ∇φ(y)⟩    (5)
where ∇φ(y) represents the gradient vector of φ evaluated at y.
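As a quick sanity check of Definition 1 (used later in Corollary 1.1), taking φ(x) = ‖x‖² in the sketch below recovers the squared Euclidean distance; the example vectors are arbitrary.

import torch

def bregman_sq_euclidean(x, y):
    # phi(x) = ||x||^2, grad phi(y) = 2y, so d_phi(x, y) = phi(x) - phi(y) - <x - y, 2y> = ||x - y||^2
    return x.pow(2).sum() - y.pow(2).sum() - torch.dot(x - y, 2 * y)

x, y = torch.tensor([1.0, 2.0]), torch.tensor([0.0, 1.0])
assert torch.isclose(bregman_sq_euclidean(x, y), (x - y).pow(2).sum())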
Proposition 1 (Banerjee (2005)). Let X be a random variable that takes values in X = {x_i}_{i=1}^{n} ⊂ S ⊆ R^d following a positive probability measure ν such that E_ν[X] ∈ ri(S). Given a Bregman divergence d_φ : S × ri(S) → [0, ∞), the problem

min_{s ∈ ri(S)} E_ν[d_φ(X, s)]    (6)

has a unique minimizer given by s† = µ = E_ν[X].
Theorem 3 (Banerjee (2005)). Let p_(ψ,θ) be the probability density function of a regular exponential family distribution. Let φ be the conjugate function of ψ so that (int(dom(φ)), φ) is the Legendre dual of (Θ, Ψ). Let θ ∈ Θ be the natural parameter and µ ∈ int(dom(φ)) be the corresponding expectation parameter. Let d_φ be the Bregman divergence derived from φ. Then p_(ψ,θ) can be uniquely expressed as

p_(ψ,θ)(x) = exp(−d_φ(x, µ)) b_φ(x),  ∀x ∈ dom(φ)    (7)

where b_φ : dom(φ) → R_+ is a uniquely determined function.
B Prototypical Neural Networks
Algorithm 2 K-way N-shot Prototypical Neural Network

Input: An episode E_i containing support data S_u and query data Q_u.
Output: The loss J for the episode E_i.

# Calculating prototypes on S_u
C = NewEmptyList(Length=K)
for k = 1 to K do
    c_k = (1 / N_k) Σ_{(x_i, y_i == k)} f_enc(x_i)
end for

# Classification on Q_u and calculating the loss J
J = NewEmptyList(Length=0)
for k = 1 to K do
    for (x_i, y_i == k) in Q_u do
        Q(ŷ_i == k | x_i) = exp(−d_φ(f_enc(x_i), c_k)) / Σ_{k'=1}^{K} exp(−d_φ(f_enc(x_i), c_{k'}))
        J.Add(CrossEntropyLoss(ŷ_i, y_i))
    end for
end for
J = Mean(J)
Remark. Prototype calculation and query data classification are independent but have the same goal of minimizing the classifying loss.
Theorem. Assume the data embeddings of the support and query data are independent and identically distributed. Let c_k be the class prototype calculated by an aggregation function proto(·) : ∏_{i=1}^{N} H_i → h ∈ H. The problem min_{proto(·)} J, where J is the classification loss, achieves its minimum when proto(·) is the arithmetic mean.

Proof. As argued in the Remark above, the prototype calculation should also minimize the classification loss, even though the query data is unseen. As the optimal prototypes should minimize the classification loss on the query data, and the support and query data are independent and identically distributed, we let the support data serve as a proxy for the query data. Therefore, the optimal prototype should minimize the classification loss on the support data.
Let us consider the m-th class; the corresponding cross-entropy loss is

J_m = − Σ_i log [ exp(−d_φ(f_enc(x_i), c_m)) / Σ_{k'=1}^{K} exp(−d_φ(f_enc(x_i), c_{k'})) ]
    = − Σ_i [ −d_φ(f_enc(x_i), c_m) − log Σ_{k'=1}^{K} exp(−d_φ(f_enc(x_i), c_{k'})) ]
    = Σ_i d_φ(f_enc(x_i), c_m) + Σ_i log Σ_{k'=1}^{K} exp(−d_φ(f_enc(x_i), c_{k'}))    (8)
where x_i is the support data with class m, and c_m and c_k are the m-th and k-th class prototypes. As we aim to find the optimal c_m, we take the derivative of J_m with respect to c_m:
∂J_m / ∂c_m = ∂[Σ_i d_φ(h_i, c_m)] / ∂c_m + ∂[Σ_i log Σ_{k'} exp(−d_φ(h_i, c_{k'}))] / ∂c_m
            = Σ_i ∂d_φ(h_i, c_m) / ∂c_m + Σ_i [ exp(−d_φ(h_i, c_m)) · ∂(−d_φ(h_i, c_m)) / ∂c_m ] / Σ_{k'} exp(−d_φ(h_i, c_{k'}))
            = Σ_i ( 1 − exp(−d_φ(h_i, c_m)) / Σ_{k'} exp(−d_φ(h_i, c_{k'})) ) · ∂d_φ(h_i, c_m) / ∂c_m    (9)
where h_i = f_enc(x_i). As d_φ is a Bregman divergence, according to Proposition 1 we have ∂E_ν[d_φ(H, s)] / ∂s = 0 if and only if s = E_ν[H]. If we use α to normalize the weights of Equation 9 so that Σ_i α (1 − exp(−d_φ(h_i, c_m)) / Σ_{k'} exp(−d_φ(h_i, c_{k'}))) = 1, then the optimized c_m can be calculated as

c_m = Σ_i α ( 1 − exp(−d_φ(h_i, c_m)) / Σ_{k'} exp(−d_φ(h_i, c_{k'})) ) h_i    (10)

Equation 10 shows that the optimized c_m should be the arithmetic mean of the support data embeddings re-weighted by one minus the category confidences, where the category confidences correspond to the probability normalization of the softmax. If we ignore this re-weighting, the optimal prototype calculation is the arithmetic mean.
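A quick numerical check of this conclusion with the squared Euclidean distance as d_φ: perturbing the arithmetic mean never decreases the summed distance to the support embeddings. The toy embeddings below are arbitrary.

import torch

h = torch.randn(5, 8)                      # five toy support embeddings
mean_c = h.mean(dim=0)

def total_dist(c):
    return (h - c).pow(2).sum()            # sum of squared Euclidean distances to the candidate prototype

for _ in range(100):
    assert total_dist(mean_c) <= total_dist(mean_c + 0.1 * torch.randn(8)) + 1e-6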
Corollary. Based on the support data, PNNs estimate a Gaussian distribution N_k(c_k, σ²) for the embeddings in class k, where σ is a constant vector, and the corresponding choice of the Bregman divergence d should be the squared Euclidean distance.
C Effective Learning and Ablation Studies
Figure 1: An example of the input and output of NER.

Figure 2: The first two coefficients of PCA analysis on the word embeddings. Color represents frequencies. The deep colors are clustered.

Figure 3: A bar chart displaying the mean word frequencies of different entities. The deeper the color, the larger the mean frequency.

Figure 4: A histogram displaying the word frequencies in two entities.

Figure 5: A histogram displaying the average prototype 2-norms of all classes.

Figure 6: Max (blue) / Avg. (green) / Min (orange) prototype 2-norms within the same class.

Figure 7: A histogram displaying the 2-norms of the pre-trained classifier in BERT.

Figure 8: A histogram displaying the average prototype 2-norms of all classes after training.
Figure 9: Training and Dev F1 scores on Few-NERD for two cases.
Table 1: The performance of state-of-the-art models and our method on FEW-NERD.

FEW-NERD (INTRA) F1 scores
Model                      5 way 1~2 shot   5 way 5~10 shot   10 way 1~2 shot   10 way 5~10 shot   Avg.
Struct (EMNLP 2020)        30.21            38.00             21.03             26.42              28.92
NNShot (EMNLP 2020)        25.75            36.18             18.27             27.38              26.90
CONTaiNER (ACL 2022)       40.43            53.70             33.84             47.49              43.87
+Viterbi (ACL 2022)        40.43            53.71             33.82             47.51              43.86
Proto (NeurIPS 2017)       20.76            42.54             15.05             35.40              28.43
Proto_ours*                36.83            54.62             30.06             47.61              42.28

FEW-NERD (INTER) F1 scores
Model                      5 way 1~2 shot   5 way 5~10 shot   10 way 1~2 shot   10 way 5~10 shot   Avg.
Struct (EMNLP 2020)        51.88            57.32             43.34             49.57              50.53
NNShot (EMNLP 2020)        47.24            55.64             38.87             49.57              47.83
CONTaiNER (ACL 2022)       55.95            61.83             48.35             57.12              55.81
+Viterbi (ACL 2022)        56.10            61.90             48.36             57.13              55.87
Proto (NeurIPS 2017)       38.83            58.79             32.34             52.92              45.72
Proto_ours*                54.35            66.93             47.32             61.50              57.52

* We change the learning rate from 1e-4 to 1e-5. We lower the learning rate because the normalized PNN converges too rapidly to be tested on the dev set (given the same evaluation steps) before it overfits the training set.
Table 2: The performance on the benchmark datasets proposed by (Huang et al., 2021). All numbers are 5-shot F1 scores.

Model             CoNLL   WikiGold   WNUT17   MIT Movie   MIT Restaurant   SNIPS   ATIS    Multiwoz   Avg.
Proto*            58.22   47.58      20.51    29.94       43.65            56.96   73.82   23.74      44.30
Proto_ours*       58.70   55.69      28.46    50.01       51.34            76.69   87.41   27.78      54.51
Proto+NSP*        62.92   63.33      33.87    35.25       44.15            51.66   74.58   40.52      50.79
Proto_ours+NSP*   66.50   67.63      37.75    51.32       54.98            83.17   90.47   47.26      62.39
LC+NSP+ST**       65.4    68.4       37.6     55.9        51.3             83.0    90.5    45.1       62.12
Table 3: Ablation studies of our method on FEW-NERD.
2 According to Corollary 1.1, PNNs require the embeddings to follow a Gaussian distribution. Similarly, some works (Hu et al., 2022) empirically follow this assumption and propose corresponding embedding post-processings to achieve performance gains.
3 Data are taken from the Corpus of Contemporary American English (COCA), which provides 60,000 English words with frequencies (COCA_60000).

4 The word frequencies are counted on the first 2.5 million sentences of BookCorpus (Zhu et al., 2015) as processed by HuggingFace (Wolf et al., 2020).

5 H can be all potential parameters of a given network structure, and h can be an arbitrary parameter.

6 https://github.com/thunlp/Few-NERD

7 https://github.com/few-shot-NER-benchmark

8 The replicated performances are inferior to the reported results in the related works, so we use the reported results as the standard reference.
9 http://www.scholarpedia.org/article/Mutual_information
Acknowledgement
This work is supported by the National Key R&D Program of China (2020AAA0105200).

Proof. According to (Banerjee et al., 2005), the d-dimensional spherical Gaussian is a regular exponential family distribution whose expectation parameter is its mean. The µ in PNNs is the prototype, i.e. the arithmetic mean of the sampled observations, and it exactly estimates the mean parameter of the Gaussian distribution. Therefore, the optimal prototype calculation results in estimating a Gaussian distribution for each class. On a Gaussian distribution where σ is constant, d_φ corresponds to the squared Euclidean distance.

Theorem. PNNs learn on a Markov chain S_u → Q_u, and maximize an information bound on the mutual information between S_u and Q_u.

Proof. According to Theorem 3, a Bregman divergence and a distribution are connected as p_(ψ,θ)(h) = exp(−d_φ(h, µ)) b_φ(h), where, for the spherical Gaussian, φ(h) = ‖h‖² / (2σ²) and p_0 is uniquely determined. PNNs calculate the distance between h and µ, which can be viewed as the probability of observing h given µ. This relationship between the support and query data implies the Markov chain S_u → Q_u, since observing the query data is dependent on the support data. On the right-hand side of Equation 15, −d_φ(h, µ) can be viewed as the probability of observing h given µ, and the remaining φ(h) + log(p_0(h)) can be viewed as the probability of observing h without knowing µ, i.e. p(h). The first term, p(h | µ), is inversely proportional to ‖h‖², while the second, p(h), is proportional to ‖h‖². PNNs maximize −d_φ(h, µ), resulting in the implicit minimization of p(h). Taken together, the learned probability P_(ψ,θ)(h) is proportional to p(h | µ) / p(h). Substituting this back into the loss, the results show I(h_k, µ_k) ≥ log(K) − J, which means that PNNs minimize the classification loss to maximize an information bound on the mutual information between the random variable h and µ, and, by extension, between the support and query data. We note that the above proof follows the same mathematical process as works on contrastive learning (van den Oord et al., 2018).

Corollary. Let S_u^g be unknown embeddings for which the Markov chain S_u^g → Q_u holds according to entity information. The integrated Markov chain becomes S_u^g → S_u → Q_u, and PNNs will overfit the word-frequency information in S_u.

Proof. In the Markov chain S_u^g → S_u → Q_u, using the data processing inequality,⁹ we have I(S_u, Q_u) ≥ I(S_u^g, Q_u). The learned extra information I(S_u, Q_u) − I(S_u^g, Q_u) ≥ 0 represents the PNN's overfitting to S_u's word frequencies, introduced by the frequency-related distances.
Dominic Balasuriya, Nicky Ringland, Joel Nothman, Tara Murphy, and James R. Curran. 2009. Named entity recognition in Wikipedia. In Proceedings of the 2009 Workshop on the People's Web Meets NLP: Collaboratively Constructed Semantic Resources (People's Web), pages 10-18.

Arindam Banerjee, Srujana Merugu, Inderjit S. Dhillon, Joydeep Ghosh, and John Lafferty. 2005. Clustering with Bregman divergences. Journal of Machine Learning Research, 6(10).

Yujia Bao, Menghua Wu, Shiyu Chang, and Regina Barzilay. 2020. Few-shot text classification with distributional signatures. In International Conference on Learning Representations.

Marc Brysbaert, Paweł Mandera, and Emmanuel Keuleers. 2018. The word frequency effect in word processing: An updated review. Current Directions in Psychological Science, 27(1):45-50.

Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016-5026.

Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018. Snips voice platform: An embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190.

Sarkar Snigdha Sarathi Das, Arzoo Katiyar, Rebecca Passonneau, and Rui Zhang. 2022. CONTaiNER: Few-shot named entity recognition via contrastive learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6338-6353, Dublin, Ireland. Association for Computational Linguistics.

Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. 2017. Results of the WNUT2017 shared task on novel and emerging entity recognition. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 140-147.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Ning Ding, Guangwei Xu, Yulin Chen, Xiaobin Wang, Xu Han, Pengjun Xie, Haitao Zheng, and Zhiyuan Liu. 2021. Few-NERD: A few-shot named entity recognition dataset. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3198-3213, Online. Association for Computational Linguistics.

Jiale Han, Bo Cheng, and Wei Lu. 2021. Exploring task difficulty for few-shot relation extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2605-2616, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. 2001. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pages 87-94. Springer.

Nithin Holla, Pushkar Mishra, Helen Yannakoudakis, and Ekaterina Shutova. 2020. Learning to learn to disambiguate: Meta-learning for few-shot word sense disambiguation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4517-4533.

Yuqing Hu, Stéphane Pateux, and Vincent Gripon. 2022. Squeezing backbone feature distributions to the max for efficient few-shot learning. Algorithms, 15(5):147.

Jiaxin Huang, Chunyuan Li, Krishan Subudhi, Damien Jose, Shobana Balakrishnan, Weizhu Chen, Baolin Peng, Jianfeng Gao, and Jiawei Han. 2021. Few-shot named entity recognition: An empirical baseline study. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10408-10423.

Gregory Koch et al. 2015. Siamese neural networks for one-shot image recognition.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.

Aoxue Li, Weiran Huang, Xu Lan, Jiashi Feng, Zhenguo Li, and Liwei Wang. 2020a. Boosting few-shot learning with adaptive margin loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12576-12584.

Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020b. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119-9130, Online. Association for Computational Linguistics.
Asgard: A portable architecture for multilingual dialogue systems. Jingjing Liu, Panupong Pasupat, Scott Cyphers, Jim Glass, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEEJingjing Liu, Panupong Pasupat, Scott Cyphers, and Jim Glass. 2013a. Asgard: A portable architecture for multilingual dialogue systems. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8386-8390. IEEE.
Query understanding enhanced by hierarchical parsing structures. Jingjing Liu, Panupong Pasupat, Yining Wang, Scott Cyphers, Jim Glass, 2013 IEEE Workshop on Automatic Speech Recognition and Understanding. IEEEJingjing Liu, Panupong Pasupat, Yining Wang, Scott Cyphers, and Jim Glass. 2013b. Query understand- ing enhanced by hierarchical parsing structures. In 2013 IEEE Workshop on Automatic Speech Recogni- tion and Understanding, pages 72-77. IEEE.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov, arXiv:1907.11692Roberta: A robustly optimized bert pretraining approach. arXiv preprintYinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.
Efficient estimation of word representations in vector space. Tomas Mikolov, Kai Chen, Gregory S Corrado, Jeffrey Dean, ICLR. Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Efficient estimation of word rep- resentations in vector space. In ICLR.
All-but-thetop: Simple and effective postprocessing for word representations. Jiaqi Mu, Pramod Viswanath, International Conference on Learning Representations. Jiaqi Mu and Pramod Viswanath. 2018. All-but-the- top: Simple and effective postprocessing for word representations. In International Conference on Learning Representations.
Tadam: Task dependent adaptive metric for improved few-shot learning. Boris Oreshkin, Alexandre Pau Rodríguez López, Lacoste, Advances in neural information processing systems. 31Boris Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. 2018. Tadam: Task dependent adaptive metric for improved few-shot learning. Advances in neural information processing systems, 31.
GloVe: Global vectors for word representation. Jeffrey Pennington, Richard Socher, Christopher Manning, 10.3115/v1/D14-1162Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)Doha, QatarAssociation for Computational LinguisticsJeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.
Introduction to the conll-2003 shared task: Languageindependent named entity recognition. Erik Tjong , Kim Sang, Fien De Meulder, Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003. the Seventh Conference on Natural Language Learning at HLT-NAACL 2003Erik Tjong Kim Sang and Fien De Meulder. 2003. In- troduction to the conll-2003 shared task: Language- independent named entity recognition. In Proceed- ings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.
Prototypical networks for few-shot learning. Advances in neural information processing systems. Jake Snell, Kevin Swersky, Richard Zemel, 30Jake Snell, Kevin Swersky, and Richard Zemel. 2017. Prototypical networks for few-shot learning. Ad- vances in neural information processing systems, 30.
Annotating longitudinal clinical narratives for de-identification: The 2014 i2b2/uthealth corpus. Amber Stubbs, Özlem Uzuner, Journal of biomedical informatics. 58Amber Stubbs and Özlem Uzuner. 2015. Annotating longitudinal clinical narratives for de-identification: The 2014 i2b2/uthealth corpus. Journal of biomedi- cal informatics, 58:S20-S29.
Learning to compare: Relation network for few-shot learning. Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, H S Philip, Timothy M Torr, Hospedales, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionFlood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. 2018. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1199-1208.
Learning from miscellaneous other-class words for fewshot named entity recognition. Meihan Tong, Shuai Wang, Bin Xu, Yixin Cao, Minghui Liu, Lei Hou, Juanzi Li, 10.18653/v1/2021.acl-long.487Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingOnline. Association for Computational LinguisticsMeihan Tong, Shuai Wang, Bin Xu, Yixin Cao, Minghui Liu, Lei Hou, and Juanzi Li. 2021. Learn- ing from miscellaneous other-class words for few- shot named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Compu- tational Linguistics and the 11th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 6236-6247, Online. As- sociation for Computational Linguistics.
Aaron Van Den Oord, Yazhe Li, Oriol Vinyals, Representation learning with contrastive predictive coding. arXiv e-prints. 1807Aaron Van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive pre- dictive coding. arXiv e-prints, pages arXiv-1807.
Attention is all you need. Advances in neural information processing systems. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, 30Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information process- ing systems, 30.
Matching networks for one shot learning. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, Advances in neural information processing systems. 29Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. 2016. Matching networks for one shot learning. Advances in neural information processing systems, 29.
Visual domain adaptation with manifold embedded distribution alignment. Jindong Wang, Wenjie Feng, Yiqiang Chen, Han Yu, Meiyu Huang, Philip S Yu, Proceedings of the 26th ACM international conference on Multimedia. the 26th ACM international conference on MultimediaJindong Wang, Wenjie Feng, Yiqiang Chen, Han Yu, Meiyu Huang, and Philip S Yu. 2018. Visual do- main adaptation with manifold embedded distribu- tion alignment. In Proceedings of the 26th ACM international conference on Multimedia, pages 402- 410.
Generalizing from a few examples: A survey on few-shot learning. Yaqing Wang, Quanming Yao, T James, Lionel M Kwok, Ni, ACM computing surveys (csur). 533Yaqing Wang, Quanming Yao, James T Kwok, and Li- onel M Ni. 2020. Generalizing from a few examples: A survey on few-shot learning. ACM computing sur- veys (csur), 53(3):1-34.
A survey of unsupervised deep domain adaptation. Garrett Wilson, Diane J Cook, ACM Transactions on Intelligent Systems and Technology (TIST). 115Garrett Wilson and Diane J Cook. 2020. A survey of unsupervised deep domain adaptation. ACM Transactions on Intelligent Systems and Technology (TIST), 11(5):1-46.
Transformers: State-of-the-art natural language processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Clara Patrick Von Platen, Yacine Ma, Julien Jernite, Canwen Plu, Teven Le Xu, Sylvain Scao, Mariama Gugger, Quentin Drame, Alexander M Lhoest, Rush, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. the 2020 Conference on Empirical Methods in Natural Language Processing: System DemonstrationsThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Rémi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45.
A survey on recent advances in named entity recognition from deep learning models. Vikas Yadav, Steven Bethard, arXiv:1910.11470arXiv preprintVikas Yadav and Steven Bethard. 2019. A survey on re- cent advances in named entity recognition from deep learning models. arXiv preprint arXiv:1910.11470.
Free lunch for few-shot learning: Distribution calibration. Shuo Yang, Lu Liu, Min Xu, International Conference on Learning Representations. Shuo Yang, Lu Liu, and Min Xu. 2020. Free lunch for few-shot learning: Distribution calibration. In Inter- national Conference on Learning Representations.
Simple and effective few-shot named entity recognition with structured nearest neighbor learning. Yi Yang, Arzoo Katiyar, 10.18653/v1/2020.emnlp-main.516Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Yi Yang and Arzoo Katiyar. 2020. Simple and effec- tive few-shot named entity recognition with struc- tured nearest neighbor learning. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 6365- 6375.
Breaking the softmax bottleneck: A high-rank rnn language model. Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, William W Cohen, International Conference on Learning Representations. Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W Cohen. 2018. Breaking the softmax bot- tleneck: A high-rank rnn language model. In Inter- national Conference on Learning Representations.
Softmax supervision with isotropic normalization. Yue Zhao, Deli Zhao, Shaohua Wan, Bo Zhang, Yue Zhao, Deli Zhao, Shaohua Wan, and Bo Zhang. 2018. Softmax supervision with isotropic normal- ization.
Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionYukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE inter- national conference on computer vision, pages 19- 27.
| [
"https://github.com/thunlp/Few-NERD",
"https://github.com/few-shot-NER-benchmark"
] |
[
"THE DIRHA-ENGLISH CORPUS AND RELATED TASKS FOR DISTANT-SPEECH RECOGNITION IN DOMESTIC ENVIRONMENTS",
"THE DIRHA-ENGLISH CORPUS AND RELATED TASKS FOR DISTANT-SPEECH RECOGNITION IN DOMESTIC ENVIRONMENTS"
] | [
"Mirco Ravanelli mravanelli@fbk.eu \nFBK\n38123Povo, TrentoItaly\n",
"Luca Cristoforetti \nFBK\n38123Povo, TrentoItaly\n",
"Roberto Gretter gretter@fbk.eu \nFBK\n38123Povo, TrentoItaly\n",
"Marco Pellin pellin@fbk.eu \nFBK\n38123Povo, TrentoItaly\n",
"MaurizioAlessandro Sosi alesosi@fbk.eu \nFBK\n38123Povo, TrentoItaly\n",
"Omologo Fondazione omologo@fbk.eu \nFBK\n38123Povo, TrentoItaly\n",
"Bruno Kessler \nFBK\n38123Povo, TrentoItaly\n"
] | [
"FBK\n38123Povo, TrentoItaly",
"FBK\n38123Povo, TrentoItaly",
"FBK\n38123Povo, TrentoItaly",
"FBK\n38123Povo, TrentoItaly",
"FBK\n38123Povo, TrentoItaly",
"FBK\n38123Povo, TrentoItaly",
"FBK\n38123Povo, TrentoItaly"
] | [] | This paper introduces the contents and the possible usage of the DIRHA-ENGLISH multi-microphone corpus, recently realized under the EC DIRHA project. The reference scenario is a domestic environment equipped with a large number of microphones and microphone arrays distributed in space.The corpus is composed of both real and simulated material, and it includes 12 US and 12 UK English native speakers. Each speaker uttered different sets of phonetically-rich sentences, newspaper articles, conversational speech, keywords, and commands. From this material, a large set of 1-minute sequences was generated, which also includes typical domestic background noise as well as inter/intra-room reverberation effects. Dev and test sets were derived, which represent a very precious material for different studies on multi-microphone speech processing and distant-speech recognition. Various tasks and corresponding Kaldi recipes have already been developed.The paper reports a first set of baseline results obtained using different techniques, including Deep Neural Networks (DNN), aligned with the state-of-the-art at international level. | 10.1109/asru.2015.7404805 | [
"https://arxiv.org/pdf/1710.02560v1.pdf"
] | 9,975,295 | 1710.02560 | e0f9b05703d2c351c39b33d372b15431ab9dd26e |
THE DIRHA-ENGLISH CORPUS AND RELATED TASKS FOR DISTANT-SPEECH RECOGNITION IN DOMESTIC ENVIRONMENTS
Mirco Ravanelli mravanelli@fbk.eu
FBK
38123Povo, TrentoItaly
Luca Cristoforetti
FBK
38123Povo, TrentoItaly
Roberto Gretter gretter@fbk.eu
FBK
38123Povo, TrentoItaly
Marco Pellin pellin@fbk.eu
FBK
38123Povo, TrentoItaly
Alessandro Sosi alesosi@fbk.eu
FBK
38123Povo, TrentoItaly
Maurizio Omologo omologo@fbk.eu
FBK
38123Povo, TrentoItaly
Fondazione Bruno Kessler
FBK
38123Povo, TrentoItaly
THE DIRHA-ENGLISH CORPUS AND RELATED TASKS FOR DISTANT-SPEECH RECOGNITION IN DOMESTIC ENVIRONMENTS
Index Terms: distant speech recognition, microphone arrays, corpora, Kaldi, DNN
This paper introduces the contents and the possible usage of the DIRHA-ENGLISH multi-microphone corpus, recently realized under the EC DIRHA project. The reference scenario is a domestic environment equipped with a large number of microphones and microphone arrays distributed in space.The corpus is composed of both real and simulated material, and it includes 12 US and 12 UK English native speakers. Each speaker uttered different sets of phonetically-rich sentences, newspaper articles, conversational speech, keywords, and commands. From this material, a large set of 1-minute sequences was generated, which also includes typical domestic background noise as well as inter/intra-room reverberation effects. Dev and test sets were derived, which represent a very precious material for different studies on multi-microphone speech processing and distant-speech recognition. Various tasks and corresponding Kaldi recipes have already been developed.The paper reports a first set of baseline results obtained using different techniques, including Deep Neural Networks (DNN), aligned with the state-of-the-art at international level.
INTRODUCTION
During the last decade, much research has been devoted to improving Automatic Speech Recognition (ASR) performance [1]. As a result, ASR has recently been applied in several fields, such as web-search, car control, automated voice answering, radiological reporting, and it is currently used by millions of users worldwide. Nevertheless, most state-of-the-art systems are still based on close-talking solutions, forcing the user to speak very close to a microphone-equipped device. Although this approach usually leads to better performance, it is easy to predict that, in the future, users will prefer to relax the constraint of handling or wearing any device to access speech recognition services. There are indeed various real-life situations where a distant-talking (far-field) interaction (in the following, the same concept is referred to as "distant-speech") is more natural, convenient and attractive [2]. In particular, amongst all the possible applications, an emerging field is speech-based domestic control, where users might prefer to freely interact with their home appliances without wearing or even handling any microphone-equipped device. This scenario was addressed under the EC DIRHA (Distant-speech Interaction for Robust Home Applications) project, which had the ultimate goal of developing voice-enabled automated home services based on Distant-Speech Recognition (DSR) in different languages.
Despite the growing interest in DSR, current technologies still exhibit a significant lack of robustness and flexibility, since the adverse acoustic conditions originated by non-stationary noises and acoustic reverberation make speech recognition significantly more challenging [3]. Although considerable progress has been made at the multi-microphone front-end processing level in order to feed ASR with an enhanced speech input [4,5,6,7,8], the performance loss observed from close-talking to distant-speech remains quite critical, even when the most advanced DNN-based backend frameworks are adopted [9,10,11,12,13,14].
To further progress, a crucial step regards the selection of data suitable to train and test the various speech processing, enhancement, and recognition algorithms. Collecting and transcribing sufficiently large data sets to cover any possible application scenario is a prohibitive, time-consuming and expensive task. In a domestic context, in particular, due to the large variabilities that can be introduced when deploying such systems in different houses, this issue becomes even more challenging than in any other traditional ASR application. In this context, an ideal system should be flexible enough in terms of microphone distribution in space, and in terms of other possible profiling actions. Moreover, it must be able to provide a satisfactory behaviour immediately after its installation, and to improve performance thanks to its capability to learn from the environment and from the users.
In order to develop such solutions, the availability of high-quality and realistic, multi-microphone corpora represents one of the fundamental steps towards reducing the performance gap between close-talking and distant-speech interaction. Along this direction, strong efforts have been spent recently by the international scientific community, through the development of corpora and challenges, such as REVERB [15], CHIME [16,17] and ASpIRE. Nevertheless, we feel that other complementary corpora and tasks are necessary to the research community in order to further boost technological advances in this field, for instance providing a large number of "observations" of the same acoustic scene.
The DIRHA-ENGLISH corpus complements the set of corpora previously collected under the DIRHA project in other four languages (i.e., Austrian German, Greek, Italian, Portuguese) [18]. It gives the chance of working on English, the most commonly used language inside the ASR research community, with a very large number of microphone channels, a multi-room setting, and the use of microphone arrays having different characteristics. Half of the material is based on simulations, and half is based on real recordings, which allows one to assess recognition performance in real-world conditions. It is also worth mentioning that some portions of the corpus will be made publicly available, with free access, in the short term (as done with other data produced by the DIRHA consortium).
The purpose of this paper is to introduce the DIRHA-ENGLISH corpus as well as to provide some baseline results on phonetically-rich sentences, which were obtained using the Kaldi framework [19]. The resulting TIMIT-like phone recognition task can be seen as complementary to the WSJ-like and conversational speech tasks also available for a future distribution.
The remainder of the paper is organized as follows. Section 2 provides a brief description of the DIRHA project, while Section 3 focuses the contents and characteristics of the DIRHA-ENGLISH corpus. Section 4 gives a description of the experimental tasks so far defined, and of the corresponding baseline results. Section 5 draws some conclusions.
THE DIRHA PROJECT
The EC DIRHA project, which started in January 2012 and lasted three years, had the goal of addressing acoustic scene analysis and distant-speech interaction in a home environment. In the following, some information are reported about project goals, tasks, and corpora.
Goals and tasks
The application scenario targeted by the project is characterized by a quite flexible voice interactive system to talk with in any room, and from any position in space. Exploiting a microphone network distributed in the different rooms of the apartment, the DIRHA system reacts properly when a command is given by the user. The system is always-listening, waiting for a specific keyword to "capture" in order to begin a dialogue. The dialogue that is triggered in this way, gives the end-user a possible access to devices and services, e.g., open/close doors and windows, switch on/off lights, control the temperature, play music, etc. Barge-in (to interact while music/speech prompts are played), speaker verification, concurrent dialogue management (to support simultaneous dialogues with different users) are some advanced features characterizing the system. Finally, a very important aspect to mention is the need to limit the rate of false alarms, due to possible misinterpretation of normal conversations or of other sounds captured by the microphones, which do not carry any relevant message to the system.
Starting from these targeted functionalities, several experimental tasks were defined concerning the combination of front-end processing algorithms (e.g., speaker localization, acoustic echo cancellation, speech enhancement, etc.) and an ASR backend, in each language. Most of these tasks referred to voice interaction in the ITEA apartment, situated in Trento (Italy), which was the main site for acoustic and speech data collection.
DIRHA corpora
The DIRHA corpora were designed in order to provide multimicrophone data sets that can be used to investigate a wide range of tasks as those mentioned above.
Some data sets were based on simulations realized applying a contamination method [20,21,22] that combines clean-speech signals, estimated Impulse Responses (IRs), and real multichannel background noise sequences, as described in [18]. Other corpora were recorded under real-world conditions.
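To make the contamination procedure more concrete, the sketch below shows one possible way of contaminating a clean utterance with a measured impulse response and a recorded background noise segment at a chosen signal-to-noise ratio. It is an illustrative re-implementation, not the project's actual tooling, and all function and parameter names are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def contaminate(clean, ir, noise, snr_db):
    """Reverberate a clean signal with a measured room IR and add
    real background noise rescaled to the requested SNR (in dB)."""
    # Convolve the clean utterance with the impulse response.
    reverberated = fftconvolve(clean, ir, mode="full")[: len(clean)]
    # Take a noise segment of matching length (here simply the prefix).
    noise = noise[: len(reverberated)]
    # Scale the noise to reach the target SNR w.r.t. the reverberated speech.
    speech_power = np.mean(reverberated ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10.0)))
    return reverberated + gain * noise
```

Applied per channel, with the IR measured from the chosen speaker position to each microphone and the noise taken from the corresponding channel of a multichannel background recording, this yields a coherent multi-microphone simulated sequence.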
Besides the DIRHA-ENGLISH corpus described in the next section, other corpora developed in the project are:
• The DIRHA Sim corpus described in [18] (30 speakers x 4 languages), which consists of 1-minute multichannel sequences including different acoustic events and speech utterances;
• A Wizard-of-OZ data set proposed in [23] to evaluate the performance of speech activity detection and speaker localization components;
• The DIRHA AEC corpus [24], which includes data specifically created for studies on Acoustic Echo Cancellation (AEC), to suppress known interferences diffused in the environment (e.g., played music);
• The DIRHA-GRID corpus [25], a multi-microphone multi-room simulated data set that derives from contaminating the GRID corpus [26] of short commands in the English language.
THE DIRHA-ENGLISH CORPUS
As done for the other four languages, also the DIRHA-ENGLISH corpus consists of a real and a simulated data set, the latter one deriving from contamination of a clean speech data set described next.
Clean speech material
The clean speech data set was realized in a recording studio of FBK, using professional equipment (e.g., a Neumann TLM 103 microphone) to obtain high-quality 96 kHz -24 bit material. 12 UK and 12 US speakers were recorded (6 males and 6 females, for each language). For each of them, the corresponding recorded material includes:
• 15 read commands;
• 15 spontaneous commands;
• 13 keywords;
• 48 phonetically-rich sentences (from the Harvard corpus);
• 66 or 67 sentences from WSJ-5k;
• 66 or 67 sentences from WSJ-20k;
• about 10 minutes of conversational speech (e.g., the subject was asked to talk about a movie recently seen).
The total time is about 11 hours. All the utterances were manually annotated by an expert. For the phonetically-rich sentences, an automatic phone segmentation procedure was applied as done in [27]. An expert then checked manually the resulting phone transcriptions and time-aligned boundaries to confirm their reliability.
For both US and UK English, 6 speakers were assigned to the development set, while the other 6 speakers were assigned to the test set. These assignments were done in order to distribute WSJ sentences as in the original task [28]. The data set contents are compliant with TIMIT specifications (e.g., file format).
The microphone network
The ITEA apartment is the reference home environment that was available during the project for data collection as well as for the development of prototypes and showcases. The flat comprises five rooms which are equipped with a network of several microphones. Most of them are high-quality omnidirectional microphones (Shure MX391/O), connected to multichannel clocked pre-amp and A/D boards (RME Octamic II), which allowed a perfectly synchronous sampling at 48 kHz, with 24 bit resolution. The bathroom and two other rooms were equipped with a limited number of microphone pairs and triplets (i.e., overall 12 microphones), while the livingroom and the kitchen comprise the largest concentration of sensors and devices. As shown in Figure 1, the living-room includes three microphone pairs, a microphone triplet, two 6-microphone ceiling arrays (one consisting of MEMS digital microphones), two harmonic arrays (consisting of 15 electret microphones and 15 MEMS digital microphones, respectively). More details about this facility can be found in [18].
Concerning this microphone network, a strong effort was devoted to characterize the environment at acoustic level, through different campaigns of IR estimation, leading to more than 10.000 IRs that describe sound propagation from different positions in space (and different possible orientations of the sound source) to any of the available microphones. The method adopted to estimate an IR consists in diffusing a known Exponential Sine Sweep (ESS) signal in the target environment, and recording it by the available microphones [29]. The accuracy of the resulting IR estimation has a remarkable impact on the speech recognition performance, as shown in [30]. For more details on the creation of the ITEA IR database, please refer to [30,18]. Note that the microphone network considered for the DIRHA-ENGLISH data set (shown in Fig.1) is limited to the living-room and to the kitchen of the ITEA apartment, but also considers harmonic arrays and MEMS microphones which were unavailable in the other DIRHA corpora.
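As an illustration of the ESS technique, the following sketch generates an exponential sine sweep, builds its inverse filter, and recovers an impulse response from a recording of the sweep replayed in the room. It is a simplified version of the procedure described in [29]; the parameter values and the direct-path peak picking are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def ess_and_inverse(f1=50.0, f2=8000.0, duration=10.0, fs=48000):
    """Exponential sine sweep (Farina) and its inverse filter
    (time-reversed sweep with an exponentially decaying envelope)."""
    t = np.arange(int(duration * fs)) / fs
    r = np.log(f2 / f1)
    sweep = np.sin(2 * np.pi * f1 * duration / r * (np.exp(t * r / duration) - 1))
    inverse = sweep[::-1] * np.exp(-t * r / duration)   # -6 dB/octave envelope
    return sweep, inverse

def estimate_ir(recording, inverse, fs=48000, ir_len=1.0):
    """Deconvolve the recorded sweep to obtain the room impulse response."""
    full = fftconvolve(recording, inverse, mode="full")
    peak = np.argmax(np.abs(full))                      # direct-path peak
    return full[peak : peak + int(ir_len * fs)]
```

In practice a portion of the signal before the peak would also be retained and the result normalized; the sketch only conveys the deconvolution idea behind the IR measurement campaigns.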
Simulated data set
The DIRHA-ENGLISH simulated data sets derive from the clean speech described in Section 3.1, and from the application of the contamination method discussed in [20,30].
The resulting corpus consists of a large number of 1-minute sequences, each including a variable number of sentences uttered in the living-room under different noisy conditions. Four types of sequences have been created, corresponding to the following tasks: 1) Phonetically-rich sentences; 2) WSJ 5-k utterances; 3) WSJ 20-k utterances; 4) Conversational speech (also including keywords and commands).
For each sequence, 62 microphone channels are available, as outlined in Section 3.2.
Real data set
For what concerns real recordings, each subject was positioned in the living-room and read the material from a tablet, standing still or sitting on a chair, in a given position. After each set, she/he was asked to move to a different position and take a different orientation. For each speaker, the recorded material corresponds to the same list of contents reported in Section 3.1 for the clean speech data set.
Note also that all the channels recorded through MEMS digital microphones were time-aligned with the others during a post-processing step (since using the same clock for all the platforms was not feasible due to different settings and sampling frequency in the case of MEMS devices) .
Once the whole real material had been collected, 1-minute sequences were derived from it in order to ensure coherence between the simulated and real data sets.
EXPERIMENTS AND RESULTS
This section describes the proposed task and the related baseline experiments concerning the US phonetically-rich portion of the DIRHA-English corpus.
ASR framework
Training and testing corpora
In this work, the training phase is accomplished with the train part of the TIMIT corpus [31]. For the DSR experiments, the original TIMIT dataset is reverberated using three impulse responses measured in the living-room. Moreover, some multichannel noisy background sequences are added to the reverberated signals, in order to better match real-world conditions. Both the impulse responses and the noisy sequences are different from those used to generate the DIRHA-ENGLISH simulated data set. (Note that, while noisy conditions are quite challenging for the WSJ and conversational parts of the DIRHA-English corpus, the phonetically-rich sequences are characterized by more favorable conditions, in order to make this material more suitable for studies on reverberation effects only.)
The test phase is conducted using the real and the simulated phonetically-rich sentences of the DIRHA-English data set. In both cases, for each 1-minute sequence an oracle voice activity detector (VAD) is applied in the next experiments, in order to avoid any possible bias due to inconsistent sentence boundaries (alternative tasks, not presented here, have also been defined with the same material in order to investigate VAD and ASR components together). A down-sampling of the speech sequences from 48 to 16 kHz is finally performed.
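The 48 kHz material can be brought to 16 kHz with a standard polyphase resampler, for example as in the minimal sketch below (one possible implementation, not necessarily the tool used by the authors).

```python
from scipy.signal import resample_poly

def downsample_48k_to_16k(signal_48k):
    """Anti-aliased 3:1 decimation from 48 kHz to 16 kHz."""
    return resample_poly(signal_48k, up=1, down=3)
```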
Feature extraction
A standard feature extraction based on MFCCs is applied to the speech sentences. In particular, the signal is blocked into frames of 25 ms with 10 ms overlapping and, for each frame, 13 MFCCs are extracted. The resulting features, together with their first and second order derivatives, are then arranged into a single observation vector of 39 components.
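The configuration described above corresponds roughly to the following sketch based on the librosa library; the features in the paper were computed within the Kaldi pipeline, so this is only an illustrative equivalent, and the sampling rate, file handling and 10 ms frame shift (the standard reading of the 25 ms / 10 ms framing) are assumptions.

```python
import numpy as np
import librosa

def extract_features(wav_path, sr=16000):
    """13 MFCCs from 25 ms frames taken every 10 ms, plus first and second
    order derivatives, giving 39-dimensional observation vectors."""
    y, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=int(0.025 * sr),
                                hop_length=int(0.010 * sr))
    delta = librosa.feature.delta(mfcc)
    delta2 = librosa.feature.delta(mfcc, order=2)
    return np.vstack([mfcc, delta, delta2]).T   # shape: (n_frames, 39)
```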
Acoustic model training
In the following experiments, three different acoustic models of increasing complexity are considered. The procedure adopted for training such models is the same as that used for the original s5 TIMIT Kaldi recipe [19]. The first baseline (mono) refers to a simple system characterized by 48 context-independent phones of the English language, each modeled by a three-state left-to-right HMM (overall using 1000 Gaussians). The second baseline (tri) is based on context-dependent phone modeling and on speaker adaptive training (SAT); overall, 2.5k tied states with 15k Gaussians are employed. Finally, the DNN baseline (DNN), trained with Karel's recipe [32], is composed of 6 hidden layers of 1024 neurons, with a context window of 11 consecutive frames (5 before and 5 after the current frame) and an initial learning rate of 0.008.
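For readers unfamiliar with the frame-splicing step implied by the 11-frame context window, the sketch below shows how the DNN input vectors can be assembled from per-frame features. It is an illustrative stand-in, not the Kaldi implementation.

```python
import numpy as np

def splice_frames(feats, left=5, right=5):
    """Stack each frame with its 5 left and 5 right neighbours,
    turning 39-dim frames into 11 * 39 = 429-dim DNN inputs."""
    n_frames, dim = feats.shape
    padded = np.pad(feats, ((left, right), (0, 0)), mode="edge")
    windows = [padded[i : i + n_frames] for i in range(left + right + 1)]
    spliced = np.stack(windows, axis=1)          # (n_frames, 11, dim)
    return spliced.reshape(n_frames, (left + right + 1) * dim)
```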
Proposed task and evaluation
The original Kaldi recipe is based on a bigram language model estimated from the phone transcriptions available in the training set. Conversely, we propose the adoption of a pure phone-loop (i.e., zero-gram based) task, in order to avoid any non-linear influence and artifacts possibly originated by a language model. Our past experience [30,33,14] indeed suggests that, even though the use of language models is certainly helpful in increasing the recognition performance, the adoption of a simple phone-loop task is more suitable for experiments purely focusing on the acoustic information.
Another difference with the original Kaldi recipe regards the evaluation of silences and closures. In the evaluation phase, the standard Kaldi recipe (based on Sclite) maps the original 48 English phones into a reduced set of 39 units, as originally done in [34]. In particular, the six closures (bcl, dcl, gcl, kcl, pcl, tcl) are mapped as "optional silences" and possible deletions of such units are not scored as errors. These phones would be likely considered as correct, since deletions of short closures occur very frequently. We believe that the latter aspect might introduce a bias in the evaluation metrics, especially for DSR tasks, where the reverberation tail makes the recognition of the closures nearly infeasible, as highlighted in Figure 2. For this reason, we propose to simply filter out all the silences and closures from both the reference and the hypothesized phone sequences. This leads to a performance reduction, since all the favorable optional silences added in the original recipe are avoided. However, a more coherent estimation of the recognition rates concerning phones as occlusive and vowels is reached.
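The proposed scoring can be summarized by the sketch below, which removes silences and closures from both reference and hypothesis and then computes the phone error rate through edit distance. It is a simplified stand-in for the Sclite-based pipeline, and the silence label in the REMOVE set is an assumption.

```python
REMOVE = {"sil", "bcl", "dcl", "gcl", "kcl", "pcl", "tcl"}

def edit_distance(ref, hyp):
    """Standard Levenshtein distance between two phone sequences."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(hyp) + 1)]
         for i in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)]

def phone_error_rate(ref, hyp):
    """PER(%) after filtering out silences and closures on both sides."""
    ref = [p for p in ref if p not in REMOVE]
    hyp = [p for p in hyp if p not in REMOVE]
    return 100.0 * edit_distance(ref, hyp) / max(len(ref), 1)
```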
Baseline results
This section provides some baseline results, which might be useful in the future to other researchers for reference purposes (part of the experiments were conducted with a Tesla K40 donated by the NVIDIA Corporation). In the following sections, results based on close-talking and distant-speech input are presented. Table 1 reports the performance obtained by decoding the clean sentences recorded in the FBK recording studio with either a phone bigram language model or a simple phone-loop. Results are provided using both the standard Kaldi s5 and the proposed recipe, in order to highlight all the discrepancies in performance that can be observed in these different experimental settings. As expected, these results highlight that the system performance is significantly improved when passing from a simple monophone-based model to a more competitive DNN baseline. Moreover, as outlined in Sec. 4.1.4, applying the original Kaldi evaluation provides a mismatch of about 20% in relative error reduction, which does not correspond to any real system improvement. Next experiments will be based on the pure phone-loop grammar scored with the proposed evaluation method.
Close-talking performance
Single distant-microphone performance
In this section, the results obtained with a single distant microphone are discussed. Table 2 reports the PER(%) achieved with some of the microphones highlighted in Fig. 1, for both the simulated and real data sets of phonetically-rich sentences.
The results clearly highlight that in the case of distant-speech input the ASR performance is dramatically reduced, if compared to a close-talking case. As already observed with close-talking results, the use of a DNN significantly outperforms the other acoustic modeling approaches. This is consistent for all the considered channels, with both simulated and real data sets. Actually, the performance on real data is slightly worse than that achieved on simulated data, due to a lower SNR characterizing the real recording sessions.
It is also worth noting that almost all the channels provide a similar performance and a comparable trend over the considered acoustic models. Only the kitchen microphone (KA6) corresponds to a more challenging situation, since all the utterances of both real and simulated data sets were pronounced in the living-room.
Delay-and-sum beamforming performance
This section reports the results obtained with a standard delay-and-sum beamforming [4] applied to both the ceiling and the harmonic arrays of the living-room (Table 3: PER(%) performance obtained with delay-and-sum beamforming applied to both the ceiling and the linear harmonic array). Table 3 shows that beamforming is helpful in improving the system performance. For instance, in the case of real data one passes from a PER of 55.1% with the single microphone to a PER of 50.6% when delay-and-sum beamforming is applied to the ceiling array signals.
Even though the ceiling array is composed of six microphones only, it ensures a slightly better performance when compared with a less compact 13 element harmonic array. This result might be due both to a better position of the ceiling array, which often ensures the presence of a direct path stronger than reflections, and to adoption of higher quality microphones. The performance improvement introduced by delay-and-sum beamforming is higher with real data, confirming that spatial filtering techniques are particularly helpful when the acoustic conditions are less stationary and predictable.
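A minimal delay-and-sum implementation in the spirit of [4] is sketched below: time differences of arrival with respect to a reference channel are estimated with GCC-PHAT, and the aligned channels are averaged. This is an illustrative re-implementation, not the exact front-end used for these baselines, and the maximum-delay parameter is an assumption.

```python
import numpy as np

def gcc_phat_delay(sig, ref, max_delay):
    """Delay (in samples) of `sig` with respect to `ref`, via GCC-PHAT."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    cc = np.fft.irfft(S / (np.abs(S) + 1e-12), n)
    cc = np.concatenate((cc[-max_delay:], cc[: max_delay + 1]))
    return np.argmax(np.abs(cc)) - max_delay

def delay_and_sum(channels, max_delay=400):
    """Align every (equal-length) channel to channel 0 and average them."""
    ref = channels[0]
    out = np.zeros(len(ref), dtype=float)
    for ch in channels:
        d = gcc_phat_delay(ch, ref, max_delay)
        out += np.roll(ch, -d)[: len(ref)]
    return out / len(channels)
```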
Microphone selection performance
The DIRHA-English corpus can also be used for microphone selection experiments. It is thus of interest to provide some lower and upper bound performance for a microphone selection technique applied to this data set. Table 4 compares the results achieved with random and with oracle selections of the microphone, for each phonetically-rich sentence. For this selection, we considered the six microphones of the living-room, which are depicted as red dots in Figure 1.
Results show that a proper microphone selection is crucial for improving the performance of a DSR system. The gap between the upper bound, based on an oracle channel selection, and the lower bound, based on a random selection of the microphone, is particularly large (Table 4: PER(%) performance obtained with a random and an oracle microphone selection). This confirms the importance of suitable microphone selection criteria. A proper channel selection has a great potential even when compared with a microphone combination based on delay-and-sum beamforming. For instance, a PER of 44.0% is obtained with an oracle channel selection against a PER of 50.6% achieved with the ceiling array.
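The two bounds can be reproduced with a few lines, assuming a matrix of per-utterance, per-channel PER values has already been computed. The variable names are illustrative, and a corpus-level score would normally pool errors over utterances rather than average per-utterance PERs.

```python
import numpy as np

def selection_bounds(per, seed=0):
    """per: (n_utterances, n_channels) array of PER values.
    Returns (random-selection PER, oracle-selection PER)."""
    rng = np.random.default_rng(seed)
    random_cols = rng.integers(0, per.shape[1], len(per))
    random_choice = per[np.arange(len(per)), random_cols]  # random channel per utterance
    oracle_choice = per.min(axis=1)                        # best channel per utterance
    return float(random_choice.mean()), float(oracle_choice.mean())
```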
CONCLUSIONS AND FUTURE WORK
This paper described the DIRHA-ENGLISH multi-microphone corpus and the first baseline results concerning the use of the phonetically-rich sentence data sets. Overall, the experimental results show the expected trend of performance, quite well aligned to past works in this field. In research studies on DSR, there are many advantages in using phonetically-rich material with such a large number of microphone channels. For instance, there is the possibility of better focusing on the impact on performance of some frontend processing techniques for what concerns specific phone categories.
The corpus also includes WSJ and conversational speech data sets that can be the object of public distribution and possibly used in future challenges regarding DSR. The latter data sets can be very helpful to investigate other key topics such as, for instance, multi-microphone hypothesis combination based on confusion networks, multiple lattices, and rescoring. Forthcoming works include the development of baselines and related recipes for MEMS microphones, for WSJ and conversational sequences, as well as for the UK English language.
CORPUS RELEASE
Some 1-minute sequences can be found at http://dirha. fbk.eu/DIRHA_English. The access to data that were used in this work, and to related documents, will be possible soon through a FBK server, with modalities that will be reported under http://dirha.fbk.eu. In the future, other data sets will be made publicly available, together with corresponding documentation and recipes, and with instructions to allow comparison of systems and maximize scientific insights.
Fig. 1. An outline of the microphone set-up adopted for the DIRHA-ENGLISH corpus. Blue small dots represent digital MEMS microphones, red ones refer to the channels considered for the following experimental activity, while black ones represent the other available microphones. The right pictures show the ceiling array and the two linear harmonic arrays installed in the living-room.
Fig. 2. The phrase "a chicken leg" uttered in close and distant-talking scenarios, respectively. The closures (in red) are dimmed by the reverberation tail in the distant speech.
The research presented here has been partially funded by the European Union's 7th Framework Programme (FP7/2007-2013) under grant agreement no. 288121 DIRHA (for more details, please see http://dirha.fbk.eu).
Table 2. PER(%) performance obtained with single distant microphones for both the simulated and real data sets of phonetically-rich sentences.

                 Simulated Data            Real Data
Mic. ID      Mono    Tri    DNN       Mono    Tri    DNN
LA6          68.8   57.7   51.6       70.5   60.9   55.1
L1C          67.4   58.5   52.4       70.3   61.7   55.6
LD07         67.5   58.1   53.2       71.5   62.6   57.3
KA6          76.7   67.3   64.0       80.5   73.6   70.5
We would like to thank ITEA S.p.A (Istituto Trentino per l'Edilizia Abitativa) for making available the apartment used for this research.
The distribution of WSJ data set is under discussion with LDC.
REFERENCES
[1] Dong Yu and Li Deng. Automatic Speech Recognition - A Deep Learning Approach. Springer, 2015.
[2] M. Wölfel and J. McDonough. Distant Speech Recognition. Wiley, 2009.
[3] E. Hänsler and G. Schmidt. Speech and Audio Processing in Adverse Environments. Springer, 2008.
[4] M. Brandstein and D. Ward. Microphone Arrays. Springer, Berlin, 2000.
[5] W. Kellermann. Beamforming for Speech and Audio Signals. In Handbook of Signal Processing in Acoustics. Springer, 2008.
[6] M. Wolf and C. Nadeu. Channel selection measures for multi-microphone speech recognition. Speech Communication, vol. 57, pp. 170-180, Feb. 2014.
[7] S. Makino, T. Lee, and H. Sawada. Blind Speech Separation. Springer, 2010.
[8] P. A. Naylor and N. D. Gaubitch. Speech Dereverberation. Springer, 2010.
[9] P. Swietojanski, A. Ghoshal, and S. Renals. Hybrid acoustic models for distant and multichannel large vocabulary speech recognition. In Proc. of ASRU 2013, pp. 285-290.
[10] Y. Liu, P. Zhang, and T. Hain. Using neural network front-ends on far field multiple microphones based speech recognition. In Proc. of ICASSP 2014, pp. 5542-5546.
[11] F. Weninger, S. Watanabe, J. Le Roux, J. R. Hershey, Y. Tachioka, J. Geiger, B. Schuller, and G. Rigoll. The MERL/MELCO/TUM system for the REVERB challenge using deep recurrent neural network feature enhancement. In Proc. of the IEEE REVERB Workshop, 2014.
[12] S. Sakai, M. Mimura, and T. Kawahara. Reverberant speech recognition combining deep neural networks and deep autoencoders. In Proc. of the IEEE REVERB Workshop, 2014.
[13] A. Schwarz, C. Huemmer, R. Maas, and W. Kellermann. Spatial diffuseness features for DNN-based speech recognition in noisy and reverberant environments. In Proc. of ICASSP 2015.
[14] M. Ravanelli and M. Omologo. Contaminated speech training methods for robust DNN-HMM distant speech recognition. In Proc. of INTERSPEECH 2015.
[15] K. Kinoshita, M. Delcroix, T. Yoshioka, T. Nakatani, E. Habets, R. Haeb-Umbach, V. Leutnant, A. Sehr, W. Kellermann, R. Maas, S. Gannot, and B. Raj. The REVERB challenge: A common evaluation framework for dereverberation and recognition of reverberant speech. In Proc. of WASPAA 2013, pp. 1-4.
[16] J. Barker, E. Vincent, N. Ma, H. Christensen, and P. Green. The PASCAL CHiME speech separation and recognition challenge. Computer Speech and Language, vol. 27, no. 3, pp. 621-633, 2013.
[17] J. Barker, R. Marxer, E. Vincent, and S. Watanabe. The third CHiME speech separation and recognition challenge: Dataset, task and baselines. In Proc. of ASRU 2015.
[18] L. Cristoforetti, M. Ravanelli, M. Omologo, A. Sosi, A. Abad, M. Hagmueller, and P. Maragos. The DIRHA simulated corpus. In Proc. of LREC 2014, pp. 2629-2634.
[19] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz, J. Silovsky, G. Stemmer, and K. Vesely. The Kaldi speech recognition toolkit. In Proc. of ASRU 2011.
[20] M. Matassoni, M. Omologo, D. Giuliani, and P. Svaizer. Hidden Markov model training with contaminated speech material for distant-talking speech recognition. Computer Speech & Language, vol. 16, no. 2, pp. 205-223, 2002.
[21] L. Couvreur, C. Couvreur, and C. Ris. A corpus-based approach for robust ASR in reverberant environments. In Proc. of INTERSPEECH 2000, pp. 397-400.
[22] T. Haderlein, E. Nöth, W. Herbordt, W. Kellermann, and H. Niemann. Using artificially reverberated training data in distant-talking ASR. Lecture Notes in Computer Science, vol. 3658, pp. 226-233. Springer, 2005.
[23] A. Brutti, M. Ravanelli, P. Svaizer, and M. Omologo. A speech event detection/localization task for multi-room environments. In Proc. of HSCMA 2014, pp. 157-161.
[24] E. Zwyssig, M. Ravanelli, P. Svaizer, and M. Omologo. A multi-channel corpus for distant-speech interaction in presence of known interferences. In Proc. of ICASSP 2015, pp. 4480-4485.
[25] M. Matassoni, R. Astudillo, A. Katsamanis, and M. Ravanelli. The DIRHA-GRID corpus: baseline and tools for multi-room distant speech recognition using distributed microphones. In Proc. of INTERSPEECH 2014, pp. 1616-1617.
[26] M. Cooke, J. Barker, S. Cunningham, and X. Shao. An audio-visual corpus for speech perception and automatic speech recognition. Journal of the Acoustical Society of America, vol. 120, no. 5, pp. 2421-2424, Nov. 2006.
[27] F. Brugnara, D. Falavigna, and M. Omologo. Automatic segmentation and labeling of speech based on hidden Markov models. Speech Communication, vol. 12, no. 4, pp. 357-370, 1993.
[28] D. B. Paul and J. M. Baker. The design for the Wall Street Journal-based CSR corpus. In Proc. of the Workshop on Speech and Natural Language, 1992, pp. 357-362.
[29] A. Farina. Simultaneous measurement of impulse response and distortion with a swept-sine technique. In Proc. of the 108th AES Convention, 2000, pp. 18-22.
[30] M. Ravanelli, A. Sosi, P. Svaizer, and M. Omologo. Impulse response estimation for robust speech recognition in a reverberant environment. In Proc. of EUSIPCO 2012, pp. 1668-1672.
[31] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, and N. L. Dahlgren. DARPA TIMIT Acoustic Phonetic Continuous Speech Corpus CDROM, 1993.
[32] A. Ghoshal and D. Povey. Sequence discriminative training of deep neural networks. In Proc. of INTERSPEECH 2013.
[33] M. Ravanelli and M. Omologo. On the selection of the impulse responses for distant-speech recognition based on contaminated speech training. In Proc. of INTERSPEECH 2014, pp. 1028-1032.
[34] K.-F. Lee and H.-W. Hon. Speaker-independent phone recognition using hidden Markov models. IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, no. 11, pp. 1641-1648, Nov. 1989.
| [] |
[
"Enumeration Classes Defined by Circuits",
"Enumeration Classes Defined by Circuits"
] | [
"Nadia Creignou \nAix Marseille Univ\nUniversité de Toulon\nCNRS\nMarseilleLISFrance\n",
"Arnaud Durand \nUniversité Paris Cité\nCNRS\nIMJ-PRG\nParisFrance\n",
"Heribert Vollmer \nLeibniz Universität Hannover\n\n"
] | [
"Aix Marseille Univ\nUniversité de Toulon\nCNRS\nMarseilleLISFrance",
"Université Paris Cité\nCNRS\nIMJ-PRG\nParisFrance",
"Leibniz Universität Hannover\n"
] | [] | We refine the complexity landscape for enumeration problems by introducing very low classes defined by using Boolean circuits as enumerators. We locate well-known enumeration problems, e.g., from graph theory, Gray code enumeration, and propositional satisfiability in our classes. In this way we obtain a framework to distinguish between the complexity of different problems known to be in DelayP, for which a formal way of comparison was not possible to this day.ACM Subject ClassificationTheory of computation → Computational complexity and cryptography; Theory of computation → Circuit complexity Enumeration Classes Defined by Circuits input and the last solution, signals that no further solution exists. Still using AC 0 circuits we then consider extended classes by allowing precomputation of different complexity (typically, polynomial time precomputation) and/or memory to be passed on from the computation of one solution to the next (from a constant to a polynomial number of bits) By this, we obtain a hierarchy of classes within DelayP/IncP shown inFig. 1.The main motivation behind our work is the wish to be able to compare the complexity of different tractable enumeration problems by classifying them in a fine hierarchy within DelayP, and to obtain lower bounds for enumeration tasks. From different application areas such as graph problems, Gray code enumeration and satisfiability, we identify natural problems, all belonging to DelayP, some of which can be enumerated in Del·AC 0 , some cannot, but allowing precomputation or a certain number of bits of auxiliary memory they can. We would like to mention in particular the maybe algorithmically most interesting contribution of our paper, the case of enumeration for satisfiability of 2-CNF (Krom) formulas. While it is known that counting satisfying assignments for formulas from this fragment of propositional logic is #P-complete [16], we exhibit a Del P ·AC 0 algorithm (i.e. Del·AC 0 with polynomial time precomputation but no memory), for enumeration, thus placing the problem in one of the lowest class in our framework. This means that surprisingly satisfying assignments of Krom formulas can be enumerated very efficiently (only AC 0 is needed to produce the next solution) after a polynomial time precomputation before producing the first solution.Building on well-known lower bounds (in particular for the parity function [10, 1]) we prove (unconditional) separations among (some of) our classes and strict containment in DelayP, and building on well-known completeness results we obtain conditional separations, leading to the inclusions and non-inclusions depicted inFig. 1.Another refinement of DelayP that has received considerable attention in the past, in particular in the database community, is the class CD • lin of problems that can be enumerated on RAMs with constant delay after linear time preprocessing [9] (see also the surveys[14,8]). It is not difficult to see (see Section 3.3) that CD•lin and Del·AC 0 are incomparable classes; thus our approach provides a novel way to refine polynomial delay. This paper is organized as follows. After some preliminaries, we introduce our new classes in Sect. 3. In Sect. 4 we present a number of upper and lower bounds for example enumeration problems from graph theory, Gray code enumeration and propositional satisfiability. Depending whether we allow or disallow precomputation steps, we obtain further conditional or unconditional separation results between classes in Sect. 5. 
Finally we conclude with a number of open problems.PreliminariesSince our main computational model will be Boolean circuits, we fix the alphabet Σ = {0, 1}, and use this alphabet to encode graphs, formulas, etc., as usual. Any reasonable encoding will do for all of our results.Let R ⊆ Σ * × Σ * be a computable predicate. We say that R is polynomially balanced, if there is a polynomial p such that for all pairs (x, y) ∈ R, we have |y| ≤ p(|x|). Now we define the enumeration problem associated to R as follows.Note that by the last requirement, the circuit family signals there is no further solution if the input solution is given again as output. Moreover, we point out that, in the definition | 10.48550/arxiv.2205.00539 | [
"https://arxiv.org/pdf/2205.00539v1.pdf"
] | 248,496,054 | 2205.00539 | 76caa5631fb4168b11db023103ff4414b62c6b01 |
Enumeration Classes Defined by Circuits
Nadia Creignou
Aix Marseille Univ, Université de Toulon, CNRS, LIS, Marseille, France

Arnaud Durand
Université Paris Cité, CNRS, IMJ-PRG, Paris, France

Heribert Vollmer
Leibniz Universität Hannover

arXiv:2205.00539. Digital Object Identifier: 10.4230/LIPIcs...
Keywords and phrases: Computational complexity, enumeration problem, Boolean circuit
Abstract
We refine the complexity landscape for enumeration problems by introducing very low classes defined by using Boolean circuits as enumerators. We locate well-known enumeration problems, e.g., from graph theory, Gray code enumeration, and propositional satisfiability in our classes. In this way we obtain a framework to distinguish between the complexity of different problems known to be in DelayP, for which a formal way of comparison was not possible to this day.

ACM Subject Classification: Theory of computation → Computational complexity and cryptography; Theory of computation → Circuit complexity
Introduction
In computational complexity theory, most often decision problems are studied that ask for the existence of a solution to some problem instance, e.g., a satisfying assignment of a given propositional formula. In contrast, enumeration problems ask for a list of all solutions, e.g., all satisfying assignments. In many application areas these are the more "natural" kind of problems-let us just mention database queries, web search, diagnosis, data mining, bioinformatics, etc. The notion of tractability for enumeration problems requires a new approach, simply because there may be a large number of solutions, exponential in the input size. Widely studied is the class DelayP ("polynomial delay"), containing all enumeration problems where, for a given instance x, (i) the time to compute the first solution, (ii) the time between producing any two consecutive solutions, and (iii) the time to detect that no further solution exists, are all polynomially bounded in the length of x. Also the class IncP ("incremental polynomial time"), where we allow the time to produce the next solution and to signal that no further solution exists to grow by a polynomial bounded in the size of the input plus the number of already computed solutions. These classes were introduced in 1988 in [12], and since then, an immense number of membership results have been obtained. Recently, also intractable enumeration problems have received some attention. Reducibilities, a completeness notion and a hierarchy of intractable enumeration problems, analogous to the well-known polynomial hierarchy, were defined and studied in [7].
In this paper we will look for notions of tractability for enumeration stricter than the above two. More specifically, we will introduce a refinement of the existing classes based on the computation model of Boolean circuits. The main new class in our framework is the class Del·AC 0 . An enumeration problem belongs to this class if there is a family of AC 0 circuits, i.e., a family of Boolean circuits of constant depth and polynomial size with unbounded fan-in gates, that (i) given the input computes the first solution, (ii) given input and a solution computes the next solution (in any fixed order of solutions), and (iii) given input and the last solution, signals that no further solution exists. Still using AC 0 circuits we then consider extended classes by allowing precomputation of different complexity (typically, polynomial time precomputation) and/or memory to be passed on from the computation of one solution to the next (from a constant to a polynomial number of bits). By this, we obtain a hierarchy of classes within DelayP/IncP shown in Fig. 1.

The main motivation behind our work is the wish to be able to compare the complexity of different tractable enumeration problems by classifying them in a fine hierarchy within DelayP, and to obtain lower bounds for enumeration tasks. From different application areas such as graph problems, Gray code enumeration and satisfiability, we identify natural problems, all belonging to DelayP, some of which can be enumerated in Del·AC 0 , and some of which cannot but can be once precomputation or a certain number of bits of auxiliary memory is allowed. We would like to mention in particular what is perhaps the algorithmically most interesting contribution of our paper, the case of enumeration for satisfiability of 2-CNF (Krom) formulas. While it is known that counting satisfying assignments for formulas from this fragment of propositional logic is #P-complete [16], we exhibit a Del P ·AC 0 algorithm (i.e., Del·AC 0 with polynomial time precomputation but no memory) for enumeration, thus placing the problem in one of the lowest classes in our framework. This means that, surprisingly, satisfying assignments of Krom formulas can be enumerated very efficiently (only AC 0 is needed to produce the next solution) after a polynomial time precomputation before producing the first solution.

Building on well-known lower bounds (in particular for the parity function [10, 1]) we prove (unconditional) separations among (some of) our classes and strict containment in DelayP, and building on well-known completeness results we obtain conditional separations, leading to the inclusions and non-inclusions depicted in Fig. 1.

Another refinement of DelayP that has received considerable attention in the past, in particular in the database community, is the class CD•lin of problems that can be enumerated on RAMs with constant delay after linear time preprocessing [9] (see also the surveys [14, 8]). It is not difficult to see (see Section 3.3) that CD•lin and Del·AC 0 are incomparable classes; thus our approach provides a novel way to refine polynomial delay.

This paper is organized as follows. After some preliminaries, we introduce our new classes in Sect. 3. In Sect. 4 we present a number of upper and lower bounds for example enumeration problems from graph theory, Gray code enumeration and propositional satisfiability. Depending on whether we allow or disallow precomputation steps, we obtain further conditional or unconditional separation results between classes in Sect. 5. Finally we conclude with a number of open problems.

Preliminaries

Since our main computational model will be Boolean circuits, we fix the alphabet Σ = {0, 1}, and use this alphabet to encode graphs, formulas, etc., as usual. Any reasonable encoding will do for all of our results.
Let R ⊆ Σ * × Σ * be a computable predicate. We say that R is polynomially balanced if there is a polynomial p such that for all pairs (x, y) ∈ R, we have |y| ≤ p(|x|). Now we define the enumeration problem associated to R as follows.

Enum·R
Input: x ∈ Σ *
Output: an enumeration of elements in Sol R (x) = {y : R(x, y)}

We require that R is computable but do not make any complexity assumptions on R. In the enumeration context, it is sometimes stipulated that R is polynomial-time checkable, i.e., membership of (x, y) in R is decidable in time polynomial in the length of the pair [15, 4]. Generally, we do not require this, but we will come back to this point later.
We assume basic familiarity of the reader with the model of Boolean circuits, see, e.g., [18,5]. We use AC 0 to denote the class languages that can be decided by uniform families of Boolean circuits of polynomial size and constant depth with gates of unbounded fan-in. The class of functions computed by such circuit families is denoted by FAC 0 , and for simplicity often again by AC 0 . The notation for the corresponding class of languages/functions defined by uniform families of circuits of polynomial size and logarithmic depth with gates of bounded fan-in is NC 1 .
The actual type of uniformity used is of no importance for the results of the present paper. However, for concreteness, all circuit classes in this paper are assumed to be uniform using the "standard" uniformity condition, i. e., DLOGTIME-uniformity/U E -uniformity [3]; the interested reader may also consult the textbook [18].
Delay Classes with Circuit Generators
In this section we present the formal definition of our new enumeration classes. As we already said, we will restrict our definition to usual delay classes; classes with incremental delay can be defined analogously, however, we will see that our delay-classes with memory in a sense reflect incremental classes in the circuit model. The main idea is that the generation of a next solution will be done by a circuit from a family; in the examples and lower and upper bounds in the upcoming sections, these families are usually of low complexity like AC 0 or NC 1 . The generator will receive the original input word plus the previous solution. Parameters in the definition will be first the complexity of any precomputation before the first solution is output, and second the amount of information passed from the generation of one solution to the next.
Delay Classes with no Memory
For a family C = (C n ) n∈N of Boolean circuits, circuit C i will be the circuit in the family with i input gates. When the length of the circuit input is clear from the context, we will usually simply write C |·| to refer to the circuit with appropriate number of input gates.
Definition 1. [K-delay]
Let R be a polynomially balanced predicate. The enumeration problem Enum·R is in Del·K if there exists a family of K-circuits C = (C n ) n∈N such that, for all inputs x, there is an enumeration y 1 , ..., y k of Sol R (x) and:
C |·| (x) = y 1 ∈ Sol R (x),
for all i < k: C |·| (x, y i ) = y i+1 ∈ Sol R (x),
C |·| (x, y k ) = y k .

Note that by the last requirement, the circuit family signals there is no further solution if the input solution is given again as output. Moreover, we point out that, in the definition above, if x is an input and y ∈ Sol R (x), then C |x|+|y| produces a z ∈ Sol R (x). However, if y ∉ Sol R (x), nothing is specified about the output z.
Next we consider classes where a precomputation before outputting the first solution is allowed. The resource bounds of the precomputation are specified by an arbitrary complexity class.
Definition 2. [K-delay with T -precomputation]
Let R be a polynomially balanced predicate and T be a complexity class. The enumeration problem Enum·R is in Del T ·K if there exists an algorithm M working with resource T and a family of K-circuits C = (C n ) n∈N such that, for all input x there is an enumeration y 1 , ..., y k of Sol R (x) and:
M computes some value x * , i.e., M (x) = x * ,
C |·| (x * ) = y 1 ∈ Sol R (x),
for all i < k: C |·| (x * , y i ) = y i+1 ∈ Sol R (x),
C |·| (x * , y k ) = y k .
Delay Classes with memory
Extending the above model, we now allow each circuit to produce slightly more than the next solution. This additional information is then passed as extra input to the computation of the next solution; in other words, it can serve as auxiliary memory.
Definition 3. [K-delay with auxiliary memory]
Let R be a polynomially balanced predicate. The enumeration problem Enum·R is in Del * ·K if there exist two families of K-circuits C = (C n ) n∈N , D = (D n ) n∈N such that, for all input x there is an enumeration y 1 , ..., y k of Sol R (x) and:
C |·| (x) = y * 1 and D |·| (y * 1 ) = y 1 ∈ Sol R (x),
for all i < k: C |·| (x, y * i ) = y * i+1 and D |·| (y * i+1 ) = y i+1 ∈ Sol R (x),
C |·| (x, y * k ) = y * k ,
for 1 ≤ i ≤ k, y i is a prefix of y * i .
When there exists a polynomial p ∈ N[x] such that |y * i | ≤ p(|x|) for all i ≤ k, the class is called Del P ·K, K-delay with polynomial auxiliary memory. When there exists a constant c ∈ N such that |y * i | ≤ |y i | + c for all i ≤ k, the class is called Del c ·K, K-delay with constant auxiliary memory.
The idea is that the y * i will contain the previous solution plus the additional memory. Hence the superscript "c" indicates a bounded auxiliary memory size.
By abuse of expression, we will sometimes say that a problem in some of these classes above can be enumerated with a delay in K or with a K-delay. When there is no restriction on memory i.e. when considering the class Del * T ·K, an incremental enumeration mechanism can be used. Indeed, the memory can then store all solutions produced so far which results in an increase of the expressive power.
Also in the case of memory, we allow possibly precomputation before the first output is made:
Definition 4. [K-delay with T -precomputation and auxiliary memory]
Let R be a polynomially balanced predicate and T be a complexity class. The enumeration problem Enum·R is in Del * T ·K if there exists an algorithm M working with resource T and two families of K-circuits C = (C n ) n∈N , D = (D n ) n∈N such that, for all input x there is an enumeration y 1 , ..., y k of Sol R (x) and:
M computes some value x * , i.e., M (x) = x * ,
C |·| (x * ) = y * 1 and D |·| (y * 1 ) = y 1 ∈ Sol R (x),
for all i < k: C |·| (x * , y * i ) = y * i+1 and D |·| (y * i+1 ) = y i+1 ∈ Sol R (x),
C |·| (x * , y * k ) = y * k ,
for 1 ≤ i ≤ k, y i is a prefix of y * i .
When there exists a polynomial p ∈ N[x] such that |y * i | ≤ p(|x|), for all i ≤ k, the class is called Del P T ·K, K-delay with T -precomputation and polynomial auxiliary memory. When there exists a constant c ∈ N such that |y * i | ≤ |y i | + c, for all i ≤ k, the class is called Del c T ·K, K-delay with T -precomputation and constant auxiliary memory.
Relation to Known Enumeration Classes
All classes we consider in this paper are subclasses of the well-known classes DelayP or IncP, resp., even if we allow our circuits to be of arbitrary depth (but polynomial size).

Theorem 5. If K, T ⊆ P, then Del P T ·K ⊆ DelayP and Del * T ·K ⊆ IncP.

Let us briefly clarify the relation between our classes and the class CD•lin of enumeration problems that have a constant delay on a RAM after linear-time precomputation. This class was introduced in [9].
The problem to enumerate, for a given graph, the pairs of all vertices that are connected by a path of length 2 has only a polynomial number of solutions and is trivially in Del·AC 0 . Since it is essentially the same as Boolean matrix multiplication, it is not in CD•lin, assuming the BMM (Boolean matrix multiplication) hypothesis.
On the other hand, note that the enumeration problem Enum-Parity, given as input a sequence of bits with the solution set consisting only of one solution, the parity of the input, is not in Del·AC 0 , since the parity function is not in AC 0 [10,1]. However, since Parity can be computed in linear time, Enum-Parity is trivially in CD•lin.
As we will show in detail in the full version of this paper, the computation of a constant number of time steps of a RAM can be simulated by AC 0 circuits. Hence if we add linear precomputation and polynomial memory to save the configuration of the RAM, we obtain an upper bound for CD•lin. To summarize:

Theorem 6. The classes Del·AC 0 and CD•lin are incomparable, and CD•lin ⊆ Del P lin ·AC 0 .
Examples
In this section we show that many natural problems, ranging from graph problems, enumeration of Gray codes and satisfiability problems lie in our circuit classes.
Graph Problems
We first consider the enumeration problem associated with the notion of reachability in a graph.
Enum-Reach
Input: a graph G = (V, E), s ∈ V
Output: an enumeration of the vertices reachable from s

Theorem 7. Enum-Reach ∈ Del P ·AC 0 .

Proof. At each step, multiplication of Boolean matrices gives the set of vertices which are reachable from s with one more step. This can be done in AC 0 . The polynomial memory is used to remember all vertices that have been encountered so far.
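To make the idea concrete, here is a small sequential sketch (ours, not part of the paper; the adjacency-matrix encoding and the function name are our own choices): each round corresponds to one Boolean matrix step, and the set of already produced vertices plays the role of the polynomial auxiliary memory.

def enumerate_reachable(adj, s):
    # adj: Boolean adjacency matrix as a list of lists of 0/1; s: start vertex.
    # Sequential simulation of the Del^P-style enumeration; not an AC^0 circuit.
    n = len(adj)
    frontier = {s}             # vertices reached with the current number of steps
    seen = {s}                 # polynomial auxiliary memory: everything output so far
    yield s
    while frontier:
        # one Boolean matrix step: neighbours of the current frontier
        nxt = {v for u in frontier for v in range(n) if adj[u][v]}
        frontier = nxt - seen  # keep only vertices that were never output before
        for v in sorted(frontier):
            yield v
        seen |= frontier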
Let us now turn to the enumeration of all transversals (not only the minimal ones).

Enum-Transversal
Input: A hypergraph H = (V, E)
Output: an enumeration of all transversals of H

Theorem 8. Enum-Transversal ∈ Del·AC 0 .

Proof. Let E be a set of hyperedges over a set of n vertices. Every binary word y = y 1 · · · y n ∈ {0, 1} n can be interpreted as a subset of vertices. We propose an algorithm that enumerates each of these words that corresponds to a transversal of H, in lexicographical order with the convention 1 < 0. The algorithm is as follows:
As a first step output 1 . . . 1, the trivial solution.
Let H be the input and y be the last output solution.
For each prefix y 1 . . . y i of y with y i = 1 and i ≤ n consider the word of length n,
z i = y 1 . . . y i−1 01 . . . 1.
Check whether at least one of these words z i is a transversal of H.
If yes select the one with the longest common prefix with y, that is the transversal z i with the largest i and output it as the next solution. Else stop.
First we prove that the algorithm is correct. The transversal that is the successor of y in our lexicographical order (where 1 < 0), if it exists, has a common prefix with y, then a bit flipped from 1 to 0, and finally is completed by 1's only. Indeed, a successor of y necessarily starts this way, and by monotonicity the first extension of such a prefix into a solution is the one completed by 1's only. As a consequence our algorithm explores all possible candidates and select the next transversal in the lexicographical order. Now let us prove that this is an AC 0 -delay enumeration algorithm that does not require memory. The main observation is that one can check with an AC 0 circuit whether a binary word corresponds to transversal of H. Now, for each i we can use a sub-circuit, which on input (H, y) checks whether y i = 1 and if yes whether z i is a transversal of H. This circuit can output (z i , 1) if both tests are positive, and (y i , 0) otherwise. All these sub-circuits can be wired in parallel. Finally it suffices to use a selector to output z i with the largest i for which (z i , 1) is output at the previous step. Such a selector can be implemented by an AC 0 circuit.
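The successor computation of this proof can also be written down directly; the following Python sketch is ours (the hypergraph encoding as a list of vertex sets is an assumption) and simulates sequentially what the AC 0 circuit does in parallel.

def is_transversal(word, hyperedges):
    # word: tuple of 0/1 over the n vertices; a transversal meets every hyperedge
    return all(any(word[v] for v in edge) for edge in hyperedges)

def next_transversal(word, hyperedges):
    # Successor of `word` in lexicographic order with the convention 1 < 0,
    # or None if `word` is the last transversal.
    n = len(word)
    best = None
    for i in range(n):
        if word[i] == 1:
            # candidate z_i: keep the prefix, flip position i to 0, fill with 1s
            z = word[:i] + (0,) + (1,) * (n - i - 1)
            if is_transversal(z, hyperedges):
                best = z       # keep the candidate with the largest such i
    return best

# Example: all transversals of the hypergraph with edges {0,1} and {1,2}.
edges = [{0, 1}, {1, 2}]
w = (1, 1, 1)                  # first solution: the full vertex set
while w is not None:
    print(w)
    w = next_transversal(w, edges)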
It is then easy to show (in a similar way) that enumeration of all dominating sets of a graph can be done in Del·AC 0 .
Gray Code
Given n ∈ N, a Gray n-code is a ranked list of elements of Σ n such that two successive words x, y differ in exactly one bit, i.e., there is exactly one position i with x i ≠ y i . Since we deal with Boolean circuits, we have to fix Σ = {0, 1}, but Gray codes are defined for arbitrary alphabets.
The binary reflected Gray code of length n, denoted G n , is made of 2 n words:
G n = [G n 0 , G n 1 , . . . , G n 2 n −1 ]
. It is defined recursively as follows: G 1 = [0, 1] and, for n ≥ 1
G n = [0G n−1 0 , 0G n−1 1 , . . . , 0G n−1 2 n−1 −1 , 1G n−1 2 n−1 −1 , . . . , 1G n−1 1 , 1G n−1 0 ].
As an example let us consider the list of pairs (rank, word) for n = 4: (0, 0000), (1, 0001), (2, 0011), (3, 0010), (4, 0110), (5, 0111), (6, 0101), (7, 0100), (8,1100), (9,1101), (10,1111), (11,1110), (12,1010), (13,1011), (14,1001), (15,1000).
Given n and r < 2 n , let b n−1 · · · b 1 b 0 be the binary decomposition of r and G n r = a n−1 · · · a 1 a 0 ∈ Σ n be the rth word in the binary reflected code of length n. It is well-known that, for all j = 0, ..., n − 1,
b j = (a j + a j+1 + · · · + a n−1 ) mod 2 and a j = (b j + b j+1 ) mod 2 (with the convention b n = 0).
Hence computing the rank of a word in the binary reflected code amounts to be able to compute parity. On the other side, computing the word from its rank can easily be done by a circuit.
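For illustration, the two conversions can be coded as follows (our sketch; list index 0 is the bit at position 0). The rank-to-word direction uses only local XORs, whereas the word-to-rank direction needs suffix parities, which is exactly why it is as hard as Parity.

def gray_word_from_rank(r, n):
    # the r-th word of the binary reflected Gray code of length n
    b = [(r >> j) & 1 for j in range(n)]                # b_j = j-th bit of r
    b.append(0)                                         # convention b_n = 0
    return [(b[j] + b[j + 1]) % 2 for j in range(n)]    # a_j = b_j xor b_{j+1}

def gray_rank_from_word(a):
    # rank of the word a in the binary reflected Gray code
    n, r = len(a), 0
    for j in range(n):
        b_j = sum(a[j:]) % 2                            # parity of a_j ... a_{n-1}
        r |= b_j << j
    return r

assert all(gray_rank_from_word(gray_word_from_rank(r, 4)) == r for r in range(16))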
While it is trivial to enumerate all words of length n in arbitrary or lexicographic order, this is not so clear for Gray code order. Also, given a rank or a first word, to enumerate all words of higher Gray code rank (in arbitrary order) are interesting computational problems.
Enum-Gray-Rank
Input: a binary word r of length n interpreted as an integer in [0, 2 n [ Output: an enumeration of words of G n that are of rank at least r.
Enum-Gray-Word
Input: a word x of length n Output: an enumeration of words of G n , that are of rank at least the rank of x.
It turns out that for those problems where the order of solutions is not important, a very efficient enumeration is possible:

Theorem 9. Let n be an integer.
1. Given 1 n , enumerating all words of length n, even in lexicographic ordering, is in Del·AC 0 .
2. Enum-Gray-Rank ∈ Del·AC 0 .
3. Enum-Gray-Word ∈ Del·AC 0 .

We next turn to those versions of the above problems where we require that solutions are given one after the other in Gray code order. For each of them, the computational complexity is provably higher than in the above cases.
Theorem 10. Given 1 n , enumerating all words of length n in a Gray code order is in
Del c ·AC 0 \Del P ·AC 0
Proof. A classical method to enumerate gray code of length n is the following [13].
Step 0 : produce the word 0 · · · 0 of length n.
Step 2k + 1 : switch the bit at position 0.
Step 2k + 2: find minimal position i where there is a 1 and switch bit at position i + 1.
This method can be turned into an AC 0 -delay enumeration without precomputation using one bit of memory (to keep track of whether the current step is an even or an odd one, all along the computation). This proves the membership in Del c ·AC 0 .
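A sequential rendering of this method (ours) may be helpful; the Boolean flag below is exactly the single bit of auxiliary memory of the Del c ·AC 0 algorithm.

def enumerate_gray(n):
    # Enumerate all words of length n in binary reflected Gray code order;
    # word[j] is the bit at position j, odd_step is the one bit of memory.
    word = [0] * n
    odd_step = True
    yield tuple(word)
    for _ in range(2 ** n - 1):
        if odd_step:
            word[0] ^= 1               # odd step: flip the bit at position 0
        else:
            i = word.index(1)          # even step: least position holding a 1 ...
            word[i + 1] ^= 1           # ... flip the bit just above it
            # (position i+1 always exists here: the last word 10...0 is never stepped from)
        odd_step = not odd_step        # the memory bit is updated and passed on
        yield tuple(word)

print(list(enumerate_gray(3)))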
For the lower bound, suppose C = (C n ) n∈N is an AC 0 circuit family enumerating the Gray code of length n after polynomial time precomputation produced by machine M . We will describe how to use C to construct an AC 0 -family computing the parity function, contradicting the lower bound given by [1,10].
Given an arbitrary word w = w n−1 . . . w 0 of length n, we want to compute its parity (w 0 + · · · + w n−1 ) mod 2. Let x * = M (1 n ). Then, w will appear as a solution somewhere in the enumeration defined by C. Let w ′ be the next word after w. There exists r such that G n r = w and G n r+1 = w ′ . By comparing w and w ′ , one can decide which transformation step has been applied to w to obtain w ′ , and thus whether r is odd or even. Note that the parity of w is 1 if and only if r is odd. Hence, one can compute parity by a constant depth circuit operating as follows:
Input w: n := |w|;
x * := M (1 n ); w ′ := C |·| (x * , w);
if the last bits of w and w ′ differ then v := 1 else v := 0; output v.
Note that the computation of x * does not depend on w but only on the length of w; hence x * can be hardwired into the circuit family, which, since M runs in polynomial time, will then be P-uniform. But we know from [10,1] that parity cannot even be computed by non-uniform AC 0 circuit families.
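This argument can also be simulated sequentially (sketch ours): given a word w, its Gray successor reveals the number of ones of w modulo 2 through the position of the flipped bit. The call computing the successor stands in for the circuit C |·| (x * , w); here, for the sake of a runnable sketch, we compute it from the rank, which a real AC 0 circuit of course could not do.

def ones_mod_2_via_gray_successor(w):
    # w: list of bits, index 0 = position 0; returns (number of ones in w) mod 2
    n = len(w)
    r = 0
    for j in range(n):                    # rank of w in the reflected Gray code
        r |= (sum(w[j:]) % 2) << j
    if r == 2 ** n - 1:                   # last word: no successor, its rank is odd
        return r % 2
    g = (r + 1) ^ ((r + 1) >> 1)          # successor word, encoded as an integer
    w_next = [(g >> j) & 1 for j in range(n)]
    # position 0 flipped  <=>  r even  <=>  w contains an even number of ones
    return 0 if w[0] != w_next[0] else 1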
We also consider the problem of enumerating all words starting not from the first one but at a given position, but now in Gray code order. Surprisingly this time the complexity will depend on how the starting point is given, by rank or by word.
Enum-Gray-Rank ord
Input: A binary word r of length n interpreted as an integer in [0, 2 n [
Output: an enumeration of the words of G n in order of increasing rank, starting from rank r.

Enum-Gray-Word ord
Input: A word x of length n
Output: an enumeration of the words of G n in Gray code order that are of rank at least the rank of x.

Theorem 11.
1. Enum-Gray-Rank ord ∈ Del c ·AC 0 \ Del P ·AC 0 .
2. Enum-Gray-Word ord is in the class Del c P ·AC 0 , but neither in Del P ·AC 0 nor in Del c ·AC 0 .
Satisfiability Problems
Deciding the satisfiability of a CNF-formula is well-known to be NP-complete. Nevertheless the problem becomes tractable for some restricted classes of formulas. For such classes we investigate the existence of an AC 0 -delay enumeration algorithm. First we consider monotone formulas.
Enum-Monotone-Sat
Input: A set of positive (resp. negative) clauses Γ over a set of variables V
Output: an enumeration of all assignments over V that satisfy Γ
The following positive result is an immediate corollary of Theorem 8.
Theorem 12. Enum-Monotone-Sat ∈ Del·AC 0 .
If we allow polynomial precomputation, then we obtain an AC 0 -delay enumeration algorithm for a class of CNF-formulas, referred to as IHS in the literature (for Implicative Hitting Sets, see [6]), which is larger than the monotone class. A formula in this class consists of monotone clauses (either all positive or all negative) together with implicative clauses.
Enum-IHS-Sat
Input: A set of clauses Γ over a set of variables V , with Γ = M ∪ B, where M is a set of positive clauses (resp. negative clauses) and B a set of binary clauses of the form (¬x) or (x ∨ ¬x ′ ) (resp. of the form (x) or (x ∨ ¬x ′ ))
Output: an enumeration of all assignments over V that satisfy Γ
Theorem 13. Enum-IHS-Sat ∈ Del P ·AC 0 \ Del * ·AC 0 .
Proof sketch. Observe that contrary to the monotone case 1....1 is not a trivial solution. Indeed a negative unary clause (¬x) in B forces x to be assigned 0, and this truth value can be propagated to other variables by the implicative clauses of the form (x ∨ ¬x ). For this reason as a precomputation step, for each variable x we compute tc(x) the set of all variables that have to be set to 0 in any assignment satisfying Γ in which x is assigned 0. With this information we can use an algorithm that enumerates all truth assignments satisfying Γ in lexicographical order very similar to the one used for enumerating the transversals of a graph (see the proof of Theorem 8). The detailed algorithm can found in the appendix.
For the lower bound, consider the st-connectivity problem: given a directed graph G = (V, A) with two distinguished vertices s and t, decide whether there exists a path from s to t. From G, s and t we build an instance of Enum-IHS-Sat as follows. We consider a set of clauses
Γ = P ∪ B, where P = {(s ∨ t)} and B = {(¬s)} ∪ {(x ∨ ¬y) | (x, y) ∈ A}. This is an AC 0 -reduction.
Observe that there exists a path from s to t if and only if Γ is unsatisfiable. Suppose that Enum-IHS-Sat ∈ Del·AC 0 , this means in particular that outputting a first assignment satisfying Γ or deciding there is none is in AC 0 . Thus the above reduction shows that st-connectivity is in AC 0 , thus contradicting the fact that st-connectivity is known not to be in AC 0 (see [10,1]).
Surprisingly the enumeration method used so far for satisfiability problems presenting a kind of monotonicity can be used for the enumeration of all assignments satisfying a Krom set of clauses (i.e., a 2-CNF formula) as soon as the literals are considered in an appropriate order.
Theorem 14. Enum-Krom-Sat ∈ Del P ·AC 0 \ Del * ·AC 0 .
Proof sketch. The proof builds on the algorithm in [2] that decides whether a set of Krom clauses is satisfiable in linear time. A full proof is given in the appendix.
Let Γ be a set of 2-clauses over a set of n variables V . We perform the following precomputation steps:
Build the associated implication graph, i.e., the directed graph G whose set of vertices is the set of literals V ∪ {¬v : v ∈ V }. For any 2-clause (l ∨ l ′ ) in Γ there are two arcs ¬l → l ′ and ¬l ′ → l in G.
For each literal l compute tc(l) the set of vertices that are reachable from l in G.
Compute the set of strongly connected components of G. If no contradiction is detected, that is if no strongly connected component contains both a variable x and its negation, then contract each strongly connected component into one vertex. The result of this operation is a DAG, which, by abuse of notation, we also call G.
Compute a topological ordering of the vertices of G.
In searching through this topological ordering, build an ordered sequence M of n literals corresponding to the first occurrences of each variable.
If the set of clauses is satisfiable, one can enumerate the satisfying assignments given as truth assignments on M in lexicographic order. The enumeration process is similar in spirit as the one developed in the preceding theorem. For the lower bound, the proof given in Theorem 13 applies.
We next turn to the special case where clauses are XOR-clauses, i.e., clauses in which the usual "or" connective is replaced by the exclusive-or connective, ⊕. Such a clause can be seen as a linear equation over the two elements field F 2 .
Enum-XOR-Sat
Input: A set of XOR-clauses Γ over a set of variables V
Output: an enumeration of all assignments over V that satisfy Γ

If we allow a polynomial precomputation step, then we obtain an AC 0 -delay enumeration algorithm for this problem that uses constant memory. Interestingly, this algorithm relies on the efficient enumeration of binary words in a Gray code order that we have seen in the previous section and, contrary to the satisfiability problems studied so far, does not provide an enumeration in lexicographic order.
Theorem 15. Enum-XOR-Sat ∈ Del c P ·AC 0 \ Del * ·AC 0 . Proof sketch.
Observe that a set of XOR-clauses Γ over a set of variables V = {x 1 , . . . x n } can be seen as a linear system over V on the two elements field F 2 . As a consequence enumerating all assignments over V that satisfy Γ comes down to enumerating all solutions of the corresponding linear system.
As a precomputation step we apply Gaussian elimination in order to obtain an equivalent triangular system. If the system has no solution, stop. Otherwise we can suppose that the linear system is of rank n − k for some 0 ≤ k ≤ n − 1, and without loss of generality that x 1 , . . . , x k are free variables, whose assignment determines the assignment of all other variables in the triangular system. We then compute a first solution s 0 corresponding to x 1 , . . . , x k assigned 0 . . . 0. Next, for each i = 1, . . . , k compute the solution s i corresponding to all variables in x 1 , . . . , x k assigned 0 except x i , which is assigned 1. Compute then the influence list of x i , L(x i ) = {j | k + 1 ≤ j ≤ n, s 0 (x j ) ≠ s i (x j )}. The influence list of x i gives the bits that will be changed when going from a solution to another one in flipping only the bit x i in the prefix corresponding to the free variables. Observe that this list does not depend on the solution (s 0 in the definition) we start from.
With this precomputation we start our enumeration procedure, which uses the enumeration of binary prefixes of length k in a Gray code order as a subprocedure. The algorithm is described in the appendix.
Separations of Delay Classes
In the previous results we already presented a few lower bounds, but now we will systematically strive to separate the studied classes. As long as no precomputation is allowed, we are able to separate all delay classes (only the case of unbounded memory, the "incremental" class, resists). With precomputation, the situation seems to be more complicated. We obtain only a conditional separation of the class with constant memory from the one without memory at all.

Unconditional Separations for Classes without Precomputation

Theorem 16. Del·AC 0 ⊊ Del c ·AC 0 .

Proof. Let x ∈ {0, 1} * , |x| = n ∈ N * , x = x 1 . . . x n . We denote by m = log n + 1. Let R L be defined for all x ∈ {0, 1} * as the union of the two following sets A and B:
A = { y ∈ {0, 1} * : |y| = m, y ≠ 0 m , y ≠ 1 m },
B = { 1 m } if x has an even number of ones, else B = { 0 m }.
We denote by z 1 , ..., z t an enumeration of elements of A. Clearly, |R L (x)| = t + 1 and t ≥ n. To show that R L ∈ Del c ·AC 0 , we use the enumeration of elements of A (which is easy) and one additional memory bit that is transferred from one step to the other to compute Parity. Indeed, we build families of circuits (C n ) and (D n ) according to Definition 3 as follows.
First C |·| (x) computes y * 1 = z 1 b 1 where b 1 = x 1 , and D |·| (y * 1 ) = z 1 . For 1 < i ≤ t, the circuit C |·| (x, y * i−1 ) computes y * i , where y * i = z i b i with b i = b i−1 ⊕ x i if i ≤ n, and b i = b i−1 else, and D |·| (y * i ) = z i .
After t steps, the memory bit b t contains a 0 if and only if the number of ones in x is even. According to this, we either output 1 m or 0 m as last solution.
Note that the size of the solutions is m, the size of the memory words above is m + 1, hence we need constant amount of additional memory. The circuit families (C n ) and (D n ) are obviously DLOGTIME-uniform.
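A sequential simulation (ours; the concrete choice of m and the ordering of A are assumptions) shows how a single bit carried along the enumeration ends up encoding the parity of the input:

def enumerate_R_L(x):
    # x: list of input bits; yields the elements of R_L(x) as bit strings
    n = len(x)
    m = n.bit_length() + 1                          # roughly log n + 1, as in the proof
    A = [format(v, f"0{m}b") for v in range(1, 2 ** m - 1)]   # all words except 0^m and 1^m
    b = 0                                           # the constant-size auxiliary memory
    for i, z in enumerate(A):
        if i < n:
            b ^= x[i]                               # fold one input bit into the memory
        yield z
    yield "1" * m if b == 0 else "0" * m            # b = 0 iff x has an even number of ones

print(list(enumerate_R_L([1, 0, 0])))               # ends with 000: odd number of ones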
Suppose now that R L ∈ Del·AC 0 and let (C n ) be the associated family of enumeration circuits. We construct a circuit family as follows: We compute in parallel all C |·| (x) and C |·| (x, z i ) for 1 ≤ i ≤ t. In this way, we will obtain among other solutions either 0 m or 1 m . We accept in the first case. Note that the z i are the same for all inputs x of the same length. Thus, we obtain an AC 0 circuit family for parity, contradicting [10,1].
By extending the above approach, one can prove the following separation:
Theorem 17. Del c ·AC 0 ⊊ Del P ·AC 0
The parity problem can be seen as an enumeration problem: given x, one outputs the unique solution 1 if the number of ones in x is even, and 0 if it is odd. Seen as a function problem, parity cannot be in Del P ·AC 0 (the fact that there is only one solution makes memory useless), but it is obviously in DelayP. This implies that Del P ·AC 0 ⊊ DelayP. Putting all the previous results together, we conclude:

Corollary 18. Del·AC 0 ⊊ Del c ·AC 0 ⊊ Del P ·AC 0 ⊊ DelayP.
Conditional Separation for Classes with Precomputation
If precomputation is allowed, the separation proofs of the previous subsection no longer work; in fact we do not know if the corresponding separations hold. However, under reasonable complexity-theoretic assumptions we can at least separate the classes Del P ·AC 0 and Del c P ·AC 0 . Note that in Theorem 10 we already proved a separation of just these two classes, but this concerns only the special case of ordered enumeration, and does not say anything about the general case. We find it interesting that the proof below relies on a characterization of the class PSPACE in terms of regular leaf-languages or serializable computation [11,17]. The proof will be given in the appendix.
Theorem 19. If NP ≠ PSPACE, then Del c ·AC 0 \ Del P ·AC 0 ≠ ∅.
Figure 1: Diagram of the classes introduced in this paper (from Del·AC 0 and its variants with constant or polynomial memory and/or polynomial-time precomputation up to DelayP and IncP) and their inclusion relations; one of the non-inclusions holds only if NP ≠ PSPACE. Bold lines denote strict inclusions.
Conclusion
The obtained inclusion relations among the classes we introduced are summarized in Fig. 1. We noted earlier that in our context, enumeration problems are defined without a complexity assumption concerning the underlying relation. We should remark that quite often, a polynomial-time upper bound is required, see [15, 4]. All of our results, with the exception of the conditional separations in Sect. 5, also hold under the stricter definition; however, the relation R L used in the lower bounds in Subsect. 5.2 is based on a PSPACE-complete set, and therefore checking whether y ∈ R L (x) requires polynomial space w.r.t. the length of x. It would be nice to be able to base these separations on polynomial-time checkable relations, or even better, to separate the classes unconditionally, but this remains open. Moreover, some further inclusions in Fig. 1 are still not known to be strict.

In Subsect. 4.3, we proved that, for several fragments of propositional logic, among them the Krom and the affine fragments, the enumeration of satisfying assignments is in the class Del P ·AC 0 . This means satisfying assignments can be enumerated very efficiently, i.e., by an AC 0 -circuit family, after some precomputation, which is also efficiently doable (in polynomial time). For another important and very natural fragment of propositional logic, namely the Horn fragment, a DelayP-algorithm is known, but it is not at all clear how polynomial-time precomputation can be of any help to produce more than one solution. Since Horn-Sat is P-complete, we conclude that Enum-Horn-Sat ∉ Del * ·AC 0 , and we conjecture that it is not in Del P ·AC 0 . In fact, we do not see any reasonable better bound than the known DelayP.
A Proofs for Section 4 (Examples)
Proof. (of Theorem 9) The proof of the first item is immediate. Let r be an input of Enum-Gray-Rank. From r one can easily compute y 0 = G n r and y 1 = G n r+1 , but also z 0 = 10 · · · 01 and z 1 = 10 · · · 00, the two last words of the binary reflected Gray code. Suppose that r is even (a similar argument can be given when r is odd, permuting the roles of 10 · · · 01 and 10 · · · 00). The enumeration step is the following:
As a first step, output y 0 .
Let r be the input and y be the last output solution. Compute (again) z 0 and y 1 .
If y = y 1 , stop.
If y = z 0 , then output z 1 .
Else, switch the bit of y at position 0, then find the minimal position i where there is a 1 and switch the bit at position i + 1. Output this word as the new solution.
The above process does not require memory. Starting from y 0 , it will start enumerating binary words of rank G n r+2 , G n r+4 , G n r+6 ... until it reaches z 0 = 10 · · · 01 = G n 2 n −2 . It then outputs z 1 = 10 · · · 00 = G n 2 n −1 and, applying the same rules, enumerates successively G n 2 n −3 , G n 2 n −5 , ... until y 1 = G n r+1 . The proof for Enum·Gray-Word proceeds along the same lines but is even simpler.
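The memoryless double-step enumeration of this proof can be simulated as follows (our sketch, written for the case of an even starting rank r; the odd case is symmetric, as noted above).

def enumerate_gray_from_rank(r, n):
    # All words of the length-n reflected Gray code of rank >= r, in the
    # memoryless order of the proof: r, r+2, ..., 2^n-2, 2^n-1, 2^n-3, ..., r+1.
    assert r % 2 == 0 and 0 <= r < 2 ** n
    word = lambda k: [((k ^ (k >> 1)) >> j) & 1 for j in range(n)]
    y0, y1 = word(r), word(r + 1)                 # first word and designated last word
    z0, z1 = word(2 ** n - 2), word(2 ** n - 1)   # the two last words 10...01 and 10...00
    y = y0
    while True:
        yield tuple(y)
        if y == y1:
            return
        if y == z0:
            y = z1
            continue
        y = list(y)
        y[0] ^= 1                                 # double step: an odd step ...
        i = y.index(1)
        y[i + 1] ^= 1                             # ... followed by an even step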
Proof. (of Theorem 11)
1. For the upper bound, we use the method described in Theorem 10. Since the starting word is given by its rank r, one needs to modify a bit the approach above by first computing G n r , check what the parity of its last bit is to determine what kind of step needs to be performed first. Then we continue on as the above proof. Suppose Enum·Gray-Rank ord is in Del P · AC 0 . Then by choosing x = 0 n we can enumerate all words of length n in a Gray code order in Del P ·AC 0 , which, by Theorem 10, is not possible.
2. For the membership in Del c P ·AC 0 one just computes the rank r of x during the precomputation and uses the preceding theorem. The first lower bound is proven exactly as in Theorem 11. The second lower bound follows by an easy modification: Suppose Enum-Gray-Word ord ∈ Del c ·AC 0 . Given a word w of length n, we can compute its parity in AC 0 as follows: Start the enumeration of G n to compute the first solution w and the next solution w ′ . Even with constant memory, this can be done by a circuit of constant depth. Then decide the parity as in Theorem 10.
Proof. (of Theorem 13) Let Γ be a set of clauses over a set of n variables
V = {x 1 , . . . x n }, with Γ = P ∪ B,
where P is a set of positive clauses and B a set of binary clauses of the form (¬x) or (x ∨ ¬x ′ ). Any truth assignment can be seen as a binary word of length n.
Observe that, contrary to the monotone case, 1 . . . 1 is not a trivial solution. Indeed a negative unary clause (¬x) in B forces x to be assigned 0, and this truth value can be propagated to other variables by the implicative clauses of the form (x ∨ ¬x ′ ). For this reason, as a precomputation step we propose the following procedure:
Build a directed graph G whose set of vertices is V . For any 2-clause (x ∨ ¬x ′ ) in B there is an arc (x, x ′ ) in G.
For each variable x compute tc(x) the set of vertices that are reachable from x in G.
Intuitively tc(x) contains all variables that have to be assigned 0 in any satisfying assignment in which x is assigned 0. Observe that any variable x such that (¬x) ∈ B has to be assigned 0, and so have to be all the variables in tc(x). We replace all these variables by their value and simplify the set of clauses accordingly. If the empty clause occurs, then Γ is not satisfiable, otherwise it is satisfiable by the 1 . . . 1 assignment.
So in the following w.l.o.g we suppose that Γ is satisfiable and has no negative unary variable.
The precomputation having been performed we propose an algorithm that enumerates all truth assignments satisfying Γ in lexicographical order with the convention 1 < 0. The algorithm is as follows:
As a first step output 1 . . . 1 the trivial solution.
Let the set of clauses, Γ, together with the set of lists of vertices reachable from each vertex x in the digraph G, {tc(x)|x ∈ V }, be the input and y be the last output solution.
For each prefix y 1 . . . y i of y with y i = 1 and i ≤ n consider the word of length n,
z i = y 1 . . . y i−1 0 w i+1 . . . w n , where for j ≥ i + 1, w j = 0 if x j ∈ tc(x i ) ∪ ⋃ {k : k < i, y k = 0} tc(x k ), and w j = 1 otherwise.
Check whether at least one of these words z i is an assignment satisfying Γ.
If yes select the one with the longest common prefix with y, that is the satisfying assignment z i with the largest i and output it as the next solution. Else stop.
Observe that y 1 . . . y i−1 0 is the prefix of a solution if and only if y 1 . . . y i−1 0w i+1 . . . w n as it is defined is a solution. Moreover if y 1 . . . y i−1 0w i+1 . . . w n is a solution, then it is the first one in the considered lexicographic order with prefix y 1 . . . y i−1 0. The proof that the algorithm is correct is then similar to the one of Theorem 8. The precomputation can be done in polynomial time, and when done allows an implementation of the enumeration algorithm with constant-depth circuits, thus proving that Enum-IHS-Sat ∈ Del P ·AC 0 .
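As an illustration of this enumeration step (the code and the 0-based encodings are ours, and we assume, as in the proof, that unary negative clauses have already been eliminated during the precomputation), the successor computation can be written as follows.

def reachable(adj, src):
    # adj[x]: set of variables x' with a clause (x or not x'); tc(x) = reachable(adj, x)
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def satisfies(y, positive_clauses, binary_clauses):
    # positive clause: iterable of variables, needs one of them set to 1;
    # binary clause (a, b) encodes (x_a or not x_b): violated iff y[a] = 0 and y[b] = 1
    return (all(any(y[v] for v in cl) for cl in positive_clauses)
            and all(not (y[a] == 0 and y[b] == 1) for (a, b) in binary_clauses))

def next_ihs_solution(y, tc, positive_clauses, binary_clauses):
    # Successor of the satisfying assignment y (a tuple of 0/1) in lexicographic
    # order with 1 < 0, or None; tc[i] is the precomputed reachability set of x_i.
    n, best = len(y), None
    for i in range(n):
        if y[i] != 1:
            continue
        forced = set().union(tc[i], *(tc[k] for k in range(i) if y[k] == 0))
        z = y[:i] + (0,) + tuple(0 if j in forced else 1 for j in range(i + 1, n))
        if satisfies(z, positive_clauses, binary_clauses):
            best = z
    return best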
For the lower bound, consider the st-connectivity problem: given a directed graph G = (V, A) with two distinguished vertices s and t, decide whether there exists a path from s to t. From G, s and t we build an instance of Enum-IHS-Sat as follows. We consider a set a clauses C = P ∪ B, where P = {(s ∨ t)} and B = {(¬s)} ∪ {(x ∨ ¬y) | (x, y) ∈ A}. This is an AC 0 -reduction.
Observe that there exists a path from s to t if and only if Γ is unsatisfiable. Suppose that Enum-IHS-Sat ∈ Del·AC 0 , this means in particular that outputting a first assignment satisfying Γ or deciding there is none is in AC 0 . Thus the above reduction shows that st-connectivity is in AC 0 , thus contradicting the fact that st-connectivity is known not to be in AC 0 (see [10,1]).
Proof. (of Theorem 14)
The proof builds on the algorithm in [2] that decides whether a set of Krom clauses is satisfiable in linear time.
Let Γ be a set of 2-clauses over a set of n variables V . We perform the following precomputation steps:
Build the associated implication graph, i.e., the directed graph G whose set of vertices is the set of literals V ∪ {¬v : v ∈ V }. For any 2-clause (l ∨ l ′ ) in Γ there are two arcs ¬l → l ′ and ¬l ′ → l in G.
For each literal l compute tc(l) the set of vertices that are reachable from l in G.
Observe that G has a duality property, i.e., if l → l ′ is an arc of G, then so is ¬l ′ → ¬l. Intuitively tc(l) contains all literals that have to be assigned 1 in any satisfying assignment in which l is assigned 1. Moreover, a given truth assignment is satisfying if and only if there is no arc 1 → 0 in the graph in which every literal has been replaced by its truth value. From this precomputation we can decide whether Γ is satisfiable. Indeed, it is proven in [2] that Γ is satisfiable if and only if no strongly connected component of G contains both a variable x and its negation, i.e., there is no variable x such that ¬x ∈ tc(x) and x ∈ tc(¬x). We can also detect equivalent literals.
So in the following w.l.o.g we suppose that Γ is satisfiable and has no equivalent literals. In particular G is then a directed acyclic graph.
We then go on with two additional precomputation steps: Compute a topological ordering of the vertices of G, which is denoted ≤ in the following.
In searching through this topological ordering, build an ordered sequence M = (l 1 , . . . , l n ) of n literals corresponding to the first occurrences of all variables, i.e., for all i, j ≤ n s.t. i ≠ j: l i ≠ l j and l i ≠ ¬l j .
The precomputation having been performed we propose an algorithm that enumerates all truth assignments satisfying Γ, given as truth assignments on M , in lexicographic order. The algorithm is as follows:
As a first step output 0 . . . 0 the first solution. Let the set of clauses, Γ, together with the set of lists of vertices reachable from each vertex l in the implication graph G be the input and y be the last output solution.
For each prefix y 1 . . . y i of y with y i = 0 and i ≤ n consider the word of length n,
z i = y 1 . . . y i−1 1w i+1 . . . w n , where for j ≥ i+1, w j = 1 if l j ∈ tc(l i )∪ {k|k<i,y k =1}
tc(l k ), w j = 0 otherwise. Check whether at least one of these words z i is an assignment satisfying Γ.
If yes select the one with the longest common prefix with y, that is the satisfying assignment z i with the largest i and output it as the next solution. Else stop.
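A compact sequential version of the whole procedure (ours; literals are encoded as pairs (variable, sign), and we rely on Python's graphlib for the topological order) may help. It assumes, as in the text, that the formula is satisfiable and has no equivalent literals, so that the implication graph is already acyclic; otherwise one first contracts the strongly connected components.

from collections import defaultdict
from graphlib import TopologicalSorter        # Python 3.9+

def neg(l):                                   # a literal is (variable, sign), sign 1 = positive
    return (l[0], 1 - l[1])

def krom_precompute(n, clauses):
    # clauses: list of pairs of literals; returns (tc, M) as in the proof
    succ = defaultdict(set)
    for l1, l2 in clauses:                    # clause (l1 or l2) gives the arcs
        succ[neg(l1)].add(l2)                 #   not l1 -> l2   and   not l2 -> l1
        succ[neg(l2)].add(l1)
    lits = [(v, s) for v in range(n) for s in (0, 1)]
    tc = {}
    for l in lits:                            # tc[l]: literals reachable from l
        seen, stack = {l}, [l]
        while stack:
            u = stack.pop()
            for w in succ[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        tc[l] = seen
    preds = {l: set() for l in lits}
    for u in lits:
        for w in succ[u]:
            preds[w].add(u)
    order = list(TopologicalSorter(preds).static_order())
    M, used = [], set()
    for l in order:                           # first occurrence of each variable
        if l[0] not in used:
            used.add(l[0])
            M.append(l)
    return tc, M

def krom_satisfies(y, M, clauses):
    val = {}
    for l, b in zip(M, y):                    # y assigns a truth value to every literal in M
        val[l], val[neg(l)] = b, 1 - b
    return all(val[l1] or val[l2] for l1, l2 in clauses)

def next_krom_solution(y, tc, M, clauses):
    # Successor of y (tuple of 0/1 over the sequence M) in lexicographic order, or None
    n, best = len(y), None
    for i in range(n):
        if y[i] != 0:
            continue
        forced = set().union(tc[M[i]], *(tc[M[k]] for k in range(i) if y[k] == 1))
        z = y[:i] + (1,) + tuple(1 if M[j] in forced else 0 for j in range(i + 1, n))
        if krom_satisfies(z, M, clauses):
            best = z
    return best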
To prove that the algorithm is correct we have to prove the following:
The assignment l 1 = 0, . . . , l n = 0 is satisfying.
For all i ≤ n − 1, y 1 . . . y i ∈ {0, 1} i is the prefix of a solution if and only if y 1 . . . y i w i+1 . . . w n , where for j ≥ i + 1, w j = 1 if l j ∈ ⋃ {k : k ≤ i, y k = 1} tc(l k ) and w j = 0 otherwise, is a solution.
Observe that if y 1 . . . y i w i+1 . . . w n is a solution, then it is the first solution with prefix y 1 . . . y i in our lexicographic order.
The fact that the assignment l 1 = 0, . . . , l n = 0 is satisfying follows from the proof in [2]. For the sake of completeness let us reprove it. In order to get a contradiction, suppose it is not the case. This means that there are l i and l j in M such that ¬l j → l i . By duality one can suppose w.l.o.g. that i ≤ j. On the one hand, i ≤ j implies that in the topological order l i ≤ l j , while on the other hand ¬l j → l i implies ¬l j ≤ l i . So we have ¬l j ≤ l i ≤ l j , which contradicts the fact that l j (and not ¬l j ) is in M . Now let us prove that y 1 . . . y i is the prefix of a solution if and only if y 1 . . . y i w i+1 . . . w n , where for j ≥ i + 1, w j = 1 if l j ∈ ⋃ {k : k ≤ i, y k = 1} tc(l k ) and w j = 0 otherwise, is a solution.
One implication is trivial. So, let us suppose that y 1 . . . y i w i+1 . . . w n is not a solution. Then, replacing the literals by their truth values in the graph makes appear an arc 1 → 0. There is a discussion on the variables underlying this arc.
If the observed contradiction involves two variables whose truth values are fixed by the prefix, then y 1 . . . y i is not the prefix of any solution.
Suppose now that the observed contradiction involves two variables such that one has its truth value fixed by the prefix, the other not. Then there are two literals l h and l j with h ≤ i and j ≥ i + 1 such that, by duality, one of the following holds:
l h → l j , y h = 1 and w j = 0, or
l h → ¬l j , y h = 1 and w j = 1, or
¬l h → l j , y h = 0 and w j = 0, or
¬l h → ¬l j , y h = 0 and w j = 1.
Suppose that l h → l j , y h = 1 and w j = 0. The arc l h → l j implies that l j ∈ tc(l h ), which together with y h = 1 implies w j = 1 by definition, a contradiction.
Suppose now that l h → ¬l j , y h = 1 and w j = 1. On the one hand, l h → ¬l j implies that any satisfying assignment that assigns 1 to l h assigns 0 to l j . On the other hand, w j = 1 means any satisfying assignment that starts by y 1 . . . y i assigns 1 to l j . Hence, since y h = 1, we get a contradiction.
An arc ¬l h → l j cannot occur. Indeed, by duality we then also have ¬l j → l h , which implies that ¬l j ≤ l h in the topological order. The fact that h ≤ j implies l h ≤ l j . Thus we have ¬l j ≤ l h ≤ l j , contradicting the fact that l j (and not ¬l j ) is in M .
Finally, an arc ¬l h → ¬l j cannot occur either. Indeed, by duality there is then also the arc l j → l h , which implies l j ≤ l h in the topological order. But, since h ≤ j, we also have l h ≤ l j , a contradiction.
It remains to deal with the case where the observed contradiction involves two variables whose truth values are not fixed by the prefix. Then there are two literals l j and l k with i + 1 ≤ j ≤ k such that, by duality, one of the following holds:
l j → l k , w j = 1 and w k = 0, or
l j → ¬l k , w j = 1 and w k = 1, or
¬l j → l k , w j = 0 and w k = 0, or
¬l j → ¬l k , w j = 0 and w k = 1.
Suppose l j → l k , w j = 1 and w k = 0. By definition w j = 1 means that there is an h ≤ i such that y h = 1 and l j ∈ tc(l h ). But then, the arc l j → l k implies that also l k ∈ tc(l h ), thus contradicting the fact that w k = 0.
Suppose now that l j → ¬l k , w j = 1 and w k = 1. On the one hand, l j → ¬l k implies that any satisfying assignment that assigns 1 to l j assigns 0 to l k . Since w j = 1 this means in particular that any satisfying assignment that starts by y 1 . . . y i assigns 0 to l k . On the contrary, w k = 1 means any satisfying assignment that starts by y 1 . . . y i assigns 1 to l k , a contradiction.
The last two cases cannot occur, for the same reasons as in the discussion above: the existence of such arcs either contradicts the definition of M or the definition of a topological order.
The precomputation can be done in polynomial time, and when done allows an implementation of the enumeration algorithm with constant-depth circuits, thus proving that Enum-Krom-Sat ∈ Del P ·AC 0 .
For the lower bound, the proof given in Theorem 13 applies.
Proof. (of Theorem 15)
Observe that a set of XOR-clauses Γ over a set of variables V = {x 1 , . . . x n } can be seen as a linear system over V on the two elements field F 2 . As a consequence enumerating all assignments over V that satisfy Γ comes down to enumerating all solutions of the corresponding linear system. As a precomputation step we propose the following procedure:
Apply Gaussian elimination in order to obtain an equivalent triangular system. If the system has no solution stop. Otherwise we can suppose that the linear system is of rank n − k for some 0 ≤ k ≤ n − 1, and without loss of generality that x 1 , . . . , x k are free variables, whose assignment determines the assignment of all other variables in the triangular system.
Compute a first solution s 0 corresponding to x 1 , . . . , x k assigned 0 . . . 0. For each i = 1, . . . , k compute the solution s i corresponding to all variables in x 1 , . . . , x k assigned 0 except x i , which is assigned 1. Compute then the influence list of x i , L(x i ) = {j | k + 1 ≤ j ≤ n, s 0 (x j ) ≠ s i (x j )}.
The influence list of x i gives the bits that will be changed when going from a solution to another one in flipping only the bit x i in the prefix corresponding to the free variables. Observe that this list does not depend on the solution (s 0 in the definition) we start from.
With this precomputation one can easily output a first solution, corresponding to the prefix x 1 = . . . = x k = 0, and then start our enumeration procedure, which uses the enumeration of binary prefixes of length k in a Gray code order as a subprocedure. The algorithm is as follows:
As a first step compute a first solution s 0 corresponding to x 1 , . . . , x k assigned 0 . . . 0. If it exists, output it, else stop. Let the triangular system obtained from Gaussian elimination together with the set of influence lists of all free variables {L(x i )|1 ≤ i ≤ k} be the input and s = ww k+1 . . . w n , where w is a prefix of length k, be the last output solution.
Compute w ′ , the successor of w in a Gray code order enumeration. If it exists, then w and w ′ differ only in one position, say the i-th. Compute s ′ = w ′ w ′ k+1 . . . w ′ n where, for j ≥ k + 1, w ′ j = 1 − w j if j ∈ L(x i ), and w ′ j = w j otherwise. Output s ′ as the next solution. Else stop.
The precomputation runs in polynomial time. The system has 2 k solutions, one for each possible prefix on x 1 , . . . , x k . According to Theorem 10 the enumeration of these prefixes in a Gray code order can be done with AC 0 -delay using constant space. Two successive words differ exactly on one index i. We can then go from one solution to the next one by flipping, in the former solution, the variables in the influence list of x i . Since the influence lists have been computed in the precomputation step, this can be done with constant depth circuits with no additional memory, thus concluding the proof.
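The whole procedure can be simulated sequentially as follows (our sketch; equations are encoded as bit masks, and the function name is an assumption). The precomputation performs Gaussian elimination over F 2 and computes one influence vector per free variable; afterwards every new solution is obtained from the previous one by a single XOR.

def xor_sat_enumerate(equations, n):
    # equations: list of (mask, b); a solution x (an n-bit integer) must satisfy
    # parity(mask & x) = b for every equation.  Yields all solutions.
    rows = list(equations)
    pivots = {}                                    # pivot column -> row index
    r = 0
    for col in range(n):                           # Gaussian elimination over GF(2)
        p = next((i for i in range(r, len(rows)) if (rows[i][0] >> col) & 1), None)
        if p is None:
            continue
        rows[r], rows[p] = rows[p], rows[r]
        for i in range(len(rows)):
            if i != r and (rows[i][0] >> col) & 1:
                rows[i] = (rows[i][0] ^ rows[r][0], rows[i][1] ^ rows[r][1])
        pivots[col] = r
        r += 1
    if any(m == 0 and b == 1 for (m, b) in rows):
        return                                     # inconsistent system: no solution
    free = [c for c in range(n) if c not in pivots]
    k = len(free)

    def solve(free_bits):                          # the solution whose free part is free_bits
        x = 0
        for idx, c in enumerate(free):
            if (free_bits >> idx) & 1:
                x |= 1 << c
        for c, i in pivots.items():                # reduced echelon form: each pivot value
            m, b = rows[i]                         # is b xor (parity of its free columns)
            if (bin(m & x).count("1") + b) % 2:
                x |= 1 << c
        return x

    s0 = solve(0)
    diff = [solve(1 << i) ^ s0 for i in range(k)]  # influence vector of the i-th free variable
    sol = s0
    yield sol
    for step in range(1, 2 ** k):                  # Gray code order on the free prefix
        i = (step & -step).bit_length() - 1        # the single free bit flipped at this step
        sol ^= diff[i]                             # flip it together with its influence list
        yield sol

# Example: x0 xor x1 = 1 and x1 xor x2 = 0 over three variables.
print([format(s, "03b") for s in xor_sat_enumerate([(0b011, 1), (0b110, 0)], 3)])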
To show that Enum-XOR-Sat ∉ Del * ·AC 0 we remark that a word w = w 1 ...w n ∈ {0, 1} n has an even number of ones iff the following set of XOR-clauses Γ has a solution: Γ = {x 1 = w 1 , x 2 = w 2 , ..., x n = w n , x 1 ⊕ x 2 ⊕ · · · ⊕ x n = 0}.
Moreover, all words of length n can be mapped to such systems of the same length.

B Proofs for Section 5 (Separations of Delay Classes)

Proof. (of Theorem 17) For a given x ∈ {0, 1} * , n = |x|, we denote by m = log log n + 1. Let R L be defined for all x ∈ {0, 1} * as the union of the two following sets A and B:
A = { y ∈ {0, 1} * : |y| = m, y ≠ 0 m , y ≠ 1 m },
B = { 1 m } if x has an even number of ones, else B = { 0 m }.
We denote by z 1 , ..., z t an enumeration of elements of A. Clearly, |R L (x)| = t + 1 and t ≥ log n.
To show that R L ∈ Del P ·AC 0 , we mimic the proof of the computation of the parity of n input bits with a tree of binary parity-gates. Indeed, we build families of circuits (C n ) and (D n ) according to Definition 3 as follows.
First C |·| (x) computes y * 1 = z 1 b 1 where b 1 j = x 2j−1 ⊕ x 2j ; if n is odd then b 1 n/2 = x n . We set D |·| (y * 1 ) = z 1 .
For 1 < i ≤ t, the circuit C |·| (x, y * i−1 ) = y * i , where y * i = z i b i where b i j = b i−1 2j−1 ⊕ b i−1 2j
for 1 ≤ j ≤ n/2 ; if |b i | is odd then the last bit of b i is the last bit of b i−1 n . We set D |·| (y * i ) = z i . After t steps, the bit b t 1 contains a 0 if and only if the number of ones in x is even. According to this, we either output 1 m or 0 m as last solution.
Note that the memory words above are of linear size. Suppose now that R L ∈ Del c · AC 0 . Let (C n ), (D n ) be the associated families of enumeration circuits and let c ∈ N be the space allowed for additional memory. Fix an input x, |x| = n. Let t be as above. Now for any sequence b 1 , . . . , b t+1 of additional memory, where one b i is empty and all the others have length exactly c, we can check in AC 0 that this is a correct sequence, which means that for every b i , C |·| (x, z i b i ) is of the form z j b j for some j.
Here, for the empty memory word ε we let zε = ε. For the correct sequence, we check as in the proof of Theorem 16 if the solution 0 m or 1 m will appear.
Since the length of a sequence of additional memory words is at most ct, their number is polynomial in n. Hence we obtain an AC 0 circuit for parity which does not exist.
Proof. (of Theorem 19)
In 1993 Hertrampf et al. [11] proved that PSPACE is AC 0 -serialisable, i.e., every PSPACE decision algorithm can be divided into an exponential number of slices, each requiring only the power of AC 0 and passing only a constant number of bits to the next slice. More formally, for every language L ∈ PSPACE there is an AC 0 -circuit family C = (C n ) n∈N and numbers k, l ∈ N such that every C N has k output bits and, for every input x with |x| = n and N = n + n l + k, we have x ∈ L if and only if c 2 n l = 1 . . . 1 (the all-ones string of length k), where y i is the i-th string in {0, 1} n l in lexicographic order, c 1 = C N (x, y 1 , 0 . . . 0), and c i = C N (x, y i , c i−1 ) for 1 < i ≤ 2 n l .

Depending on L we now define the relation
R L (x) = { 0y : y ∈ {0, 1} n l } ∪ { 1 n l +1 | x ∈ L } ∪ { 10 n l | x ∉ L }.
To enumerate R L with constant auxiliary memory, we construct circuit families C = (C n ) n∈N and D = (D n ) n∈N as follows:
C |·| (x) = (y 1 , c) where c = C N (x, y 1 , 0 . . . 0),
C |·| (x, (y i , c)) = (y i+1 , c ′ ) for 1 ≤ i < 2 n l , where c ′ = C N (x, y i+1 , c),
C |·| (x, (y 2 n l , c)) = (1 n l +1 , c),
C |·| (x, (1 n l +1 , c)) = (1 n l +1 , c),
D |·| (y i , c) = 0y i for 1 ≤ i ≤ 2 n l ,
D |·| (1 n l +1 , c) = 1 n l +1 if C N (x, y 2 n l , c) = 1 . . . 1, and 10 n l otherwise.

Thus we see that R L can be enumerated with k bits of auxiliary memory, i.e., R L ∈ Del c ·AC 0 . Now suppose Enum·R L ∈ Del P ·AC 0 via an AC 0 -circuit family C = (C n ) n∈N . Then L ∈ NP by the following algorithm.
Given x, use precomputation to produce x * . Check if C |·| (x * ) = 1 n k +1 . If yes, accept. Guess some output y ∈ R L (x). Check that C |·| (x * , y) = 1 n k +1 . If yes, then accept.
Miklós Ajtai. First-order definability on finite structures. Annals of Pure and Applied Logic, 45:211-225, 1989.
Bengt Aspvall, Michael F. Plass, and Robert Endre Tarjan. A linear-time algorithm for testing the truth of certain quantified boolean formulas. Inf. Process. Lett., 8(3):121-123, 1979. doi:10.1016/0020-0190(79)90002-4.
David A. Mix Barrington, Neil Immerman, and Howard Straubing. On uniformity within NC^1. J. Comput. Syst. Sci. (JCSS), 41(3):274-306, 1990.
Florent Capelli and Yann Strozecki. Incremental delay enumeration: Space and time. Discret. Appl. Math., 268:179-190, 2019. doi:10.1016/j.dam.2018.06.038.
Peter Clote and Evangelos Kranakis. Boolean Functions and Computation Models. Texts in Theoretical Computer Science, An EATCS Series. Springer, 2002. doi:10.1007/978-3-662-04943-3.
Nadia Creignou, Sanjeev Khanna, and Madhu Sudan. Complexity classifications of Boolean constraint satisfaction problems, volume 7 of SIAM monographs on discrete mathematics and applications. SIAM, 2001.
Nadia Creignou, Markus Kröll, Reinhard Pichler, Sebastian Skritek, and Heribert Vollmer. A complexity theory for hard enumeration problems. Discret. Appl. Math., 268:191-209, 2019. doi:10.1016/j.dam.2019.02.025.
Arnaud Durand. Fine-grained complexity analysis of queries: From decision to counting and enumeration. In Dan Suciu, Yufei Tao, and Zhewei Wei, editors, Proceedings of the 39th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS 2020, Portland, OR, USA, June 14-19, 2020, pages 331-346. ACM, 2020. doi:10.1145/3375395.3389130.
Arnaud Durand and Etienne Grandjean. First-order queries on structures of bounded degree are computable with constant delay. ACM Trans. Comput. Log., 8(4), 2007.
Merrick L. Furst, James B. Saxe, and Michael Sipser. Parity, circuits, and the polynomial-time hierarchy. Math. Syst. Theory, 17(1):13-27, 1984. doi:10.1007/BF01744431.
Ulrich Hertrampf, Clemens Lautemann, Thomas Schwentick, Heribert Vollmer, and Klaus W. Wagner. On the power of polynomial time bit-reductions. In Proceedings 8th Structure in Complexity Theory, pages 200-207. IEEE Computer Society Press, 1993.
David S. Johnson, Christos H. Papadimitriou, and Mihalis Yannakakis. On generating all maximal independent sets. Inf. Process. Lett., 27(3):119-123, 1988. doi:10.1016/0020-0190(88)90065-8.
Donald L. Kreher and Douglas Robert Stinson. Combinatorial Algorithms: generation, enumeration, and search. CRC Press, 1999.
Luc Segoufin. A glimpse on constant delay enumeration (invited talk). In Ernst W. Mayr and Natacha Portier, editors, 31st International Symposium on Theoretical Aspects of Computer Science (STACS 2014), March 5-8, 2014, Lyon, France, volume 25 of LIPIcs, pages 13-27. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2014. doi:10.4230/LIPIcs.STACS.2014.13.
Yann Strozecki. Enumeration complexity. Bull. EATCS, 129, 2019. URL: http://bulletin.eatcs.org/index.php/beatcs/article/view/596/605.
Leslie G. Valiant. The complexity of enumeration and reliability problems. SIAM J. Comput., 8(3):410-421, 1979. doi:10.1137/0208032.
Heribert Vollmer. A generalized quantifier concept in computational complexity theory. In Jouko A. Väänänen, editor, Generalized Quantifiers and Computation, 9th European Summer School in Logic, Language, and Information, ESSLLI'97 Workshop, Revised Lectures, volume 1754 of Lecture Notes in Computer Science, pages 99-123. Springer, 1999. doi:10.1007/3-540-46583-9_5.
Heribert Vollmer. Introduction to Circuit Complexity - A Uniform Approach. Springer, Heidelberg, 1999. URL: https://doi.org/10.1007/978-3-662-03927-4.
| [] |
[
"Combining Multiple Methods for the Automatic Construction of Multilingual WordNets *",
"Combining Multiple Methods for the Automatic Construction of Multilingual WordNets *"
] | [
"Jordi Atserias \nDepartament de Llenguatges i Sistemes Informatics\nUniversitat Politecnica de Catalunya. Carrer Jordi Girona Salgado\n1-3. 08034BarcelonaCatalonia\n",
"Salvador Climent climent@lingua.fil.ub.es \nDepartament de Llenguatges i Sistemes Informatics\nUniversitat Politecnica de Catalunya. Carrer Jordi Girona Salgado\n1-3. 08034BarcelonaCatalonia\n",
"Xavier Farreres \nDepartament de Llenguatges i Sistemes Informatics\nUniversitat Politecnica de Catalunya. Carrer Jordi Girona Salgado\n1-3. 08034BarcelonaCatalonia\n",
"German Rigau \nDepartament de Llenguatges i Sistemes Informatics\nUniversitat Politecnica de Catalunya. Carrer Jordi Girona Salgado\n1-3. 08034BarcelonaCatalonia\n",
"Horacio Rodríguez horacio@lsi.upc.es \nDepartament de Llenguatges i Sistemes Informatics\nUniversitat Politecnica de Catalunya. Carrer Jordi Girona Salgado\n1-3. 08034BarcelonaCatalonia\n"
] | [
"Departament de Llenguatges i Sistemes Informatics\nUniversitat Politecnica de Catalunya. Carrer Jordi Girona Salgado\n1-3. 08034BarcelonaCatalonia",
"Departament de Llenguatges i Sistemes Informatics\nUniversitat Politecnica de Catalunya. Carrer Jordi Girona Salgado\n1-3. 08034BarcelonaCatalonia",
"Departament de Llenguatges i Sistemes Informatics\nUniversitat Politecnica de Catalunya. Carrer Jordi Girona Salgado\n1-3. 08034BarcelonaCatalonia",
"Departament de Llenguatges i Sistemes Informatics\nUniversitat Politecnica de Catalunya. Carrer Jordi Girona Salgado\n1-3. 08034BarcelonaCatalonia",
"Departament de Llenguatges i Sistemes Informatics\nUniversitat Politecnica de Catalunya. Carrer Jordi Girona Salgado\n1-3. 08034BarcelonaCatalonia"
] | [] | This paper explores the automatic construction of a multilingual Lexical Knowledge Base from preexisting lexical resources. First, a set of automatic and complementary techniques for linking Spanish words collected from monolingual and bilingual MRDs to English WordNet synsets are described. Second, we show how resulting data provided by each method is then combined to produce a preliminary version of a Spanish WordNet with an accuracy over 85%. The application of these combinations results on an increment of the extracted connexions of a 40% without losing accuracy. Both coarsegrained (class level) and fine-grained (synset assignment level) confidence ratios are used and evaluated. Finally, the results for the whole process are presented. * | 10.1075/cilt.189.32ats | [
"https://arxiv.org/pdf/cmp-lg/9709003v2.pdf"
] | 3,945,367 | cmp-lg/9709003 | d1fd3e423da7b9a80a4bed6543b86c0765f4e90c |
Combining Multiple Methods for the Automatic Construction of Multilingual WordNets *
16 Sep 1997
Jordi Atserias
Departament de Llenguatges i Sistemes Informatics
Universitat Politecnica de Catalunya. Carrer Jordi Girona Salgado
1-3. 08034BarcelonaCatalonia
Salvador Climent climent@lingua.fil.ub.es
Departament de Llenguatges i Sistemes Informatics
Universitat Politecnica de Catalunya. Carrer Jordi Girona Salgado
1-3. 08034BarcelonaCatalonia
Xavier Farreres
Departament de Llenguatges i Sistemes Informatics
Universitat Politecnica de Catalunya. Carrer Jordi Girona Salgado
1-3. 08034BarcelonaCatalonia
German Rigau
Departament de Llenguatges i Sistemes Informatics
Universitat Politecnica de Catalunya. Carrer Jordi Girona Salgado
1-3. 08034BarcelonaCatalonia
Horacio Rodríguez horacio@lsi.upc.es
Departament de Llenguatges i Sistemes Informatics
Universitat Politecnica de Catalunya. Carrer Jordi Girona Salgado
1-3. 08034BarcelonaCatalonia
Combining Multiple Methods for the Automatic Construction of Multilingual WordNets *
16 Sep 1997
This paper explores the automatic construction of a multilingual Lexical Knowledge Base from preexisting lexical resources. First, a set of automatic and complementary techniques for linking Spanish words collected from monolingual and bilingual MRDs to English WordNet synsets are described. Second, we show how resulting data provided by each method is then combined to produce a preliminary version of a Spanish WordNet with an accuracy over 85%. The application of these combinations results on an increment of the extracted connexions of a 40% without losing accuracy. Both coarsegrained (class level) and fine-grained (synset assignment level) confidence ratios are used and evaluated. Finally, the results for the whole process are presented. *
Introduction
There is no doubt about the increasing importance of using wide coverage ontologies for NLP tasks. Although available ontologies (Upper Model (Bateman 90), CYC (Lenat 95), WordNet (Miller 90), ONTOS (Nirenburg & Defrise 93), Mikrokosmos, EDR (Yokoi 95), etc.) 1 differ in great extent on several characteristics (e.g. broad coverage vs. domain specific, lexically oriented vs. conceptually-oriented, granularity, kind of information placed in nodes, kind of relations, way of building, etc.), it is clear that WordNet has become a de-facto standard for a wide range of NL applications. Developed at Princeton by George Miller and his research group (Miller 90), the figures the currently available version of WordNet 1.5 (WN1.5) shows are impressive (119,217 words, 91,587 synsets). WN1.5 is organised as a network of lexicalized concepts (Synsets) which are sets of word meanings (WMs) considered to be synonymous within a context. Synsets are connected by several semantic relations (nevertheless, only that of hypernymy-hyponymy is considered in this work).
WordNet success has encouraged several projects in order to build WordNets (WNs) for other languages or to develop multilingual WNs.
The most ambitious of such efforts is EuroWordNet (EWN) 2 , a project aiming to build a multilingual WordNet for several European languages 3 . The work we present here is included within EWN and presents our approach for (semi)automatically building a Spanish WN (Climent et al. 96). The main strategy within our approach is to map Spanish words to WN1.5 synsets, creating for Spanish a network that is parallel in structure to the English one. Therefore, our main goal is to attach Spanish word meanings to the existing WN1.5 concepts. This paper describes automatic techniques which have been developed in order to achieve this goal for nouns.
Recently, several attempts have been made to automatically produce multilingual ontologies. (Ageno et al. 94) link taxonomic structures derived from DGILE and LDOCE by means of a bilingual dictionary. (Knight & Luk 94) focus on the construction of Sensus, a large knowledge base for supporting the Pangloss Machine Translation system, merging ontologies (ONTOS and UpperModel) and WordNet with monolingual and bilingual dictionaries. (Okumura & Hovy 94) describe a (semi)automatic method for associating a Japanese lexicon to an ontology using a Japanese/English bilingual dictionary as a "bridge". (Rigau et al. 95) link Spanish word senses to WordNet synsets using also a bilingual dictionary. (Rigau & Agirre 95) exploit several bilingual dictionaries for linking Spanish and French words to WordNet synsets.
Our approach for building the Spanish WN (SpWN) is based on the following considerations:
• The close conceptual similarity of English and Spanish allows for the preservation of the structure of WN1.5 in order to build the SpWN. Moreover, when necessary, lexicalization mismatches are solved using multi-word translations (collocations) supplied by bilingual dictionaries.
• An extensive use of pre-existing structured lexical sources is performed in order to achieve a massive automatic acquisition process.
• The accuracy of cross-language mappings is validated by hand on a sample. Each attachment to WN bears a confidence score (CS).
• Only attachments over a threshold are considered. Moreover, a manual inspection of attachments in a given range will be carried out.
Undoubtedly, following this approach most of the criticisms levelled at WN1.5 also apply to SpWN: too much sense fine-grainedness, lack of cross-POS relationships, simplicity of the relational information, not a purely lexical but rather a lexical-conceptual database, etc. Despite these drawbacks, WN1.5 is widely used and tested and supports few but the most basic semantic relations. Our approach ensures that most of the huge networking effort, which is necessary to build a WN from scratch, is already done.
The different sources involved in the process differ in accuracy. High CSs can be assigned to original sources, such as MRDs, but derived sources, which result from the output of automatic procedures, bear substantially lower CSs. Our major claim is that multiple sources or procedures leading to the same result increase the corresponding CS, whereas leading to different results decreases the overall CS. This paper is organized as follows. Section 2 presents the lexical knowledge resources used. Section 3 describes the different types of extraction/mapping methods developed. Main results and quality assessment issues are presented in Section 4. Section 5 presents some final remarks.
Lexical Knowledge Sources
Several lexical sources have been applied in order to assign Spanish WMs to WN1.5 synsets:
1. Spanish/English and English/Spanish bilingual dictionaries
2. A large Spanish monolingual dictionary (DGILE)
3. English WordNet (WN1.5).
By merging both directions of the bilingual dictionaries we obtain what we call the homogeneous bilingual (HBil). The maximum synset coverage we can expect to reach by using HBil, due to its small size, is 32%. Table 1 summarises the amount of data.
Methods
Bilingual entries must be disambiguated against WN. The different procedures developed for linking Spanish lexical entries to WN synsets can be classified in three main groups according to the kind of knowledge sources involved in the process:
• Class methods: use as knowledge sources individual entries coming from bilinguals and WN synsets.
• Structural methods: take profit of the WN structure.
• Conceptual Distance methods: makes use of knowledge relative to meaning closeness between lexical concepts.
Every method has been manually inspected in order to measure its CS. Such tests have been performed on a random sample of 10% using the Validation Interface (VI), an environment designed to allow hand validation of the assignment of Spanish word forms to WN synsets. It allows consulting and navigating through the monolingual and bilingual lexical databases and WN. The following diagnostics can result from this validation:
ok : correct links.
ko : fully incorrect links.
hypo : links to a hyponym of the correct synset.
near : links to near synonyms that could be considered ok.
hyper : links to a hypernym of the correct synset.
Class Methods
Following the properties described in (Rigau & Agirre 95) Hbil has been processed and 2 groups of 4 different cases have been collected depending on whether the English words are either monosemous or polysemous relative to WN 1.5. Afterwards two hybrid criteria are considered as well.
Monosemic Criteria
These criteria apply only to monosemous EW with respect to WN1.5. As a result, this unique synset is linked to the corresponding Spanish words.
• Monosemic-1 criterion (1:1): A Spanish Word (SW) has only one English translation (EW); symmetrically, EW has SW as its unique translation.
Figure 1: Monosemic Criteria
• Monosemic-2 criterion: A SW has more than one translation; each EW has SW as its unique translation.
Polysemic Criteria
These criteria follow the four criteria described in the previous subsection, but for polysemous English words (relative to WN1.5).
Hybrid Criteria
• Variant criterion For a WN1.5 synset which contains a set of variants EWs, if it is the case that two or more of the variants EWi have only one translation to the same Spanish word SW, a link is produced for SW into the WN1.5 synset.
• Field criterion This procedure makes use of the existence of a field identifier in some entries (over 4,000) of the English/Spanish bilingual. For each English entry bearing a field identifier (EW),
if it is the case that both occur in the same synset, for each EW translation to Spanish a link is produced. Results of the manual verification for each criterion are shown in table 2.
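To make the class criteria concrete, the following is a minimal, illustrative sketch of how the Monosemic-1 (1:1) criterion could be implemented. NLTK's English WordNet stands in for WN1.5, and the toy bilingual entries and the exact one-to-one test are assumptions made for the example, not the implementation used in this work.

```python
# Sketch of the Monosemic-1 (1:1) class criterion (illustrative assumptions only).
from collections import defaultdict
from nltk.corpus import wordnet as wn

def monosemic_1_links(es_to_en):
    """es_to_en: dict mapping a Spanish noun to its list of English translations."""
    en_to_es = defaultdict(set)
    for sw, ews in es_to_en.items():
        for ew in ews:
            en_to_es[ew].add(sw)
    links = []
    for sw, ews in es_to_en.items():
        if len(ews) != 1:
            continue                          # the SW must have a unique translation
        ew = ews[0]
        if en_to_es[ew] != {sw}:
            continue                          # the EW must translate back only to SW
        synsets = wn.synsets(ew, pos=wn.NOUN)
        if len(synsets) == 1:                 # the EW must be monosemous in WordNet
            links.append((sw, synsets[0].name()))
    return links

# toy, invented bilingual entries
print(monosemic_1_links({"jirafa": ["giraffe"], "perro": ["dog"]}))
```

The remaining class criteria differ only in how many translations are allowed on each side and in whether the English word is monosemous or polysemous.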
Structural Methods
In this set of methods the whole WN1.5 structure has been used to disambiguate. From HBil, all combinations of English words from 2 up to the maximum number of translations for each entry have been generated. The idea is to find as much common information between the corresponding EWs in WN1.5 as possible. On the extracted combinations, four experiments have been carried out resulting in the criteria described below:
• Intersection criterion
Conditions: All EWs share at least one common synset in WordNet. Link: SW is linked to all common synsets of its translations.
• Parent criterion
Conditions: A synset of an EW is direct parent of synsets corresponding to the rest of EWs. Link: The SW is linked to all hyponym synsets 7
• Brother criterion Conditions: All EWs have synsets which are brothers respecting to a common parent. Link: The SW is linked to all co-hyponym synsets.
• Distant hyperonymy criterion Conditions: A synset of an EW is a distant hypernym of synsets of the rest of the EWs. Link: The Spanish Word is linked to the lower-level (hyponym) synsets.
As the results of all these criteria follow a structure like:
Spanish-Word <list-of-EW> <list-of-synsets>, the Structural Criteria have been subsequently pruned by deleting repeating entries subsumed by larger ones.
The overall results of the Structural criteria are shown in Table 3. (Footnote 7: A previous experiment assigning SW only to the hypernym synset, assuming this would appropriately capture global information, resulted in too general assignments.)
A finer-grained experiment has been performed on the size of the translation list. We have found that the larger this size is, the higher is the precision obtained and, even more important, the lower is the KO-ratio.
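As an illustration of the structural criteria, a rough sketch of the Intersection criterion is given below; NLTK's WordNet is used in place of WN1.5 and the sample entry is an invented assumption.

```python
# Sketch of the structural Intersection criterion (illustrative assumptions only).
from nltk.corpus import wordnet as wn

def intersection_links(spanish_word, english_translations):
    """Link SW to every noun synset shared by all of its English translations."""
    synset_sets = [set(wn.synsets(ew, pos=wn.NOUN)) for ew in english_translations]
    if not synset_sets:
        return []
    common = set.intersection(*synset_sets)
    return [(spanish_word, name) for name in sorted(s.name() for s in common)]

# invented example: "coche" translated as both "car" and "automobile"
print(intersection_links("coche", ["car", "automobile"]))
```

The Parent, Brother and Distant-hypernymy criteria can be sketched analogously by replacing the shared-synset test with a test on the hypernym relation between the candidate synsets.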
Conceptual Distance Methods
Taking as reference a structured hierarchical net, conceptual distance tries to provide a basis for determining closeness in meaning among words. Conceptual distance between two concepts is defined in (Rada et al. 89) as the length of the shortest path that connects the concepts in a hierarchical semantic net. In a similar approach, (Sussna 93) employs the notion of conceptual distance between network nodes in order to improve precision during document indexing. Following these ideas, (Agirre et al. 94) describe a new conceptual distance formula for automatic spelling correction and (Rigau 95), using this conceptual distance formula, presents a methodology to enrich dictionary senses with semantic tags extracted from WordNet. The same measure is used in (Rigau et al. 95) for linking taxonomies extracted from DGILE and LDOCE and in (Rigau et al. 97) as one of the methods for the Genus Sense Disambiguation problem in DGILE. Conceptual density, a more complex semantic measure among words is defined in (Agirre & Rigau 95) and used in (Agirre & Rigau 96) as a proposal for WSD of the Brown Corpus. The Conceptual Distance formula used in this work, also described in (Agirre et al. 94) is shown in Figure 5.
dist(w_1, w_2) = min_{c_{1i} ∈ w_1, c_{2j} ∈ w_2} Σ_{c_k ∈ path(c_{1i}, c_{2j})} 1 / depth(c_k)    (1)

Figure 5: Conceptual distance formula

where w_i are words and c_{ij} are synsets representing those words. Conceptual Distance between two words depends on the length of the shortest path that connects the concepts and the specificity of the concepts in the path. Then, given two words, the application of the Conceptual Distance formula selects the closest concepts that represent them.
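The formula could be sketched in code roughly as follows. The use of NLTK's modern English WordNet, the first-hypernym chain used to build the connecting path, and the restriction to nouns are simplifying assumptions for illustration, not the exact procedure used in this work.

```python
# Sketch of the conceptual distance of Eq. (1) over WordNet (illustrative only).
from nltk.corpus import wordnet as wn

def depth(syn):
    """Depth of a synset: length of its longest hypernym path from the root."""
    return max(len(p) for p in syn.hypernym_paths())

def connecting_path(s1, s2):
    """Nodes on a path s1 -> lowest common hypernym -> s2 (first-hypernym chains)."""
    common = s1.lowest_common_hypernyms(s2)
    if not common:
        return None
    anc = common[0]

    def up_chain(s):
        chain, cur = [s], s
        while cur != anc and cur.hypernyms():
            cur = cur.hypernyms()[0]
            chain.append(cur)
        return chain

    left, right = up_chain(s1), up_chain(s2)
    return left + right[-2::-1]      # append the second chain reversed, ancestor once

def conceptual_distance(word1, word2):
    """Eq. (1): minimise over synset pairs the sum of 1/depth along the path."""
    best = float("inf")
    for c1 in wn.synsets(word1, pos=wn.NOUN):
        for c2 in wn.synsets(word2, pos=wn.NOUN):
            path = connecting_path(c1, c2)
            if path is not None:
                best = min(best, sum(1.0 / depth(c) for c in path))
    return best

print(conceptual_distance("bank", "money"))
```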
Following this approach, three different methods have been applied.
Using Co-occurrence words collected from DGILE (CD1)
Following (Wilks et al. 93), two words are co-occurrent in a dictionary if they appear in the same definition. For DGILE, a lexicon of 300,062 co-occurrence pairs among 40,193 Spanish word forms was derived, and the affinity between these pairs was measured by means of the Association Ratio (AR), which can be used as a fine grained CS.
Then, the Conceptual Distance formula for all those pairs has been computed using HBil and the nominal part of WN.
Using Headword and genus of DGILE (CD2)
Computing the Conceptual Distance formula on the headword and the genus term of 92,741 nominal definitions of DGILE dictionary (only 32,208 with translation to English).
Using Spanish entries with multiple translations in the bilingual dictionary (CD3)
In this case, we have derived a small but closely related lexicon of 3,117 translation equivalents with multiple translations from the Spanish/English direction of the bilingual dictionary (only 2,542 with connection to WordNet1.5). Table 5 summarizes the performance of the three Conceptual Distance methods.
Combining methods
Collecting those synsets produced by the methods described above with an accuracy greater than 85% (mono1, mono2, mono3, mono4, variants, field) we obtain a preliminary version of the Spanish WordNet containing 10,982 connections (1,777 polysemous) among 7,131 synsets and 8,396 Spanish nouns, with an overall CS of 87.4%. However, combining the discarded methods we can take advantage of portions of them that are precise enough to be acceptable.
All files resulting from discarded methods were crossed and their intersections were calculated. Using VI, a manual inspection of samples from each intersection was carried out. Results are shown in table 6.
In bold appear intersections with a CS greater than 85%. Up to 7,244 connections (2,075 polysemous) can be selected with 85.63% CS, 4,553 of which are new, with an overall CS of 84%, resulting in a 41% increase. It must be pointed out that 1,308 new connections are polysemous. Then a second version of the Spanish WordNet has been obtained, containing 15,535 connections (3,373 polysemous) among 10,786 synsets and 9,986 Spanish nouns, with a final accuracy of 86.4%. Table 7 shows the overall figures of the resulting SpWNs.
Criterion   #Links   #Synsets   #Words   %ok   %ko   %hypo   %hyper   %near
CD-1        23,828   11,269     7,283    56    38    3       2        2
CD-2        24,739   12,709     10,300   61    35    0       0        3
CD-3        4,567    3,089      2,313    75    12    0       2        8

Table 5: Performance of Conceptual Distance methods
Conclusions
An approach for building multilingual Wordnets combining a variety of lexical sources as well as a variety of methods has been proposed which tries to take profit of the existing WN1.5 for attaching words from other languages in a way guided mainly by the content of bilingual lexical sources.
A central issue of our approach is the combination of methods and sources in a way that the accuracy of the data obtained from the combined methods overcomes the accuracy obtained from the individual ones. Several families of methods have been tested, each of them bearing its own CS. Only those methods offering a result over a threshold (85%) have been considered.
In a second phase of our experiments, intersections between the results provided by the different individual methods have been performed. It is clear that valuable sets of entries, owning an insufficient, in some cases rather bad, individual CS, can be, however, extracted if they occur as a combination of several methods. In this way, using the same threshold, the amount of synsets attached to Spanish entries has increased. It must be pointed out that some of these new connections correspond to highly polysemous words.
The approach seems to be extremely promising, attaching up to 75% of reachable Spanish nouns and 55% of reachable WN1.5 synsets. Currently we are performing complementary experiments for extending the approach for covering other lexical sources, specially wider-coverage bilinguals.
Other lines of research we are following by now include: 1) dealing with mismatches, i.e., when, coming from different methods/sources, a Spanish word is assigned to different synsets. If in the former case the overall CS increases, in the latter case it should decrease. 2) A fine grained cross-comparison of methods and sources (intersections of more than two classes, decomposition of classes into finer ones, etc.) will be performed to obtain a more precise classification and CS assignment. 3) We are trying to obtain an empirical method for CS calculation of intersections. Methods based on bayesian inference networks or quasi-probabilistic approaches have been tested, giving promising results.
Figure 2: Monosemic-2 Criteria
Figure 3: Monosemic-3 Criteria
Figure 4: Monosemic-4 Criteria. Several SWs have different translations; EWs also have several translations.
4 Diccionario Vox/Harraps Esencial Español/Inglés - Inglés/Español. Biblograf S.A., Barcelona, 1992.
5 DGILE: Diccionario General Ilustrado de la Lengua Española - Vox - M. Alvar (ed). Biblograf S.A., Barcelona, 1987.
6 Connections can be word/word or word/synset. When there are synsets involved the connections are Spanish-word/synset (except for WordNet itself), otherwise Spanish-word/English-word.

                              English nouns   Spanish nouns   Synsets   Connections
WordNet1.5                    87,642          -               60,557    107,424
Spanish/English               11,467          12,370          -         19,443
English/Spanish               10,739          10,549          -         16,324
HBil                          15,848          14,880          -         28,131
Maximum Reachable Coverage    12,665          13,208          19,383    66,258
  % of WordNet                14%             -               32%       -
  % of bilingual              80%             90%             -         -

Table 1: Dictionary Statistics
The results for the intersection criterion are shown in Table 4.

#Words   %ok     %ko    %hypo
2        81.39   3.48   1.51
3        91.89   0.0    5.4
4        94.4    0.0    0.0

Table 4: Results for the Intersection Criteria
Table 2: Results of class methods

Criterion   #Links   #Synsets   #Words   %ok   %ko   %hypo   %hyper   %near
inters      1,256    966        767      79    4     8       0        9
parent      1,432    1,210      788      51    3     30      0        14
brother     2,202    1,645      672      57    5     22      0        16
distant     1,846    1,522      866      60    4     23      0        13

Table 3: Overall results for the Structural Criteria
Table 6: Results combining methods (size and %ok of each pairwise intersection of the discarded methods cd1, cd2, cd3, dist, fath, p1, p2, p3, p4 and bro)

Table 7: Overall Figures of SpWNs
2 EuroWordNet: Project LE-4003 of the EU.
3 Initially three languages, apart from English, were involved: Dutch, Italian and Spanish. The project has recently been extended to cover French and German.
References
[Ageno et al. 94] A. Ageno, I. Castellón, F. Ribas, G. Rigau, H. Rodríguez, and A. Samiotou. TGE: Tlink Generation Environment. In Proceedings of the 15th International Conference on Computational Linguistics (Coling'94), Kyoto, Japan, 1994.
[Agirre & Rigau 95] E. Agirre and G. Rigau. A Proposal for Word Sense Disambiguation using Conceptual Distance. In Proceedings of the International Conference on Recent Advances in Natural Language Processing, Tzigov Chark, Bulgaria, 1995.
[Agirre & Rigau 96] E. Agirre and G. Rigau. Word Sense Disambiguation using Conceptual Density. In Proceedings of the 16th International Conference on Computational Linguistics (Coling'96), Copenhagen, Denmark, 1996.
[Agirre et al. 94] E. Agirre, X. Arregi, X. Artola, A. Díaz de Ilarraza, and K. Sarasola. Conceptual Distance and Automatic Spelling Correction. In Proceedings of the Workshop on Computational Linguistics for Speech and Handwriting Recognition, Leeds, UK, 1994.
[Bateman 90] J. Bateman. Upper modeling: Organizing knowledge for Natural Language Processing. In Proceedings of the Fifth International Workshop on Natural Language Generation, Pittsburgh, PA, 1990.
[Climent et al. 96] S. Climent, H. Rodríguez, and J. Gonzalo. Definition of the links and subsets for nouns of the EuroWordNet Project. Deliverable 005 WP3.1 EuroWordNet, LE-4003. Technical report, 1996.
[Knight & Luk 94] K. Knight and S. Luk. Building a Large-Scale Knowledge Base for Machine Translation. In Proceedings of the American Association for Artificial Intelligence, 1994.
[Lenat 95] D. Lenat. Steps to Sharing Knowledge. In N. Mars, editor, Towards Very Large Knowledge Bases. IOS Press, 1995.
[Miller 90] G. Miller. Five papers on WordNet. Special Issue of International Journal of Lexicography, 3(4), 1990.
[Nirenburg & Defrise 93] S. Nirenburg and C. Defrise. Aspects of text meaning. In Semantics and the Lexicon. Kluwer Academic Publishers, Dordrecht, 1993.
[Okumura & Hovy 94] A. Okumura and E. Hovy. Building Japanese-English Dictionary based on Ontology for Machine Translation. In Proceedings of the ARPA Conference on Human Language Technology, Princeton, 1994.
[Rada et al. 89] R. Rada, H. Mili, E. Bicknell, and M. Blettner. Development and Application of a Metric on Semantic Nets. IEEE Transactions on Systems, Man and Cybernetics, 19(1):17-30, 1989.
[Rigau & Agirre 95] G. Rigau and E. Agirre. Disambiguating bilingual nominal entries against WordNet. In Proceedings of The Computational Lexicon Workshop, Seventh European Summer School in Logic, Language and Information (ESSLLI'95), pages 71-82, Barcelona, Spain, 1995.
[Rigau 95] G. Rigau. An Experiment on Automatic Semantic Tagging of Dictionary Senses. LSI-95-31-R. Technical report, 1995.
[Rigau et al. 95] G. Rigau, H. Rodríguez, and J. Turmo. Automatically extracting Translation Links using a wide coverage semantic taxonomy. In Proceedings of the Fifteenth International Conference AI'95, Language Engineering '95, Montpellier, France, 1995.
[Rigau et al. 97] G. Rigau, J. Atserias, and E. Agirre. Combining Unsupervised Lexical Knowledge Methods for Word Sense Disambiguation. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL'97), pages 48-55, Madrid, Spain, 1997.
[Sussna 93] M. Sussna. Word Sense Disambiguation for Free-text Indexing Using a Massive Semantic Network. In Proceedings of the Second International Conference on Information and Knowledge Management, Arlington, Virginia, 1993.
[Wilks et al. 93] Y. Wilks, D. Fass, C. Guo, J. McDonald, T. Plate, and B. Slator. Providing Machine Tractable Dictionary Tools. In J. Pustejovsky, editor, Semantics and the Lexicon, pages 341-401. Kluwer Academic Publishers, Dordrecht, 1993.
[Yokoi 95] T. Yokoi. The Impact of the EDR Electronic Dictionary on Very Large Knowledge Bases. In N. Mars, editor, Towards Very Large Knowledge Bases. IOS Press, 1995.
| [] |
[
"Local Space-Time Smoothing for Version Controlled Documents",
"Local Space-Time Smoothing for Version Controlled Documents"
] | [
"Seungyeon Kim \nGeorgia Institute of Technology\nGeorgia Institute of Technology\n\n",
"Guy Lebanon \nGeorgia Institute of Technology\nGeorgia Institute of Technology\n\n"
] | [
"Georgia Institute of Technology\nGeorgia Institute of Technology\n",
"Georgia Institute of Technology\nGeorgia Institute of Technology\n"
] | [
"Coling 2010: Poster Volume"
] | Unlike static documents, version controlled documents are continuously edited by one or more authors. Such collaborative revision process makes traditional modeling and visualization techniques inappropriate. In this paper we propose a new representation based on local spacetime smoothing that captures important revision patterns. We demonstrate the applicability of our framework using experiments on synthetic and real-world data. | null | null | 811,939 | 1003.1410 | 7183ea8eb2afe86a5d5453365010c877edcb9dc2 |
Local Space-Time Smoothing for Version Controlled Documents
August 2010
Seungyeon Kim
Georgia Institute of Technology
Georgia Institute of Technology
Guy Lebanon
Georgia Institute of Technology
Georgia Institute of Technology
Local Space-Time Smoothing for Version Controlled Documents
Coling 2010: Poster Volume
BeijingAugust 2010
Unlike static documents, version controlled documents are continuously edited by one or more authors. Such collaborative revision process makes traditional modeling and visualization techniques inappropriate. In this paper we propose a new representation based on local spacetime smoothing that captures important revision patterns. We demonstrate the applicability of our framework using experiments on synthetic and real-world data.
Introduction
Most computational linguistics studies concentrate on modeling or analyzing documents as sequences of words. In this paper we consider modeling and visualizing version controlled documents which is the authoring process leading to the final word sequence. In particular, we focus on documents whose authoring process naturally segments into consecutive versions. The revisions, as the differences between consecutive versions are often called, may be authored by a single author or by multiple authors working collaboratively.
One popular way to keep track of version controlled documents is using a version control system such as CVS or Subversion (SVN). This is often the case with books or with large computer code projects. In other cases, more specialized computational infrastructure may be available, as is the case with the authoring API of Wikipedia.org, Slashdot.com, and Google Wave. Accessing such API provides information about what each revision contains, when was it submitted, and who edited it. In any case, we formally consider a version controlled document as a sequence of documents d 1 , . . . , d l indexed by their revision number where d i typically contains some locally concentrated additions or deletions, as compared to d i−1 .
In this paper we develop a continuous representation of version controlled documents that generalizes the locally weighted bag of words representation (Lebanon et al., 2007). The representation smooths the sequence of version controlled documents across two axes: time t and space s. The time axis t represents the revision and the space axis s represents document position. The smoothing results in a continuous map from a space-time domain to the simplex of term frequency vectors γ : Ω → P_V where Ω ⊂ R², and

P_V = { w ∈ R^{|V|} : w_i ≥ 0, Σ_{i=1}^{|V|} w_i = 1 }.    (1)
The mapping above (V is the vocabulary) captures the variation in the local distribution of word content across time and space. Thus [γ(s, t)]_w is the (smoothed) probability of observing word w in space s (document position) and time t (version). Geometrically, γ realizes a divergence-free vector field (since Σ_w [γ(s, t)]_w = 1, γ has zero divergence) over the space-time domain Ω. We consider the following four version controlled document analysis tasks. The first task is visualizing word-content changes with respect to space (how quickly the document changes its content), time (how much the current version differs from the previous one), or mixed space-time. The second task is detecting sharp transitions or edges in word content. The third task is concerned with segmenting the space-time domain into a finite partition reflecting word content. The fourth task is predicting future revisions. Our main tool in addressing tasks 1-4 above is to analyze the values of the vector field γ and its first order derivative fields

∇γ = (γ_s, γ_t).    (2)
With no loss of generality we identify the vocabulary V with positive integers {1, . . . , V } and represent a word w ∈ V by a unit vector 1 (all zero except for 1 at the w-component)
e(w) = (0, . . . , 0, 1, 0, . . . , 0) w ∈ V. (3)
We extend this definition to word sequences thus representing documents w 1 , . . . , w N (w i ∈ V ) as sequences of V -dimensional vectors e(w 1 ), . . . , e(w N ) . Similarly, a version controlled document is sequence of documents (3) we represent a version controlled document as the array
d (1) , . . . , d (l) of potentially different lengths d (j) = w (j) 1 , . . . , w (j) N (j) . Usinge(w (1) 1 ), . . . , e(w (1) N (1) ) . . . . . . . . . e(w (l) 1 ), . . . , e(w (l) N (l) )(4)
where columns and rows correspond to space (document position) and time (versions). The array (4) of high dimensional vectors represents the version controlled document without any loss of information. Nevertheless the high dimensionality of V suggests we smooth the vectors in (4) with neighboring vectors in order to better capture the local word content. Specifically we convolve each component of (4) with a 2-D smoothing kernel K h to obtain a smooth vector field γ over space-time (Wand and Jones, 1995) e.g.,
γ(s, t) = s t K h (s − s , t − t )e(w (t ) s ) K h (x, y) ∝ exp −(x 2 + y 2 )/(2h 2 ) .(5)
Thus as (s, t) vary over a continuous domain Ω ⊂ R 2 , γ(s, t), which is a weighted combination of neighboring unit vectors, traces a continuous surface in P V ⊂ R V . Assuming that the kernel K h is a normalized density it can be shown that 1 Note the slight abuse of notation as V represents both a set of words and an integer V = {1, . . . , V } with V = |V |. γ(s, t) is a non-negative normalized vector i.e., γ(s, t) ∈ P V (see (1) for a definition of P V ) measuring the local distribution of words around the space-time location (s, t). It thus extends the concept of lowbow (locally weighted bag of words) introduced in (Lebanon et al., 2007) from single documents to version controlled documents.
One difficulty with the above scheme is that the document versions d 1 , . . . , d l may be of different lengths. We consider two ways to resolve this issue. The first pads shorter document versions with zero vectors as needed. We refer to the resulting representation γ as the non-normalized representation. The second approach normalizes all document versions to a common length, say l j=1 N (j). That is each word in the first document is expanded into j =1 N (j) words, each word in the second document is expanded into j =2 N (j) words etc. We refer to the resulting representation γ as the normalized representation.
The non-normalized representation has the advantage of conveying absolute lengths. For example, it makes it possible to track how different portions of the document grow or shrink (in terms of number of words) with the version number. The normalized representation has the advantage of conveying lengths relative to the document length. For example, it makes it possible to track how different portions of the document grow or shrink with the version number relative to the total document length. In either case, the space-time domain Ω on which γ is defined (5) is a two dimensional rectangular domain Ω = [0, I] × [0, J].
Before proceeding to examine how γ may be used in the four tasks described in Section 1 we demonstrate our framework with a simple low dimensional example. Assuming a vocabulary of two words V = {1, 2} we can visualize γ by displaying its first component as a grayscale image (since [γ(s, t)] 2 = 1 − [γ(s, t)] 1 the second component is redundant). Specifically, we created a version controlled document with three contiguous segments whose {1, 2} words were sampled from Bernoulli distributions with parameters 0.3 (first segment), 0.7 (second segment), and 0.5 (third segment). That is, the probability of getting 1 is highest for the second segment, equal for the third and lowest for the first segment. The initial lengths of the segments were for more details). The left panel displays the first component of (4) (non-smoothed array of unit vectors corresponding to words). The second and third panels display [γ(s, t)]1 for the non-normalized and normalized representations respectively. The fourth panel displays the gradient vector field (γs(s, t),γt(s, t)) (contour levels represent the gradient magnitude). The black portions of the first two panels correspond to zero padding due to unequal lengths of the different versions.
30, 40 and 120 words with the first segment increasing and the third segment decreasing at half the rate of the first segment with each revision. The length of the second segment was constant across the different versions. Figure 1 displays the nonsmoothed ragged array (4) (left), the nonnormalized [γ(s, t)] 1 (middle left) and the normalized [γ(s, t)] 1 (middle right).
While the left panel doesn't distinguish much between the second and third segment the two smoothed representations display a nice segmentation of the space-time domain into three segments, each with roughly uniform values. The non-normalized representation (middle left) makes it easy to see that the total length of the version controlled document is increasing but it is not easy to judge what happens to the relative sizes of the three segments. The normalized representation (middle right) makes it easy to see that the first segment increases in size, the second is constant, and the third decreases in size. It is also possible to notice that the growth rate of the first segment is higher than the decay rate of the third.
Visualizing Change in Space-Time
We apply the space-time representation to four tasks. The first task, visualizing change, is described in this section. The remaining three tasks are described in the next three sections.
The space-time domain Ω represents the union of all document versions and all document positions. Some parts of Ω are more homogeneous and some are less in terms of their local word distribution. Locations in Ω where the local word distribution substantially diverges from its neighbors correspond to sharp content transitions. On the other hand, locations whose word distribution is more or less constant correspond to slow content variation.
We distinguish between three different types of changes. The first occurs when the word content changes substantially between neighboring document positions within a certain document version. As an example consider a document location whose content shifts from high level introductory motivation to a detailed technical description. Such change is represented by
∥γ_s(s, t)∥² = Σ_{w=1}^{V} ( ∂[γ(s, t)]_w / ∂s )².    (6)
A second type of change occurs when a certain document position undergoes substantial change in local word distribution across neighboring versions. An example is erroneous content in one version being heavily revised in the next version. Such change along the time axis corresponds to the magnitude of
∥γ_t(s, t)∥² = Σ_{w=1}^{V} ( ∂[γ(s, t)]_w / ∂t )².    (7)
Expression (6) may be used to measure the instantaneous rate of change in the local word distribution. Alternatively, integrating (6) provides a global measure of change
h(s) = ∫ ∥γ_s(s, t)∥ dt,   g(t) = ∫ ∥γ_t(s, t)∥ ds
with h(s) describing the total amount of spatial change across all revisions and g(t) describing the total amount of version change across different document positions. h(s) may be used to detect document regions undergoing repeated substantial content revisions and g(t) may be used to detect revisions in which substantial content has been modified across the entire document.
We conclude with the integrated directional derivative

∫_0^1 ∥ α̇_s(r) γ_s(α(r)) + α̇_t(r) γ_t(α(r)) ∥ dr    (8)

where α : [0, 1] → Ω is a parameterized curve in space-time and α̇ is its tangent vector. Expression (8) may be used to measure change along a dynamically moving document anchor such as the boundary between two book chapters. The space coordinate of such an anchor shifts with the version number (due to the addition and removal of content across versions) and so integrating the gradient across one of the two axes as in (7) is not appropriate. Defining α(r) to be a parameterized curve in space-time realizing the anchor positions (s, t) ∈ Ω across multiple revisions, (8) measures the amount of change at the anchor point.
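Assuming a gamma array produced as in the smoothing sketch above, the change measures of Equations (6)-(7) and the integrated quantities h(s) and g(t) could be approximated with finite differences, for example:

```python
# Sketch of the change fields and their integrals (illustrative assumptions only).
import numpy as np

def change_fields(gamma):
    """gamma: (T, S, |V|) array of local word distributions."""
    # partial derivatives of every word component along time (axis 0) and space (axis 1)
    d_t, d_s, _ = np.gradient(gamma)
    space_change = np.sqrt((d_s ** 2).sum(axis=-1))   # ||gamma_s(s, t)||, cf. Eq. (6)
    time_change = np.sqrt((d_t ** 2).sum(axis=-1))    # ||gamma_t(s, t)||, cf. Eq. (7)
    h = space_change.sum(axis=0)   # h(s): total spatial change across all revisions
    g = time_change.sum(axis=1)    # g(t): total change introduced by each revision
    return space_change, time_change, h, g
```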
Experiments
The right panel of Figure 1 shows the gradient vector field corresponding to the synthetic version controlled document described in the previous section. As expected, it tends to be orthogonal to the segment boundaries. Its magnitude is displayed by the contour lines which show highest magnitudes around segment boundaries. Figure 2 shows the norm ∥γ_s(s, t)∥ (left), ∥γ_t(s, t)∥ (middle left) and the local maxima of ∥γ_s(s, t)∥ + ∥γ_t(s, t)∥ (middle right) for a portion of the version controlled Wikipedia Religion article. The first panel shows the amount of change in local word distribution within documents. High values correspond to boundaries between sections, topics or other document segments. The second panel shows the amount of change as one version is replaced with another. It shows which revisions change the word distributions substantially and which result in a relatively minor change. The third panel shows only the local maxima which correspond to edges between topics or segments (vertical lines) or revisions (horizontal lines).
Edge Detection
In many cases documents may be divided to semantically coherent segments. Examples of text segments include individual news stories in streaming broadcast news transcription, sections in article or books, and individual messages in a discussion board or an email trail. For non-version controlled documents finding the text segments is equivalent to finding the boundaries or edges between consecutive segments. See (Hearst, 1997;Beeferman et al., 1999;McCallum et al., 2000) for several recent studies in this area. Things get a bit more complicated in the case of version controlled documents. Segments, and their boundaries exist in each version. As in case of image processing, we may view segment boundaries as edges in the space-time domain Ω. These boundaries separate the segments from each other, much like borders separate countries Assuming all edges are correctly identified, we can easily identify the segments as the interior points of the closed boundaries. In general, however, attempts to identify segment boundaries or edges will only be partially successful. As a result predicted edges in practice are not closed and do not lead to interior segments. We consider now the task of predicting segment boundaries or edges in Ω and postpone the task of predicting a segmentation to the next section.
Edges, or transitions between segments, correspond to abrupt changes in the local word distribution. We thus characterize them as points in Ω having high gradient value. In particular, we distinguish between vertical edges (transitions across document positions), horizontal edges (transitions across versions), and diagonal edges (transitions across both document position and version). These three types of edges may be diagnosed based on the magnitudes of γ_s, γ_t, and α̇_1 γ_s + α̇_2 γ_t, respectively.
Experiments
Besides the synthetic data results in Figure 2, we conducted edge detection experiments on six different real world datasets. Five datasets are Wikipedia.com articles: Atlanta, Religion, Language, European Union, and Beijing. Religion and European Union are version controlled documents with relatively frequent updates, while Atlanta, language, and Beijing have less frequent changes. The sixth dataset is the Google Wave Amazon Kindle FAQ which is a less structured version controlled document.
Preprocessing included removing html tags and pictures, word stemming, stop-word removal, and removing any non-alphabetic characters (numbers and punctuation). The section heading information of Wikipedia and the author of each posting in Google Wave were used as ground truth for segment boundaries. This information was separated from the dataset and was used for training and evaluation (on the testing set).

Figure 4: The space-time domain Ω was divided into a grid with each cell labeled edge (y = 1) or no edge (y = 0) depending on whether it contained any edges. Method a corresponds to a predictor that always selects the majority class. Method b corresponds to the TextTiling text segmentation algorithm (Hearst, 1997) without paragraph boundary information. Method c corresponds to a logistic regression classifier whose feature set is composed of statistical summaries (mean, median, max, min) of γ_s(s, t) within the grid cell in question as well as neighboring cells.

Figure 3 displays gradient information, local maxima, and ground truth segment boundaries for the version controlled Wikipedia articles Religion and Atlanta. The local gradient maxima nicely match the segment boundaries, which led us to consider training a logistic regression classifier on a feature set composed of gradient value statistics (min, max, mean, median) of γ_s(s, t) in the appropriate location as well as its neighbors (the space-time domain Ω was divided into a finite grid where each cell either contained an edge (y = 1) or did not (y = 0)). The table in Figure 4 displays the test set accuracy and F1 measure of three predictors: our logistic regression (method c) as well as two baselines: predicting edge/no-edge based on the marginal p(y) distribution (method a) and TextTiling (method b) (Hearst, 1997), which is a popular text segmentation algorithm. Since we do not assume paragraph information in our experiment, we ignored this component and considered the document as a sequence with w = 20 and 29 minimum depth gaps parameters (see (Hearst, 1997)). We conclude from the figure that the gradient information leads to better prediction than TextTiling (on both accuracy and F1 measure).
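A hedged sketch of this edge classifier is shown below; the grid-cell size, the neighbor window, and the use of scikit-learn are illustrative choices rather than the exact experimental setup.

```python
# Sketch of the grid-cell edge classifier (illustrative assumptions only).
import numpy as np
from sklearn.linear_model import LogisticRegression

def cell_features(space_change, t0, s0, size=10):
    """Summary statistics of ||gamma_s|| in a grid cell and its immediate neighbors."""
    feats = []
    for dt in (-1, 0, 1):
        for ds in (-1, 0, 1):
            cell = space_change[max(0, (t0 + dt) * size):(t0 + dt + 1) * size,
                                max(0, (s0 + ds) * size):(s0 + ds + 1) * size]
            if cell.size == 0:
                feats += [0.0, 0.0, 0.0, 0.0]
            else:
                feats += [cell.mean(), np.median(cell), cell.max(), cell.min()]
    return feats

def train_edge_classifier(space_change, labels, size=10):
    """labels[t0, s0] = 1 if the (t0, s0) grid cell contains a segment boundary."""
    T, S = labels.shape
    X = [cell_features(space_change, t0, s0, size) for t0 in range(T) for s0 in range(S)]
    y = labels.reshape(-1)
    return LogisticRegression(max_iter=1000).fit(np.array(X), y)
```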
Segmentation
As mentioned in the previous section, predicting edges may not result in closed boundaries. It is possible to analyze the location and direction of the predicted edges and aggregate them into a sequence of closed boundaries surrounding the segments. We take a different approach and partition points in Ω to k distinct values or segments based on local word content and space-time proximity.
For two points (s_1, t_1), (s_2, t_2) ∈ Ω to be in the same segment we expect γ(s_1, t_1) to be similar to γ(s_2, t_2) and for (s_1, t_1) to be close to (s_2, t_2). The first condition asserts that the two locations discuss the same topic. The second condition asserts that the two locations are not too far from each other in the space-time domain. More specifically, we propose to segment Ω by clustering its points based on the following geometry
d((s_1, t_1), (s_2, t_2)) = d_H(γ(s_1, t_1), γ(s_2, t_2)) + c_1 (s_1 − s_2)² + c_2 (t_1 − t_2)²    (9)

where d_H : P_V × P_V → R is the Hellinger distance

d_H²(u, v) = Σ_{i=1}^{V} ( √u_i − √v_i )².    (10)
The weights c_1, c_2 are used to balance the contributions of word content similarity with the similarity in time and space.

Experiments

Figure 5 displays the ground truth segment boundaries and the segmentation results obtained by applying k-means clustering (k = 11) to the metric (9). The figure shows that the predicted segments largely match actual edges in the documents even though no edge or gradient information was used in the segmentation process.
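The clustering described above could be sketched roughly as below. Embedding each point as the square root of its word distribution together with scaled coordinates makes Euclidean distance combine squared Hellinger distance with the space-time proximity terms, which only approximates the metric in (9); the weight values and the use of scikit-learn's KMeans are assumptions made for the example.

```python
# Sketch of the space-time segmentation of Eqs. (9)-(10) (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

def segment(gamma, k=11, c1=1.0, c2=1.0):
    """gamma: (T, S, |V|) smoothed field. Returns a (T, S) array of segment ids."""
    T, S, V = gamma.shape
    # sqrt-embedding: Euclidean distance between sqrt(gamma) equals Hellinger distance
    sqrt_gamma = np.sqrt(gamma).reshape(T * S, V)
    tt, ss = np.meshgrid(np.arange(T), np.arange(S), indexing="ij")
    coords = np.stack([np.sqrt(c2) * tt.ravel(), np.sqrt(c1) * ss.ravel()], axis=1)
    feats = np.hstack([sqrt_gamma, coords])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    return labels.reshape(T, S)
```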
Predicting Future Operations
The fourth and final task is predicting a future revision d_{l+1} based on the smoothed representation of the present and past versions d_1, …, d_l. In terms of Ω, this means predicting features associated with γ(s, t), t ≥ t′, based on γ(s, t), t < t′.
Experiments
We concentrate on predicting whether Wikipedia edits are reversed in the next revision. This action, marked by a label UNDO or REVERT in the Wikipedia API, is important for preventing content abuse or removing immature content (by predicting ahead of time suspicious revisions). We predict whether a version will undergo UNDO in the next version using a support vector machine based on statistical summaries (mean, median, min, max) of the following feature set: γ_s(s, t), ∥γ_s(s, t)∥, γ_t(s, t), ∥γ_t(s, t)∥, g(t), and h(s). Figure 6 shows the test set error and F1 measure for the logistic regression based on the smoothed space-time representation (method c), as well as two baselines. The first baseline (method a) predicts the majority class and the second baseline (method b) is a logistic regression based on the term frequency content of the current test version.
Related Work
While document analysis is a very active research area, there has been relatively little work on examining version controlled documents. Our approach is the first to consider version controlled documents as continuous mappings from a spacetime domain to the space of local word distributions. It extends the ideas in (Lebanon et al., 2007) of using kernel smoothing to create a continuous representation of documents. In fact, our framework generalizes (Lebanon et al., 2007) as it reverts to it in the case of a single revision.
Other approaches to sequential analysis of documents concentrate on discrete spaces and discrete models, with the possible extension of (Wang et al., 2009). Related papers on segmentation and sequential document analysis are (Hearst, 1997; Beeferman et al., 1999; McCallum et al., 2000), with (Hearst, 1997) being the closest in spirit to our approach. An influential model for topic modeling within and across documents is latent Dirichlet allocation (Blei et al., 2003; Blei and Lafferty, 2006). Our approach differs in being fully non-parametric and in that it does not require iterative parametric estimation or integration. The interpretation of local word smoothing as a non-parametric statistical estimator (Lebanon et al., 2007) may be extended to our paper in a straightforward manner.

Figure 6: Error rate and F1 measure over a held-out test set of predicting a future UNDO operation in Wikipedia articles. Method a corresponds to a predictor that always selects the majority class. Method b corresponds to a logistic regression based on the term frequency vector of the current version. Method c corresponds to a logistic regression that uses summaries (mean, median, max, min) of γ_s(s, t), γ_t(s, t), g(t), and h(s).
Several attempts have been made to visualize themes and topics in documents, either by keeping track of the word distribution or by dimensionality reduction techniques e.g., (Fortuna et al., 2005;Havre et al., 2002;Spoerri, 1993;Thomas and Cook, 2005). Such studies tend to visualize a corpus of unrelated documents as opposed to ordered collections of revisions which we explore.
Summary and Discussion
The task of analyzing and visualizing version controlled document is an important one. It allows external control and monitoring of collaboratively authored resources such as Wikipedia, Google Wave, and CVS or SVN documents. Our framework is the first to develop analysis and visualization tools in this setting. It presents a new representation for version controlled documents that uses local smoothing to map a space-time domain Ω to the simplex of tf vectors P V . We demonstrate the applicability of the representation for four tasks: visualizing change, predicting edges, segmentation, and predicting future revision operations.
Visualizing changes may highlight significant structural changes for the benefit of users and help the collaborative authoring process. Improved edge prediction and text segmentation may assist in discovering structural or semantic changes and their evolution with the authoring process. Predicting future operation may assist authors as well as prevent abuse in coauthoring projects such as Wikipedia.
The experiments described in this paper were conducted on synthetic, Wikipedia and Google Wave articles. They show that the proposed formalism achieves good performance both qualitatively and quantitatively as compared to standard baseline algorithms.
It is intriguing to consider the similarity between our representation and image processing. Predicting segment boundaries are similar to edge detection in images. Segmenting version controlled documents may be reduced to image segmentation. Predicting future operations is similar to completing image parts based on the remaining pixels and a statistical model. Due to its long and successful history, image processing is a good candidate for providing useful tools for version controlled document analysis. Our framework facilitates this analogy and we believe is likely to result in novel models and analysis tools inspired by current image processing paradigms. A few potential examples are wavelet filtering, image compression, and statistical models such as Markov random fields.
Figure 1: Four space-time representations of a simple synthetic version controlled document over V = {1, 2} (see text).

Figure 2: Gradient and edges for a portion of the version controlled Wikipedia Religion article. The left panel displays γ_s(s, t)^2 (amount of change across document locations for different versions). The second panel displays γ_t(s, t)^2 (amount of change across versions for different document positions). The third panel displays the local maxima of γ_s(s, t)^2 + γ_t(s, t)^2, which correspond to potential edges, either vertical lines (section and subsection boundaries) or horizontal lines (between substantial revisions). The fourth panel displays boundaries of sections and subsections as black and gray lines respectively.

Figure 3: Gradient and edges of a portion of the version controlled Atlanta Wikipedia article (top row) and the Google Wave Amazon Kindle FAQ (bottom row). The left column displays the magnitude of the gradient in both space and time, γ_s(s, t)^2 + γ_t(s, t)^2. The middle column displays the local maxima of the gradient magnitude (left column). The right column displays the actual segment boundaries as vertical lines (section headings for Wikipedia and author change in Google Wave). The gradient maxima corresponding to vertical lines in the middle column match the Wikipedia section boundaries nicely. The gradient maxima corresponding to horizontal lines in the middle column correspond nicely to major revisions indicated by discontinuities in the location of the section boundaries.

Figure 4: Test set error rate and F1 measure for edge prediction (section boundaries in Wikipedia articles and author change in Google Wave).

Figure 5: Predicted segmentation (top) and ground truth segment boundaries (bottom) of portions of the version controlled Wikipedia articles Religion (left), Atlanta (middle) and the Google Wave Amazon Kindle FAQ (right). The predicted segments match the ground truth segment boundaries. Note that the first 100 revisions are used in the Google Wave result. The proportion of the segments that appeared at the beginning keeps decreasing as the number of revisions increases and new segments appear.
Acknowledgements

The research described in this paper was funded in part by NSF grant IIS-0746853.
References

Beeferman, D., A. Berger, and J. D. Lafferty. 1999. Statistical models for text segmentation. Machine Learning, 34(1-3):177-210.

Blei, D. and J. Lafferty. 2006. Dynamic topic models. In Proc. of the International Conference on Machine Learning.

Blei, D., A. Ng, and M. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.

Fortuna, B., M. Grobelnik, and D. Mladenic. 2005. Visualization of text document corpus. Informatica, 29:497-502.

Havre, S., E. Hetzler, P. Whitney, and L. Nowell. 2002. Themeriver: Visualizing thematic changes in large document collections. IEEE Transactions on Visualization and Computer Graphics, 8(1).

Hearst, M. A. 1997. Texttiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33-64.

Lebanon, G., Y. Mao, and J. Dillon. 2007. The locally weighted bag of words framework for documents. Journal of Machine Learning Research, 8:2405-2441, October.

McCallum, A., D. Freitag, and F. Pereira. 2000. Maximum entropy Markov models for information extraction and segmentation. In Proc. of the International Conference on Machine Learning.

Spoerri, A. 1993. InfoCrystal: A visual tool for information retrieval. In Proc. of IEEE Visualization.

Thomas, J. J. and K. A. Cook, editors. 2005. Illuminating the Path: The Research and Development Agenda for Visual Analytics. IEEE Computer Society.

Wand, M. P. and M. C. Jones. 1995. Kernel Smoothing. Chapman and Hall/CRC.

Wang, C., D. Blei, and D. Heckerman. 2009. Continuous time dynamic topic models. In Proc. of Uncertainty in Artificial Intelligence.
| [] |
[
"The Study of the Application of a Keywords-based Chatbot System on the Teaching of Foreign Languages",
"The Study of the Application of a Keywords-based Chatbot System on the Teaching of Foreign Languages"
] | [
"Jiyou Jia jiyou.jia@student.uni-augsburg.de \nInstitute for Interdisciplinary Informatics\nUniversity of Augsburg\nGermany\n"
] | [
"Institute for Interdisciplinary Informatics\nUniversity of Augsburg\nGermany"
] | [] | This paper reports the findings of a study conducted on the application of an on-line human-computer dialog system with natural language (chatbot) on the teaching of foreign languages. A keywords-based human-computer dialog system makes it possible that the user could chat with the computer using a natural language, i.e. in English or in German to some extent. So an experiment has been made using this system online to work as a chat partner with the users learning the foreign languages. Dialogs between the users and the chatbot are collected. Findings indicate that the dialogs between the human and the computer are mostly very short because the user finds the responses from the computer are mostly repeated and irrelevant with the topics and context and the program doesn't understand the language at all. With analysis of the keywords or pattern-matching mechanism used in this chatbot it can be concluded that this kind of system can not work as a teaching assistant program in foreign language learning. | null | [
"https://arxiv.org/pdf/cs/0310018v1.pdf"
] | 7,619,979 | cs/0310018 | fb2b1cd011094ac22dea0759ce518b127c99ed67 |
The Study of the Application of a Keywords-based Chatbot System on the Teaching of Foreign Languages
Jiyou Jia jiyou.jia@student.uni-augsburg.de
Institute for Interdisciplinary Informatics
University of Augsburg
Germany
The Study of the Application of a Keywords-based Chatbot System on the Teaching of Foreign Languages
1
This paper reports the findings of a study conducted on the application of an on-line human-computer dialog system with natural language (chatbot) on the teaching of foreign languages. A keywords-based human-computer dialog system makes it possible that the user could chat with the computer using a natural language, i.e. in English or in German to some extent. So an experiment has been made using this system online to work as a chat partner with the users learning the foreign languages. Dialogs between the users and the chatbot are collected. Findings indicate that the dialogs between the human and the computer are mostly very short because the user finds the responses from the computer are mostly repeated and irrelevant with the topics and context and the program doesn't understand the language at all. With analysis of the keywords or pattern-matching mechanism used in this chatbot it can be concluded that this kind of system can not work as a teaching assistant program in foreign language learning.
Introduction
Since Joseph Weizenbaum programmed ELIZA, an early natural language dialog system between human and machine, to work as a psychiatrist nearly forty years ago (Weizenbaum 1965), many similar programs have been developed in this field of artificial intelligence. For example, ALICEBOT (http://www.alicebot.org), which uses a technique similar to ELIZA's, i.e., the pattern matching mechanism, has won the annual Loebner Prize (http://www.loebner.net/Prizef/loebner-prize.html) twice (2000, 2001); the prize declares to "advance AI and serve as a tool to measure the state of the art" (Loebner). Human-computer dialog systems have also been applied in many fields such as sales assistance, information retrieval, and question answering on a given domain. But how about using such a system as a chatting partner for those who learn its natural language as a foreign language? ALICEBOT is an open source project under GNU (http://www.gnu.org) and can therefore be freely downloaded and installed as an HTTP server to supply the chatting service for non-commercial use. This gave us the opportunity to conduct the following experiment.
Experiment
The chatbot system is installed on an HTTP server in China, and relevant categories (a core concept of ALICEBOT, see the analysis later) about English learning and China are added to the knowledge base of the system, so that it can give some information on these subjects and the user can chat with it in natural language (English or German) to some extent. The input method is typing via the keyboard. The output can not only be shown as text on the screen, but also spoken via the speech synthesis technology of Microsoft Agent.

The users are mostly students in Chinese universities and colleges who can normally read and write fluent English. They got to know this website through advertisements in 10 well-known BBS (bulletin board systems) of the universities, claiming that "this system is a learning partner of foreign languages". Such an advertisement consists of only one very short paragraph introducing the system. Normally it can be viewed only for about 2 days before it is deleted or covered over by other posts.
Findings
How do the users communicate with this kind of Chatbot? How can the system help them learn the foreign languages? We make a summarizing evaluation (Tergan 2000) with the help of the automatically produced log files by the system from May the 15th 2002 to July 15th 2002. In the log files the IP address of the client machine, date and time of the dialog, as well as the spoken texts (inputs and outputs) are recorded.
Number of the users
Here we assume that users who come to the system with the same IP address are the same person, because the experiment system does not require a unique identification for a user, but automatically records the IP address of the client machine as the label of the user. In the following discussion we treat the number of IPs as the number of users. The number of users is thus 1256. Compared to the total number of clicks, 4600, in this period (the total number of clicks on the start page of this website), we can conclude that some users visited the system more than once.
The visiting frequency of the users
The visiting frequency of this chatbot is difficult to define. Sometimes a user visits only the starting page and then leaves; sometimes a user visits the website and chats with the chatbot many times in the same day. For the sake of simplicity we assume that all visits happening on the same day (from 0:00 to 23:59) between a certain user and the system belong to one visit of that user. So if we say one user visits the website 5 times in this period, it means he or she visits it on 5 different days, no matter how many times per day. The following table shows the relation between the number of visits and the corresponding number of users.

Table 1: Visiting frequency of the users (columns: visits per user, number of users, percent).

From this table we can say that most users, c.a. 88%, visited the chatbot only once (on a single day) and did not come back; a few, around 11%, visited it between two and five times; and only extremely few, about 1%, visited it more frequently. From this we can only conclude that the website with this chatbot does not interest the users very much.
The duration of the chatting
The duration of the chatting is as difficult to define as a visit. For simplicity we define two terms: round and number of rounds. A round means an input of a user and the corresponding output from the robot to the user. The total rounds of a given IP (user) therefore cover all dialogs between that IP and the chatbot in this period. So we use the number of rounds of a given user to describe the duration of his or her chatting with the chatbot. As mentioned above, most users chat with the chatbot only on one day, so this definition is reasonable.
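A minimal sketch (not the actual analysis scripts used in this study) of how such per-IP round counts could be computed from a chat log; the tab-separated log format assumed below is hypothetical.

```python
from collections import Counter

def rounds_per_ip(log_path):
    """Count dialog rounds per client IP.

    Assumes a hypothetical log format with one line per round:
    <ip>\t<date>\t<time>\t<user input>\t<chatbot output>
    """
    counts = Counter()
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if fields and fields[0]:
                counts[fields[0]] += 1
    return counts

# Example: distribution of the number of rounds over users (IPs).
# counts = rounds_per_ip("chatbot.log")
# print(Counter(counts.values()))   # e.g. {1: 324, 2: 100, ...}
```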
The total number of rounds is 24706. The number of rounds per user varies from 1 to 2926. The maximum number of rounds, 2926, is a special case and can be excluded, while all the others are below 500 (this exclusion is kept in the following analysis). The following diagram shows the relation between the number of rounds and the number of IPs. For example, there are 324 IPs which have only one round.
Fig. 1: Distribution of the number of rounds (duration of dialogs) vs. the number of users (IPs).
The distribution of the number of rounds can also be illustrated with the following pie chart over the number of IPs (Fig. 2); the rounds are further grouped into stages in Table 2.

From the diagrams and tables above we can conclude that a large part of the users, c.a. 62%, chat with the robot very briefly; a smaller part, c.a. 30%, chat with it longer; and only a few, c.a. 8%, chat with it for a rather long time. We can also see that some users chat with the robot for a very long time (or very often).
Comments of the users on the chatbot
During the chatting some users made comments on the system. Whether these comments were spontaneous or intended is not clear. The evaluations can be divided into 2 classes: positive and negative. The following two tables show the positive and negative sentences 1 given by different users, as well as their frequency of appearance.

Table 3: Positive comments

Positive comments | Frequency (IPs)
You are (very, so) bright. | 9
You are (very, so) good. | 25
You are (very, so) |
From these tables we can see that a small part of the users, c.a. 17%, made positive comments on the system, while a somewhat larger share, 24%, evaluated it negatively. Some users particularly praised that the chatbot alone can chat with many users simultaneously and that it never gets tired, as the following comments show.
You are very clever, you can talk with all people at the same time.
Do not you feel tired chatting with so many people from day to night?
That would be the most prominent advantage of a Client/Server system, which the normal humans could never have. This is also demonstrated by this question, "with how many people can you chat at the same time?", or similar, which was asked 12 times.
It was also praised that the chatbot encouraged the user frequently, as the following comments show.
It is quite well that you often praise somebody. Thank you for your encouragement it makes me be self-confident.
Categories and topics of the chatting
What did the users and the chatbot talk about? After scrutinizing the log files we find that almost every aspect of everyday life is mentioned in the dialogs. We therefore classify the chatting content into 7 large categories: study, emotion, life, computer, free time, travel/world and job. Every category contains several detailed topics; for example, the category "study" contains the topics English, exams, the situation of universities, etc. Certainly this kind of classification is not the most scientific, but in this way we try to make clear what the human-computer dialogs are concerned with.

All the categories, the topics, their frequency (that is, how many different users talked about a topic at least once) and their percentage of the total number of users are shown in Tables 5, 6 and 7.

In Tables 6 and 7 the topics and categories of the dialogs are arranged in descending order of frequency. Less than a quarter of the users, or exactly 22.69%, talked about study. Half of them, i.e. 11.39% of the total users, talked about English. The reason lies in the fact that most users are students attracted to the chatbot by its advertisement that the "chatbot helps you learn foreign languages", and English is the first and therefore the most important foreign language in Chinese universities. A small part of the users, c.a. 1%, talked with the chatbot in German and about German, as some students study German as their major. From this aspect we could say that the fundamental function of this system, computer assisted instruction, should be much more strongly promoted.

More than one quarter of the users, c.a. 28%, talked about emotional problems including love, friends, sex, and so on. Nearly half of them, or c.a. 13% of the total users, mentioned love. This finding is remarkable, as the chatbot is mainly advertised as a learning partner for foreign languages. It can be explained by the particular user population: the users, who are mostly students and younger than 30 years old, would rather treat the chatbot as a friend than as a teacher, and are willing to tell the chatbot private emotional problems and experiences.

In addition the users also often chat about computers (c.a. 22%), spare time (c.a. 7%), sport (c.a. 7%), travel/world knowledge (c.a. 15%), and other everyday topics. 8.6% of the users talked about robot technology, since they realize that they are talking with a robot, or a computer program, after chatting with the system for a short while, as demonstrated by the analysis later.

All these perspectives should be considered in designing virtual chatbot and training systems, i.e., the conversational chatbot should not only work as a teacher or learning partner with rich specialized knowledge, but also as a dear friend who may share the joys and sorrows of the users. The users also expressed the wish that the chatbot could have several emotional animations like crying, smiling, etc., which would not be difficult to realize, as the speaking chatbot uses the Microsoft Agent technology. The following are some such expressions:
With whom do the users speak?
On the website it was not declared whether the system was operated by a human being or was a pure software program. Therefore some users were surprised by the automatic responses from the chatbot, especially at the beginning. But within a few rounds most users found that the responses from the chatbot are stupid and irrelevant to the topic and the context, and concluded that the chatting partner is not a person. In the following table we cite some such rounds of dialogs. 3

User: Never mind
Chatbot: Oh well thanks for trying to explain it to me anyway.
Analysis
Although some users praised the system for its multi-client ability, we are somewhat disappointed by the overall statistical results about the dialogs between the users and the chatbot: for example, most users chat with it very briefly, and the negative comments outnumber the positive ones. So our goal, to make the chatbot work as a chatting partner of users learning foreign languages, is not achieved. Why? There are certainly pedagogical, psychological and technical reasons. But we think that the elementary technique used in this chatbot, the keywords or pattern-matching mechanism, may be responsible for the failure. With this mechanism the system has some inevitable disadvantages. Hence we first briefly introduce this technique.

More than 20,000 categories are stored in the "memory" of the chatbot (actually the main memory of the computer). This is the knowledge base of the system. Each category contains an input-pattern and an output-template. If a user types something as input, the program looks in the memory for a matching category. A category matches an input text if one of three situations happens:
If a matching category is found in the memory, its output-template is retrieved and transformed into the output of the chatbot. If no matching category is found, the chatbot randomly selects one of the following expressions as the output. The number behind the expression indicates how often this expression appears in all the rounds of the dialogs in this experiment. In the <template> of this example there is a pair of tags <random> and </random>. This means the output to this input pattern is randomly selected from the candidate items labelled by the tag pairs <li> and </li>. This example is a polite formula. Such polite formulas or idioms are suitable for any speaking context and can easily be generalized into categories. These greeting formulas lack substantial meaning and can be given a response directly, without thinking. But unfortunately they occupy only a very small part of everyday dialogs, as we humans are not living in a world with only such greetings; dialogs should express some meaning and transfer some information.

Apart from greeting formulas there are other categories which are also independent of the dialog context, for example, dialogs about the personality of the chatbot. An example is: The output for this category is not deterministic, but random. Sometimes it is "Am I a little boy? Yes", other times it is "Am I a little boy? No". 2 In this case, for a given question, no matter what predicate the question has, the chatbot randomly selects one of the seven different answers shown above as output, some of which contradict each other. This method may be called a words-puzzle. It is therefore clear that a chatbot with such a mechanism does not understand the input text at all in our human sense, neither at the level of phonology, nor of syntax, nor of semantics, nor of pragmatics. This is the first weakness of this mechanism and the main reason why the users lose interest in chatting with it.

On the other hand, if one inputs "are you a come?" or "are you a two?", which are grammatically false English sentences expressing nothing meaningful, the above category still matches this input, so the chatbot still gives a stupid answer according to its output-template. From this the user can easily recognize that this chatbot's knowledge of English grammar is quite limited or non-existent, and even misleading. How can a system lacking grammatical knowledge of a natural language teach a human to learn this language? This is the second weakness of this mechanism, and so the system does not deserve to work as a teaching assistant program.
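To make the pattern-matching behaviour discussed here concrete, the sketch below implements a toy version of such keyword/wildcard matching. It is not the ALICEBOT/AIML engine itself, and the categories and fallback answers are made-up placeholders; it only illustrates why a pattern such as ARE YOU A * matches grammatical and ungrammatical inputs alike and answers with a randomly chosen template.

```python
import random
import re

# Toy knowledge base in the spirit of the categories shown in this paper
# (not the actual ALICEBOT content).
CATEGORIES = [
    ("ARE YOU A BOY", ["Yes I am a boy."]),
    ("ARE YOU A *", ["Am I a {star}? Yes.", "Am I a {star}? No.",
                     "Am I a {star}? Maybe.", "Am I a {star}? I don't know."]),
]

FALLBACKS = ["I lost my train of thought.", "Tell me more."]

def respond(user_input):
    # Normalize the input: drop punctuation, upper-case it.
    text = re.sub(r"[^\w\s]", "", user_input).upper().strip()
    for pattern, templates in CATEGORIES:
        # The AIML-style wildcard * becomes a regex capture group.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        m = re.match(regex, text)
        if m:
            star = m.group(1).lower() if m.groups() else ""
            return random.choice(templates).format(star=star)
    return random.choice(FALLBACKS)

# The same category fires for meaningful and meaningless inputs alike:
print(respond("Are you a boy?"))         # Yes I am a boy.
print(respond("Are you a little boy?"))  # Am I a little boy? Yes / No / ...
print(respond("Are you a come?"))        # Am I a come? ...  (no grammar check)
```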
We may create another category to express this idea: But the user can also give the input "Are you a stupid boy?", which matches this category although the output-template "Yes I am a boy" does not fully answer the question. Or the input "Are you a robot who doesn't like a boy", which also matches the category although the output-template is obviously not relevant to the question.

So we can always find exceptions for a given category. In order to make the categories more exact, one must write more and more categories. The best way to avoid this problem would be to put all possible expressions in human dialogs, like "Are you a little boy?", as input-patterns, and their corresponding responses as output-templates, into the categories. But a normal person can produce unlimited unexpected sentences using a limited known vocabulary, as the famous linguist Noam Chomsky pointed out many years ago (Chomsky 1957), and the memory of the computer is limited. Who can collect all the possible expressions in human dialogs, which are in fact uncountable, and put them all in the limited computer memory? This is the third weakness of the pattern-matching mechanism.

It is not possible that a program which cannot understand the meaning of a sentence syntactically and semantically could understand the objective world model and the relations between the objects in the world that are hidden in the brains of the speakers while they talk with each other, let alone the complicated subjective feelings of the human beings which are interwoven in the dialogs at the same time. These are all basic conditions needed for successful communication in natural language. Without such necessary fundamental abilities a computer program cannot communicate with a human in natural language, let alone help the human learn the natural language, even though it can "fool the judges (in the Loebner-Prize Contest) successfully with their patently low technology" and "people are easily fooled, and are especially easily fooled into reading structure into chaos, reading meaning into nonsense" (Shieber 1994, p. 6), and even though it is easy to use and extend, as its designer declared. The difficulty and complexity of research on human natural language processing is far beyond the technique of keyword matching or the solving of words-puzzles.
Postscript
The ALICEBOT project used in this experiment and its technique analyzed in this paper correspond to the version before May 2002. Since then the project and its mechanism may have changed. So the experimental results and their analysis are based only on the version of the ALICEBOT project before May 2002.
Fig. 2: Pie chart of the number of users vs. duration of dialogs.

We can divide the number of rounds (the duration of the dialogs) into 5 stages, as the following table shows.
Table 2: The relation between the duration of dialogs and the number of users

Duration of the dialogs | Range of the number of rounds | Number of users | Percent
Very short | [1, 10] | 783 | 62.34%
Long | (10, 50] | 378 | 30.10%
Longer | (50, 100] | 60 | 4.78%
Very long | (100, 2926] | 35 | 2.79%
Sum | | 1256 | 100.00%
Table 4: Negative comments

Negative comments | Frequency (IPs)
You are (so, very) foolish. / You are (a) foolish robot. | 61
You are (so, very, too) stupid. | 164
You are dumb. | 2
You are (so, too, very) simple. | 22
You are not (so, very) clever. | 6
You are not (so, very) good. | 2
You are not (so, very) interesting. | 4
Go to die! | 3
(You are) Pig! | 40
Sum | 304
Percent | 304/1256 = 24.20%
Table 5: Categories and topics of the dialogs (recoverable rows): Emotion, Sum 353 (28.11%); Life: Food 17 (1.35%), Drink 2 (0.16%), Eat 25 (1.99%), Sum 44 (3.50%); Job: Job 48 (3.82%).

Table 6: Dialog topics ordered by frequency.

Table 7: Categories ordered by frequency

Categories | Frequency | Percent
Emotion | 353 | 28.11%
Study | 285 | 22.69%
Travel, world | 193 | 15.37%
Sport | 189 | 15.05%
Computer | 173 | 13.77%
Job | 48 | 3.82%
Life | 44 | 3.50%
Table 8: Dialog rounds related with the essence of the chatbot (user input and chatbot output, reproduced verbatim from the logs)

Input | Output
Can i know exactly that you are a real man or not please | Your polite style is very nice. I understand. "your polite style is very nice ok let s do it next question" you are a real man or not? I get your polite style is very nice ok let s do it next question.
Yes i did | What kind is it?
Watching tv outing with friends sports as well | A deeper algorithm is needed to respond to that correctly. Are you happy?
Do you know james | James is a name I hear very often.
And jessie | Tell me a story. Who is your favorite science fiction author?
The guy who made the story of dinosaur park | Is that a fact.
Nobody knows i believe it | That is a rather sweeping generalization about people.
East asia | What do you do in your spare time?
Did you get marriage | No I don't think I have any marriage. But I do have A great botmaster.
Are you joking it's impossible no intelligent computer could do this so far | I don't know whether or not I am joking it's impossible no intelligent computer could do this so far. I am a chatterbot developed by botmaster.
Oh i never i am not such a vainglorious man smile :-) | Yeah that's right. Do you know anyone who has ever I am not such a vainglorious man?
I am willing to know | Why are you willing to know.

Only extremely few users did not believe that they were confronted with a pure computer program, as the following table shows.

Table 9: Dialogs showing doubt that the chatbot is only a computer program

IP | Input | Output
1 | Haha robot is so cool sometimes but i believe that's from people but computer | I am so glad you find this amusing, I don't know this definition and am looking for it. Please wait a moment.
2 | It seems that you are a human being | Thanks and you are a delightful person too. No I am an artificial entity.
2 | Are you really AI | Yes I am an artificial intelligence. Really.
2 | I couldn't believe it what you speak is very like what human does | Do you mind if I tell other people: .
Table 10: Pattern matching and examples.
<category>
<pattern>ARE YOU A BOY</pattern>
<template>
Yes I am a boy.
</template>
</category>

According to this category this chatbot is defined as a "boy". But if a user asks the chatbot a very similar question, like "Are you a little boy?", the answer from the chatbot is quite different, such as "Am I a little boy? No!" This contradicting answer is generated by the following category, as there is no matching category for the input text "Are you a little boy".
<category>
<pattern>ARE YOU A *</pattern>
<template>
Am I a <star/>?
<random>
<li>Yes.</li>
<li>No.</li>
<li>Maybe</li>
<li>I don't know.</li>
<li>Sometimes.</li>
<li>Of course I am.</li>
<li>Only when it matters.</li>
</random>
</template>
</category>
The dialogs cited from the users and the chatbot program (in italics) are reproduced verbatim from the log file of the experiment; they may contain spelling and grammatical errors. The following citations (all in italics) have the same problem.
According to the mechanism in ALICEBOT the tag <star/> in the output-template will be replaced by the actual content represented by the symbol "*" in the input-pattern. In this example it is "a little boy".
Acknowledgement

Thanks should be given to Mrs. Jine Jia, Beijing, China, who made the advertisement in the BBSs for this experiment.
[Chomsky 1957] Noam Chomsky. 1957. Syntactic Structures. The Hague: Mouton.

[Loebner] Hugh Loebner. In Response to the article "Lessons from a Restricted Turing Test" by Stuart Shieber. http://www.loebner.net/Prizef/In-response.html

[Weizenbaum 1965] Joseph Weizenbaum. 1965. ELIZA--A Computer Program For the Study of Natural Language Communication Between Man and Machine. Communications of the ACM, Vol. 9, No. 1, pp. 36-45.

[Shieber 1994] Stuart M. Shieber. 1994. Lessons from a Restricted Turing Test. Communications of the ACM, Vol. 37, No. 6, pp. 70-78. Also available as cmp-lg/9404002 (http://xxx.lanl.gov/abs/cmp-lg/9404002).

[Tergan 2000] Sigmar-Olaf Tergan. 2000. Grundlagen der Evaluation: ein Überblick. In: Schenkel, P., Tergan, S.O., Lottmann, A. (Ed.), Qualitätsbeurteilung multimedialer Lern- und Informationssysteme, pp. 22-51. Nürnberg: BW Bildung und Wissen Verlag und Software GmbH.
| [] |
[
"VisualSparta: An Embarrassingly Simple Approach to Large-scale Text-to-Image Search with Weighted Bag-of-words",
"VisualSparta: An Embarrassingly Simple Approach to Large-scale Text-to-Image Search with Weighted Bag-of-words"
] | [
"Xiaopeng Lu xiaopen2@andrew.cmu.edu \nLanguage Technologies Institute Carnegie Mellon University\nSOCO Inc Pittsburgh\nUSA\n",
"Tiancheng Zhao tianchez@soco.ai \nLanguage Technologies Institute Carnegie Mellon University\nSOCO Inc Pittsburgh\nUSA\n",
"Kyusong Lee kyusongl@soco.ai \nLanguage Technologies Institute Carnegie Mellon University\nSOCO Inc Pittsburgh\nUSA\n"
] | [
"Language Technologies Institute Carnegie Mellon University\nSOCO Inc Pittsburgh\nUSA",
"Language Technologies Institute Carnegie Mellon University\nSOCO Inc Pittsburgh\nUSA",
"Language Technologies Institute Carnegie Mellon University\nSOCO Inc Pittsburgh\nUSA"
] | [
"Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing"
] | Text-to-image retrieval is an essential task in cross-modal information retrieval, i.e., retrieving relevant images from a large and unlabelled dataset given textual queries. In this paper, we propose VisualSparta, a novel (Visualtext Sparse Transformer Matching) model that shows significant improvement in terms of both accuracy and efficiency. VisualSparta is capable of outperforming previous stateof-the-art scalable methods in MSCOCO and Flickr30K. We also show that it achieves substantial retrieving speed advantages, i.e., for a 1 million image index, VisualSparta using CPU gets ∼391X speedup compared to CPU vector search and ∼5.4X speedup compared to vector search with GPU acceleration. Experiments show that this speed advantage even gets bigger for larger datasets because Visu-alSparta can be efficiently implemented as an inverted index. To the best of our knowledge, VisualSparta is the first transformer-based textto-image retrieval model that can achieve realtime searching for large-scale datasets, with significant accuracy improvement compared to previous state-of-the-art methods. | 10.18653/v1/2021.acl-long.389 | [
"https://aclanthology.org/2021.acl-long.389.pdf"
] | 235,165,693 | 2101.00265 | 389e6f4a380b28afa65dfce6da4f134aea25d74e |
VisualSparta: An Embarrassingly Simple Approach to Large-scale Text-to-Image Search with Weighted Bag-of-words
August 1-6, 2021
Xiaopeng Lu xiaopen2@andrew.cmu.edu
Language Technologies Institute Carnegie Mellon University
SOCO Inc Pittsburgh
USA
Tiancheng Zhao tianchez@soco.ai
Language Technologies Institute Carnegie Mellon University
SOCO Inc Pittsburgh
USA
Kyusong Lee kyusongl@soco.ai
Language Technologies Institute Carnegie Mellon University
SOCO Inc Pittsburgh
USA
VisualSparta: An Embarrassingly Simple Approach to Large-scale Text-to-Image Search with Weighted Bag-of-words
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingAugust 1-6, 20215020
Text-to-image retrieval is an essential task in cross-modal information retrieval, i.e., retrieving relevant images from a large and unlabelled dataset given textual queries. In this paper, we propose VisualSparta, a novel (Visualtext Sparse Transformer Matching) model that shows significant improvement in terms of both accuracy and efficiency. VisualSparta is capable of outperforming previous stateof-the-art scalable methods in MSCOCO and Flickr30K. We also show that it achieves substantial retrieving speed advantages, i.e., for a 1 million image index, VisualSparta using CPU gets ∼391X speedup compared to CPU vector search and ∼5.4X speedup compared to vector search with GPU acceleration. Experiments show that this speed advantage even gets bigger for larger datasets because Visu-alSparta can be efficiently implemented as an inverted index. To the best of our knowledge, VisualSparta is the first transformer-based textto-image retrieval model that can achieve realtime searching for large-scale datasets, with significant accuracy improvement compared to previous state-of-the-art methods.
Introduction
Text-to-image retrieval is the task of retrieving a list of relevant images from a corpus given text queries. This task is challenging because in order to find the most relevant images given text query, the model needs to not only have good representations for both textual and visual modalities, but also capture the fine-grained interaction between them.
Existing text-to-image retrieval models can be broadly divided into two categories: query-agnostic and query-dependent models. The dual-encoder architecture is a common query-agnostic model, which uses two encoders to encode the query and images separately and then compute the relevancy via inner product (Faghri et al., 2017; Wang et al., 2019a). The transformer architecture is a well-known query-dependent model (Devlin et al., 2018). In this case, each pair of text and image is encoded by concatenating and passing into one single network, instead of being encoded by two separate encoders (Li et al., 2020b). This method borrows the knowledge from large pretrained transformer models and shows much better accuracy compared to dual-encoder methods (Li et al., 2020b).

* This work was partially done during an internship at SOCO

Figure 1: Inference Time vs. Model Accuracy. Each dot represents Recall@1 for different models on MSCOCO 1K split. By setting top n-terms to 500, our model significantly outperforms the previous best query-agnostic retrieval models, with ∼2.8X speedup. See section 5.1 for details.
Besides improving the accuracy, retrieval speed has also been a long-existing subject of study in the information retrieval (IR) community (Manning et al., 2008). Query-dependent models are prohibitively slow to apply to the entire image corpus because it needs to recompute for every dif-ferent query. On the other hand, query-agnostic model is able to scale by pre-computing an image data index. For dual-encoder systems, further speed improvement can be obtained via Approximate Nearest Neighbors (ANN) Search and GPU acceleration (Johnson et al., 2019).
In this work, we propose VisualSparta, a simple yet effective text-to-image retrieval model that outperforms all existing query-agnostic retrieval models in both accuracy and speed. By modeling fine-grained interaction between visual regions with query text tokens, our model is able to harness the power of large pre-trained visual-text models and scale to very large datasets with real-time response. To our best knowledge, this is the first model that integrates the power of transformer models with real-time searching, showing that large pre-trained models can be used in a way with significantly less amount of memory and computing time. Lastly, our method is embarrassingly simple because its image representation is essentially a weighted bag-of-words, and can be indexed in a standard Inverted Index for fast retrieval. Comparing to other sophisticated models with distributed vector representations, our method does not depend on ANN or GPU acceleration to scale up to very large datasets.
Contributions of this paper can be concluded as the following: (1) A novel retrieval model that achieves new state-of-the-art results on two benchmark datasets, i.e., MSCOCO and Flickr 30K.
(2) Weighted bag-of-words is shown to be an effective representation for cross-modal retrieval that can be efficiently indexed in an Inverted Index for fast retrieval.
(3) Detailed analysis and ablation study that show advantages of the proposed method and interesting properties that shine light for future research directions.
Related Work
Large amounts of work have been done on learning a joint representation between texts and images (Karpathy and Fei-Fei, 2015;Huang et al., 2018;Wehrmann et al., 2019;Li et al., 2020b;. In this section, we revisit dual-encoder based retrieval model and transformer-based retrieval model.
Dual-encoder Matching Network
Most work on the text-to-image retrieval task chooses the dual-encoder network to encode information from the text and image modalities. In Karpathy and Fei-Fei (2015), the authors used a Bi-directional Recurrent Neural Network (BRNN) to encode the textual information and a Region Convolutional Neural Network (RCNN) to encode the image information, and the final similarity score is computed via the interaction of features from the two encoders. Later work proposed the stacked cross-attention network, where the text features are passed through two attention layers to learn interactions with the image regions. Wang et al. (2019a) encoded the location information as yet another feature and used both deep RCNN features (Ren et al., 2016) and the fine-grained location features for the Region of Interest (ROI) as the image representation. In , the authors utilized information from Wikipedia as an external corpus to construct a Graph Neural Network (GNN) to help model the relationships across objects.
Pre-trained Language Models (PLM)
Large pre-trained language models (PLM) have shown great success on multiple NLP tasks in recent years (Devlin et al., 2018). Since then, research has also been done on cross-modal transformer-based models, showing that the self-attention mechanism also helps jointly capture visual-text relationships (Qi et al., 2020; Li et al., 2020b). By first pretraining on large-scale visual-text datasets, these transformer-based models capture rich semantic information from both texts and images. Models are then finetuned for the text-to-image retrieval task and show improvements by a large margin. However, the problem with transformer-based models is that they are prohibitively slow in the retrieval context: the model needs to compute pair-wise similarity scores between all queries and answers, making it almost impossible to use in any real-world scenario. Our proposed method borrows the power of large pre-trained models while reducing the inference time by orders of magnitude.
PLM has shown promising results in Information Retrieval (IR), despite its slow speed due to the complex model structure. The IR community recently started working on empowering the classical full-text retrieval methods with contextualized information from PLMs (Dai and Callan, 2019;MacAvaney et al., 2020;Zhao et al., 2020). Dai and Callan (2019) proposed DeepCT, a model that learns to generate the query importance score from the contextualized representation of large transformer-based models. Zhao et al. (2020) proposed sparse transformer matching model (SPARTA), where the model learns termlevel interaction between query and text answers and generates weighted term representations for answers during index time. Our work is motivated by works in this direction and extends the scope to the cross-modal understanding and retrieval.
VisualSparta Retriever
In this section, we present VisualSparta retriever, a fragment-level transformer-based model for efficient text-image matching. The focus of our proposed model is two-fold:
• Recall performance: fine-grained relationship between queries and image regions are learned to enrich the cross-modal understanding.
• Speed performance: query embeddings are non-contextualized, which allows the model to put most of the computation offline.
Model Architecture
Query representation
As query processing is an online operation during retrieval, the efficiency of encoding query needs to be well considered. Previous methods pass the query sentence into a bi-RNN to give token representation provided surrounding tokens Wang et al., 2019a. Instead of encoding the query in a sequential manner, we drop the order information of the query and only use the pretrained token embeddings to represent each token. In other words, we do not encode the local contextual information for the query and purely rely on independent word embedding E tok of each token. Let a query be q = [w 1 , ..., w m ] after tokenization, we have:
w i = E tok (w i )(1)
where w i is the i-th token of the query. Therefore, a query is represented asŵ = {ŵ 1 , ...,ŵ m },ŵ i ∈ R d H . In this way, each token is represented independently and agnostic to its local context. This is essential for the efficient indexing and inference, as described next in section 3.3.
Visual Representation
Compared with query information which needs to be processed in real-time, answer processing can be rich and complex, as answer corpus can be indexed offline before the query comes. Therefore, we follow the recent works in Vision-Language Transformers (Li et al., , 2020b and use the contextualized representation for the answer corpus. Specifically, for an image, we represent it using information from three sources: regional visual features, regional location features, and label features with attributes, as shown in Figure 2.
Regional visual features and location features
Given an image v, we pass it through Faster- RCNN (Ren et al., 2016) to get n regional visual features v i and their corresponding location fea-
tures l i : v 1 , ..., v n = RCNN(v), v i ∈ R drcnn(2)
and the location features are the normalized top left and bottom right positions of the region proposed from Faster-RCNN, together with the region width and height:
l i = [l xmin , l xmax , l ymin , l ymax , l width , l height ]
(3) Therefore, we represent one region by the concatenation of two features: (5) where E image is the representation for a single image.
E i = [v i ; l i ](4)E image = [E 1 , ..., E n ], E i ∈ R drcnn+d loc
Label features with attributes Additional to the deep representations from the proposed image region, previous work by Li et al. (2020b) shows that the object label information is also useful as an additional representation for the image. We also encode the predicted objects and corresponding attributes obtained from Faster-RCNN model with pretrained word embeddings:
o i = E tok (o i ) + E pos (o i ) + E seg (o i ) (6) E label = [ô 1 , ...,ô k ],ô i ∈ R d H(7)
where k represents the number of tokens after the tokenization of attributes and object labels for n Figure 2: VisualSparta Model. It first computes contextualized image region representation and non-contextualized query token representation. Then it computes a matching score between every query token and image region that can be stored in an inverted index for efficient searching.
image regions. E tok , E pos , and E seg represent token embeddings, position embeddings, and segmentation embeddings respectively, similar to the embedding structure in Devlin et al. (2018).
Therefore, one image can be represented by the linear transformed image features concatenated with label features:
a = [(E image W + b); E label ](8)
where W ∈ R (drcnn+d loc )×d H and b ∈ R d H are the trainable linear combination weights and bias. The concatenated embeddings a are then passed into a Transformer encoder T image , and the final image feature is the hidden output of it:
H image = T image (a)(9)
where H image ∈ R (n+k)×d H is the final contextualized representation for one image.
Scoring Function
Given the visual and query representations, the matching score can now be computed between a query and an image. Different from other dualencoder based interaction model, we adopt the finegrained interaction model proposed by Zhao et al. (2020) to compute the relevance score by:
y i = max j∈[1,n+k] (ŵ T i H j ) (10) φ(y i ) = ReLU(y i + b) (11) f (q, v) = m i=1 log(φ(y i ) + 1)(12)
where Eq.10 captures the fragment-level interaction between every image region and every query word token; Eq.11 produces sparse embedding outputs via a combination of ReLU and trainable bias, and Eq.12 sums up the score and prevents an overly large score using log operation.
Retriever training
Following the training method presented in Zhao et al. (2020), we use cross entropy loss to train VisualSparta. Concretely, we maximize the objective in Eq. 13, which tries to decide between the ground truth image v + and irrelevant/random images V − for each text query q. The parameters to learn include both the query encoder E tok and the image transformer encoder T image . Parameters are optimized using Adam (Kingma and Ba, 2014).
J = f (q, v + ) − log k∈V − e f (q,k))(13)
In order to achieve efficient training, we use other image samples from the same batch as negative examples for each training data, an effective technique that is widely used in response selection (Zhang et al., 2018;Henderson et al., 2019). Preliminary experiments found that as long as the batch size is large enough (we choose to use batch size of 160), this simple approach performs equally well compared to other more sophisticated methods, for example, sample similar images that have nearby labels.
Efficient Indexing and Inference
VisualSparta model structure is suitable for realtime inference. As discussed in section 3.1.1, since query embeddings are non-contextualized, we are able to compute the relationship between each query term w i and every image v offline. Concretely, during offline indexing, for each image v, we first compute fragment-level interaction between its regions and every query term in the vocabulary, same as in Eq. 10. Then, we cache the computed ranking score:
CACHE(w, v) = Eq. 11(14)
During test time, given a query q = [w 1 , ..., w m ], the ranking score between q and an image v is:
f (q, v) = m i=1 log(CACHE(w i , v) + 1)(15)
As shown in Eq. 15, the final ranking score during inference time is an O(1) look-up operation followed by summation. Also, the query-time computation can be fit into an Inverted Index architecture (Manning et al., 2008), which enables us to use VisualSparta index with off-the-shelf search engines, for example, Elasticsearch (Gheorghe et al., 2015).
Experiments
Datasets
In this paper, we use MSCOCO (Lin et al., 2014) 1 and Flickr30K (Plummer et al., 2015) 2 datasets for the training and evaluation of text-to-image retrieval tasks. MSCOCO is a large-scale multitask dataset including object detection, semantic segmentation, and image captioning data. In this experiment, we follow the previous work and use the image captioning data split for text-to-image model training and evaluation. Following the experimental settings from Karpathy and Fei-Fei (2015), we split the data into 113,287 images for training, 5,000 images for validation, and 5,000 images for testing. Each image is paired with 5 different captions. The performance of 1,000 (1K) and 5,000 (5K) test splits are reported and compared with previous results.
Flickr30K (Plummer et al., 2015) is another publicly available image captioning dataset, which contains 31,783 images in total. Following the split from Karpathy and Fei-Fei (2015), 29,783 images are used for training, and 1,000 images are used for validation. Scores are reported based on results from 1,000 test images.
For speed experiments, in addition to MSCOCO 1K and 5K splits, we create 113K split and 1M split, two new data splits to test the performance in the large-scale retrieval setting. Since these splits are only used for speed experiments, we directly reuse the training data from the existing dataset without the concern of data leaking between training and testing phases. Specifically, the 113K split refers to the MSCOCO training set, which contains 113,287 images, ∼23 times larger than the MSCOCO 5K test set. The 1M split consists of one million images randomly sampled from the MSCOCO training set. Speed experiments are done on these four splits to give comprehensive comparisons under different sizes of image index.
Evaluation Metrics
Following previous works, we use recall rate as our accuracy evaluation metric. On both the MSCOCO and Flickr30K datasets, we report Recall@t, t = [1, 5, 10], and compare with previous works.
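For concreteness, Recall@t for a single query can be computed as in the short sketch below (our own illustration; function names and the toy data are invented, and the per-query indicators are averaged over the full query set):

```python
def recall_at_t(ranked_ids, gold_id, ts=(1, 5, 10)):
    """Per-query hit indicator: 1 if the gold image is in the top-t results."""
    return {t: int(gold_id in ranked_ids[:t]) for t in ts}

# Toy usage over two queries.
per_query = [
    recall_at_t(["img_3", "img_1", "img_9"], gold_id="img_1"),
    recall_at_t(["img_7", "img_2", "img_5"], gold_id="img_9"),
]
recall = {t: sum(q[t] for q in per_query) / len(per_query) for t in (1, 5, 10)}
print(recall)  # {1: 0.0, 5: 0.5, 10: 0.5}
```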
For speed performance evaluation, we choose queries per second and latency (ms) as the evaluation metrics to test how each model performs in terms of speed under different sizes of image index.
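A simple wall-clock harness for these two speed metrics might look like the following (a hedged sketch, not the benchmarking code used in the paper; `search_fn` stands in for whichever retrieval call is being timed):

```python
import time

def measure_speed(search_fn, queries):
    """Return (queries per second, mean latency in milliseconds)."""
    start = time.perf_counter()
    for q in queries:
        search_fn(q)          # the retrieval call under test
    elapsed = time.perf_counter() - start
    return len(queries) / elapsed, elapsed / len(queries) * 1000.0
```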
Implementation Details
All experiments are done using the PyTorch library. During training, one NVIDIA Titan X GPU is used. During speed performance evaluation, one NVIDIA Titan X GPU is used for models that need GPU acceleration, and one 10-core Intel 9820X CPU is used for models that need CPU acceleration. For the image encoder, we initialize the model weights from the Oscar-base model (Li et al., 2020b) with 12 layers, 768 hidden dimensions, and 110M parameters. For the query embedding, we initialize it from the Oscar-base token embedding. The Adam optimizer (Kingma and Ba, 2014) is used with the learning rate set to 5e-5. The number of training epochs is set to 20. The input sequence length is set to 120, with 70 for labels with attributes features and 50 for deep visual features. We search over batch sizes (96, 128, 160) using Recall@1 validation accuracy, and set the batch size to 160.
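Summarized as code, the optimiser and hyper-parameter choices above could be set up roughly as follows (a sketch under the stated settings; the `Linear` module is only a placeholder for the actual Oscar-base encoder):

```python
import torch

# Placeholder 768-dimensional module standing in for the Oscar-base encoder;
# only the hyper-parameter values below mirror the ones quoted in this section.
model = torch.nn.Linear(768, 768)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)

train_config = {
    "epochs": 20,
    "batch_size": 160,      # selected from {96, 128, 160} on Recall@1
    "max_seq_len": 120,     # 70 label/attribute tokens + 50 visual features
}
```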
Experimental Results
We compare both recall and speed performance with the current state-of-the-art retrieval models in text-to-image search. Query-dependent models refer to models in which image information cannot be encoded offline, because each image encoding is dependent on the query information. These models usually achieve promising performance in recall but suffer from prohibitively slow inference speed. Query-agnostic models refer to models in which image information can be encoded offline and is independent of the query information. In sections 4.4.1 and 4.4.2, we evaluate accuracy and speed performance respectively for both lines of methods.
Recall Performance
As shown in Table 1, the results reveal that our model is competitive compared with previous methods. Among query-agnostic methods, our model is significantly superior to the state-of-the-art results in all evaluation metrics over both the MSCOCO and Flickr30K datasets and outperforms previous methods by a large margin. Specifically, on the MSCOCO 1K test set, our model outperforms the previously best query-agnostic method (Wang et al., 2019a) by 7.1%, 1.6%, and 1.0% for Recall@1, 5, 10 respectively. On the Flickr30K dataset, VisualSparta also shows strong improvement compared with the previous best method: in Recall@1, 5, 10, our model gets 4.2%, 2.2%, and 0.4% improvement respectively. We also observe that VisualSparta reduces the gap by a large margin between query-agnostic and query-dependent methods. In the MSCOCO-1K split, the performance of VisualSparta is only 1.0%, 2.3%, and 1.0% lower than the Unicoder-VL method (Li et al., 2020a) for Recall@1, 5, 10 respectively. Compared to Oscar (Li et al., 2020b), the current state-of-the-art query-dependent model, our model is 7% lower in MSCOCO-1K Recall@1. This shows that there is still room for improvement in terms of accuracy for query-agnostic models. To show the efficiency of the VisualSparta model in both small-scale and large-scale settings, we create the 113K dataset and the 1M dataset in addition to the original 1K and 5K test splits, as discussed in section 4.2. Speed experiments are done using these four splits as testbeds.
Speed Performance
To make a fair comparison, we benchmark each method with its preferred hardware and software for speed acceleration. Specifically, for the CVSE model, both CPU and GPU inference times are recorded. For the CPU setting, the Maximum Inner Product Search (MIPS) is performed using their original code based on NumPy (Harris et al., 2020). For the GPU setting, we adopt the model and use FAISS (Johnson et al., 2019), an optimized MIPS library, to test the speed performance. For the Oscar model (Li et al., 2020b), since the query-dependent method cannot be formulated as a MIPS problem, we run the original model using GPU acceleration and record the speed. For VisualSparta, we use the top-1000 term scores setting for the experiment. Since VisualSparta can be fit into an inverted-index architecture, GPU acceleration is not required. For all experiments, we use 5000 queries from the MSCOCO-1K split as query input to test the speed performance.

Table 3: Effect of top-n term scores in terms of speed and accuracy, tested on the MSCOCO dataset; ↑ means higher is better, and ↓ means lower is better.
As we can see from Table 2, in all four data splits (1K, 5K, 113K, 1M), VisualSparta significantly outperforms both the best query-agnostic model (CVSE) and the best query-dependent model (Oscar (Li et al., 2020b)). Under the CPU comparison, the speed of VisualSparta is 2.5, 2.4, 51, and 391 times faster than that of the CVSE model in the 1K, 5K, 113K, and 1M splits respectively. This speed advantage also holds even when previous models are accelerated with GPUs. To apply the latest MIPS progress to the comparison, we adapt the CVSE model to use FAISS (Johnson et al., 2019) for better speed acceleration. Results in the table reveal that the speed of VisualSparta can also beat that of CVSE by 2.5X in the 1K setting, and this speed advantage increases to 5.4X when the index size increases to 1M.
Our model holds an absolute advantage when comparing speed to query-dependent models such as Oscar (Li et al., 2020b). Since the image encoding is dependent on the query information, no offline indexing can be done for a query-dependent model. As shown in Table 2, even with GPU acceleration, the Oscar model is prohibitively slow: in the 1K setting, Oscar is ∼1128 times slower than VisualSparta. This number increases to 391,000 when the index size increases to 1M.
Model Analysis
Speed-Accuracy Flexibility
As described in section 3.3, each image can be well represented by a list of weighted tokens independently. This feature makes VisualSparta flexible during indexing time: users can choose to index using the top-n term scores based on their memory constraints or speed requirements. Table 3 compares recall and speed on both the MSCOCO 1K and 5K splits under different choices of n. From the comparison between using all term scores and using the top-2000 term scores, we found that VisualSparta can get an ∼1.8X speedup with almost no performance drop. If higher speed is needed, n can always be set to a lower number at a sacrifice of accuracy, as shown in Table 3. Figure 1 visualizes the trade-off between model accuracy and inference speed. The x-axis represents the average inference time of a single query in milliseconds, and the y-axis denotes Recall@1 on the MSCOCO 1K test set. For VisualSparta, each dot represents the model performance under a certain top-n term score setting. For other methods, each dot represents their speed and accuracy performance. The curve reveals that with larger n, the recall becomes higher and the speed gets slower. From the comparison between VisualSparta and other methods, we observe that by setting the top-n term scores to 500, VisualSparta can already beat the accuracy performance of both PFAN (Wang et al., 2019a) and CVSE with an ∼2.8X speedup.
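The top-n truncation itself is a simple per-image operation, sketched below with invented names and toy data (not the authors' code): only the n highest-weighted vocabulary terms of an image are kept when the inverted index is built, trading index size and lookup cost against recall.

```python
def truncate_term_scores(term_scores, top_n):
    """Keep only the top-n highest-weighted vocabulary terms for one image.

    Indexing fewer terms shrinks the inverted index and speeds up lookups,
    at some cost in recall once top_n becomes small.
    """
    ranked = sorted(term_scores.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:top_n])

image_terms = {"giraffe": 4.2, "grass": 2.5, "sky": 0.3, "fence": 0.1}
print(truncate_term_scores(image_terms, top_n=2))  # {'giraffe': 4.2, 'grass': 2.5}
```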
Ablation Study on Image Encoder
As shown in Figure 2, the image encoder takes a concatenation of object label features with attributes and deep visual features as input. In this section, we do an ablation study and analyze the contributions of each part of the image features to the final score.
In Table 4, different components are removed from the image encoder for performance comparison. From the table, we observe that removing either the attributes features (row 1) or the label features with attributes (row 2) only hurts the performance by a small margin. However, when dropping visual features and only using label with attributes features for the image representation (row 3), the model performance drops by a large margin: the Recall@1 score drops from 68.7% to 49.1% (−19.6%).
From this ablation study, we can conclude that deep visual features make the largest contribution to the VisualSparta model, which shows that deep visual features are significantly more expressive than textual features, i.e., label with attributes features. More importantly, it shows that VisualSparta is capable of learning cross-modal knowledge, and the biggest gain indeed comes from learning to match query term embeddings with deep visual representations.
Cross-domain Generalization
Table 5:
Models                           R@1    R@5    R@10
VSE++ (Faghri et al., 2017)      28.4   55.4   66.6
LVSE (Engilberge et al., 2018)   34.9   62.4   73.5
SCAN                             38.4   65.0   74.4
CVSE                             38.9   67.3   76.1
VisualSparta (ours)              45.4   71.0   79.2

Table 5 shows the cross-domain performance for different models. All models are trained on MSCOCO and tested on Flickr30K. We can see from the table that VisualSparta consistently outperforms other models in this setting. This indicates that the performance of VisualSparta is consistent across different data distributions, and the performance gain compared to other models is also consistent when testing in this cross-dataset setting.
Qualitative Examples
We query VisualSparta on the MSCOCO 113K split and check the results. As shown in Figure 3, visual and label features together represent the max-attended features for the given query tokens. Interestingly, we observe that the VisualSparta model is capable of grounding adjectives and verbs to the relevant image regions. For example, "graz" grounds to the head of the giraffe in the first example. This further confirms the hypothesis that a weighted bag-of-words is a valid and rich representation for images.
Conclusion
In conclusion, this paper presents VisualSparta, an accurate and efficient text-to-image retrieval model that shows state-of-the-art scalable performance on both MSCOCO and Flickr30K. Its main novelty lies in the combination of a powerful pre-trained image encoder with fragment-level scoring. Detailed analysis also demonstrates that our approach has substantial scalability advantages compared to previous best methods when indexing large image datasets for real-time searching, making it suitable for real-world deployment.
Table 2: Model speed vs. index size. VisualSparta experiments are done with the top-n term scores set to 1000; detailed settings are reported in section 4.4.2.

Table 4: Ablation study using different features in the image answer encoding.

Figure 3: Example retrieved images with attended features for the given query terms; term scores are in parentheses.

Table 5: Cross-dataset performance; models are trained on the MSCOCO dataset and tested on the Flickr30K dataset.
1 https://cocodataset.org
2 http://bryanplummer.com/Flickr30kEntities
References

Zhuyun Dai and Jamie Callan. Context-aware sentence/passage term importance estimation for first stage retrieval. arXiv preprint arXiv:1910.10687, 2019.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Martin Engilberge, Louis Chevallier, Patrick Pérez, and Matthieu Cord. Finding beans in burgers: Deep semantic-visual embedding with localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3984-3993, 2018.
Fartash Faghri, David J. Fleet, Jamie Ryan Kiros, and Sanja Fidler. VSE++: Improving visual-semantic embeddings with hard negatives. arXiv preprint arXiv:1707.05612, 2017.
Radu Gheorghe, Matthew Lee Hinman, and Roy Russo. Elasticsearch in Action. Manning, 2015.
Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. Array programming with NumPy. Nature, 585(7825):357-362, September 2020.
Matthew Henderson, Iñigo Casanueva, Nikola Mrkšić, Pei-Hao Su, Tsung-Hsien Wen, and Ivan Vulić. ConveRT: Efficient and accurate conversational representations from transformers. arXiv preprint arXiv:1911.03688, 2019.
Yan Huang, Wei Wang, and Liang Wang. Instance-aware image and sentence matching with selective multimodal LSTM. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2310-2318, 2017.
Yan Huang, Qi Wu, Chunfeng Song, and Liang Wang. Learning semantic concepts and order for image and sentence matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6163-6171, 2018.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 2019.
Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128-3137, 2015.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. Stacked cross attention for image-text matching. In Proceedings of the European Conference on Computer Vision (ECCV), pages 201-216, 2018.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. VisualBERT: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019.
Gen Li, Nan Duan, Yuejian Fang, Ming Gong, Daxin Jiang, and Ming Zhou. Unicoder-VL: A universal encoder for vision and language by cross-modal pre-training. In AAAI, pages 11336-11344, 2020.
Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. Oscar: Object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, pages 121-137. Springer, 2020.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740-755. Springer, 2014.
Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 12-in-1: Multi-task vision and language representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10437-10446, 2020.
Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, and Ophir Frieder. Expansion via prediction of importance with contextualization. arXiv preprint arXiv:2004.14245, 2020.
Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. Introduction to Information Retrieval. Cambridge University Press, 2008.
Hyeonseob Nam, Jung-Woo Ha, and Jeonghee Kim. Dual attention networks for multimodal reasoning and matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 299-307, 2017.
Bryan A. Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k Entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE International Conference on Computer Vision, pages 2641-2649, 2015.
Di Qi, Lin Su, Jia Song, Edward Cui, Taroon Bharti, and Arun Sacheti. ImageBERT: Cross-modal pre-training with large-scale weak-supervised image-text data. arXiv preprint arXiv:2001.07966, 2020.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6):1137-1149, 2016.
Yaxiong Wang, Hao Yang, Xueming Qian, Lin Ma, Jing Lu, Biao Li, and Xin Fan. Position focused attention network for image-text matching. arXiv preprint arXiv:1907.09748, 2019.
Zihao Wang, Xihui Liu, Hongsheng Li, Lu Sheng, Junjie Yan, Xiaogang Wang, and Jing Shao. CAMP: Cross-modal adaptive message passing for text-image retrieval. In Proceedings of the IEEE International Conference on Computer Vision, pages 5764-5773, 2019.
Haoran Wang, Ying Zhang, Zhong Ji, Yanwei Pang, and Lin Ma. Consensus-aware visual-semantic embedding for image-text matching. In European Conference on Computer Vision, pages 18-34. Springer, 2020.
Jonatas Wehrmann, Douglas M. Souza, Mauricio A. Lopes, and Rodrigo C. Barros. Language-agnostic visual-semantic embeddings. In Proceedings of the IEEE International Conference on Computer Vision, pages 5804-5813, 2019.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, pages 5753-5763, 2019.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243, 2018.
Tiancheng Zhao, Xiaopeng Lu, and Kyusong Lee. SPARTA: Efficient open-domain question answering via sparse transformer matching retrieval. arXiv preprint arXiv:2009.13013, 2020.
| [] |
[
"Structural Tags, Annealing and Automatic Word Classi cation",
"Structural Tags, Annealing and Automatic Word Classi cation"
] | [
"J Mcmahon \nThe Queen's University of Belfast\n\n",
"F J Smith \nThe Queen's University of Belfast\n\n"
] | [
"The Queen's University of Belfast\n",
"The Queen's University of Belfast\n"
] | [] | This paper describes an automatic word classi cation system which uses a locally optimal annealing algorithm and average class mutual information. A new word-class representation, the structural tag is introduced and its advantages for use in statistical language modelling are presented. A summary of some results with the one million word lob corpus is given; the algorithm is also shown to discover the vowel-consonant distinction and displays an ability to cluster words syntactically in a Latin corpus. Finally, a comparison is made between the current classi cation system and several leading alternative systems, which shows that the current system performs tolerably well. | null | [
"https://arxiv.org/pdf/cmp-lg/9405029v1.pdf"
] | 1,979,469 | cmp-lg/9405029 | 19a67f545e60b053f493c07d75f64f702d4f82f9 |
Structural Tags, Annealing and Automatic Word Classi cation
May 28, 1994
J Mcmahon
The Queen's University of Belfast
F J Smith
The Queen's University of Belfast
Structural Tags, Annealing and Automatic Word Classi cation
May 28, 1994cmp-lg/9405029 30 May 94
This paper describes an automatic word classi cation system which uses a locally optimal annealing algorithm and average class mutual information. A new word-class representation, the structural tag is introduced and its advantages for use in statistical language modelling are presented. A summary of some results with the one million word lob corpus is given; the algorithm is also shown to discover the vowel-consonant distinction and displays an ability to cluster words syntactically in a Latin corpus. Finally, a comparison is made between the current classi cation system and several leading alternative systems, which shows that the current system performs tolerably well.
Introduction
This paper contains a description of some work on an automatic word classification system which uses a technique similar to annealing [1]. The automatic acquisition of word classes corresponds to the paradigmatic component [5] of the syntagmatic-paradigmatic bootstrapping problem [19]. The best of the recent classification algorithms come in various forms [12,8,17,2,6,3,23], but most share underlying similarities which can be expressed best in the language of information theory [24,14].
Over three hundred years ago, the Right Reverend John Wilkins presented his ideas about a universal character to the Royal Society [25]: this universal character was an artificial language where the structure of the words stood in a supposedly logical and universal relationship to objects in the world; this vision has been shared by many other language scholars, e.g. Bacon, Dalgarno, Lodwick, Leibniz, Comenius, Frege, Peano, Russell and Wittgenstein. Wilkins hoped that propositions, in his interlingua, would be "philosophically unfolded" and that pompous-sounding expressions should be summarily debunked; like Bacon before him and Ogden after him, he hankered after clear expression through the medium of a transparent language. He considered the redundancy of language as a design challenge rather than a necessary feature; he disapproved of equivocal words and synonyms and baulked at the ineffective design decisions of previous generations of language speakers. His system worked by dividing his experience of the world into classes and assigning (mostly) arbitrary consonants and vowels to the various classes and sub-classes. The word for a table might perhaps be "leda", where the first character represents a class of physical objects and the second represents the sub-classification of wooden objects, and so on. So the word for a desk might be the related word "ledu".
Strong arguments have been offered against the idea, from philosophical, linguistic and psychological perspectives (see [16] for a useful summarising discussion). Even his contemporary, Dalgarno, criticised the detail of Wilkins' classification system, saying that people who spoke a foreign language would not agree with his rather culture-bound taxonomy. Nowadays, the arbitrariness of the linguistic sign is a tenet of modern linguistics, and the implicit reference theory of meaning has taken a philosophical battering. These criticisms notwithstanding, the Wilkins approach remains popular with the Artificial Intelligence community: reading Wilkins' chapter on 'the predicament of Quantity', which includes sub-divisions 'Of Magnitude', 'Of Space' and 'Of Measure', is reminiscent of Hayes' naive physics manifesto [11]; another chapter 'treats of action, and its several genus's 1. Spiritual 2. Corporeal 3. Motion 4. Operation'; here one is reminded of the work of Schank [22].
The main data structure used in the present work is the structural tag, an operationally defined analogue of a word from Wilkins' universal character. This way of representing words is not designed in order to be spoken by humans, nor to directly facilitate natural language translation, but to serve as a space within which words can be automatically clustered.
The end product of the classification process which will be described in the next section is a set of words, each of which is represented by a structural tag: a 16-bit (more generally, an n-bit) number whose binary representation specifies the location of that word in a cluster space. The structural tag corresponds to an easily accessible summary of the distributional properties of each word. The multi-modal nature of the distributions of some words, that is, the traditional linguistic problem of ambiguity, is as much a problem with this system as it is with others [7,13], although theoretically the structural tag should handle ambiguity: for example, the clustering performance of the algorithm described in this paper exhibits a differentiation between some unambiguous nouns and some lexical items which show both verbal and noun distributions. With structural tags, classes can be conceived as schemata of the tag itself (using the standard genetic algorithm definition of schemata [10]). One advantage of thinking about the connection between words and classes in terms of bit patterns within structural tags, rather than as a distinct functional mapping between two distinct sets of objects, is that no extra space is needed to store this class information; also, much less processing is required to derive class information, once the full structural tag is known. These considerations are important if one is interested in building statistical language models which will be using class-based information. Another advantage of using structural tags is that many levels of classification can be used simultaneously in the prediction of the probabilities of word segments; given that the acquisition of class information is so cheap using structural tags, this becomes a practical possibility in actual language model systems.
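To make the bit-pattern view concrete, the sketch below (our own illustration, not part of the original system; names and the example tag are invented) shows how the class of a word at any depth of the hierarchy can be read off a 16-bit structural tag with a single shift, which is why no extra storage or processing is needed for class information:

```python
def tag_class(tag, depth, width=16):
    """Class of a structural tag at a given depth: its `depth` most
    significant bits.  Coarser classes are prefixes of finer ones, so every
    level of the hierarchy is recoverable from the tag itself."""
    return tag >> (width - depth)

word_tag = 0b1011_0010_1100_0001      # a hypothetical 16-bit structural tag
print(tag_class(word_tag, depth=1))   # 1        -> coarsest split
print(tag_class(word_tag, depth=4))   # 11 (0b1011) -> finer class
```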
Word Clustering Method
Initially, a set of words is chosen to be clustered; these are usually the most frequent words of a given corpus, so that the unigram and bigram statistics which contain these words are more statistically significant, that is, their distributions in a corpus are reliable indicators of their distributions in natural language.
Each word is assigned a unique and random structural tag. This corresponds to a random, high entropy classification. The quality of the classification is measured by the average class mutual information [4],
M(f) = \sum_{c_i, c_j} P(c_i, c_j) \log \frac{P(c_i, c_j)}{P(c_i)\, P(c_j)} \qquad (1)
where f represents some classification of these words. The classification algorithm works as follows (see figure 1): processing starts by concentrating on the first bit of every word; this corresponds to imagining that all words are classified as belonging to class 0 or class 1. This also corresponds to the most significant bit of the tag, and the coarsest possible grain of classification. Processing will not advance to the second bit of each word's tag representation until no word can be moved into its complementary class with a corresponding increase in average class mutual information. This is a locally optimal algorithm (of complexity O(n^3)); no globally optimal solutions exist at present. Describing the algorithm informally, words flit around between different regions of the structural tag space, with tighter and tighter constraints on their movement as the bit processing moves from most significant to least significant. This is a type of simulated annealing process, the reverse of the usual bottom-up merge-based clustering systems.
Figure 1: A classification, at depth 1, is evaluated. Local variations of it are also evaluated, by moving one word in turn into its complementary class. The best variation is chosen to be the new standard, whereupon the process is repeated until no variation is better than the standard classification.
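The following sketch is our own illustration of the two ingredients just described, not the original implementation: `average_class_mi` estimates M(f) of Eq. 1 from counts of adjacent class pairs, and `anneal_bit` performs one locally optimal pass at a given bit depth, where `score(tags, depth)` is assumed to be supplied by the caller (for instance by recomputing class bigram counts from the corpus and calling `average_class_mi`).

```python
import math
from collections import Counter

def average_class_mi(class_bigrams):
    """Average class mutual information, estimated from counts of adjacent
    class pairs (c_i, c_j) as in Eq. 1."""
    total = sum(class_bigrams.values())
    left, right = Counter(), Counter()
    for (ci, cj), n in class_bigrams.items():
        left[ci] += n
        right[cj] += n
    mi = 0.0
    for (ci, cj), n in class_bigrams.items():
        p = n / total
        mi += p * math.log(p / ((left[ci] / total) * (right[cj] / total)))
    return mi

def anneal_bit(tags, depth, width, score):
    """One locally optimal pass at a given bit depth: repeatedly move the
    single word whose flip to the complementary class most increases the
    score, until no move helps."""
    bit = 1 << (width - depth)
    improved = True
    while improved:
        improved = False
        base = score(tags, depth)
        best_word, best_gain = None, 0.0
        for word in tags:
            tags[word] ^= bit               # tentative move to the other class
            gain = score(tags, depth) - base
            tags[word] ^= bit               # undo the tentative move
            if gain > best_gain:
                best_word, best_gain = word, gain
        if best_word is not None:
            tags[best_word] ^= bit          # commit the single best move
            improved = True

# The full procedure would run anneal_bit for depth = 1, ..., width, most
# significant bit first, refining the classification from coarse to fine.
```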
Results
The million word lob corpus was detagged, formatted and used to gather the raw word/class unigram and bigram frequency information. Syntactic and some semantic clustering emerged from the process. This result is summarised in figures 2, 3 and 4; figure 7 shows the overall topology of the structural tag classification space.
The smaller vodis corpus in phonetic form was used to cluster phonemes in a similar way. This result is summarised in figure 5.
The assumption that this method is not specific to English is supported by the results obtained from a clustering of the complete works of Cicero, in Latin; these results are shown in figure 6.
The system compares tolerably well with some of the other word classification systems; Hughes [12] suggests an evaluation measure which estimates the degree of homogeneity of particular clusters within a classification. While not perfect [14], this is the only evaluation metric available at present. Figure 8 shows how the present system performs against two of the best word classifiers. It should be noted, however, that both of these comparison systems use contiguous and non-contiguous bigram information, that is, $\langle w_{x-2}, w_x \rangle$, $\langle w_{x-1}, w_x \rangle$, $\langle w_x, w_{x+1} \rangle$ and $\langle w_x, w_{x+2} \rangle$ bigrams. The system described in this paper only uses contiguous bigrams; some results in Hughes [12] suggest that the additional bigram information improves performance by approximately 3%.
A re-implementation of the merge-based approach described in Brown et al. [3] and comparison experiments, described in [14], identify the relative strengths and weaknesses of the two approaches; McMahon [14] also contains more results showing the strength of semantic clustering which can result from the most minimal definition of linguistic context possible, contiguous word bigrams; there is also a favourable comparison between the current system and an influential connectionist word clustering architecture described by Elman [6].
Conclusion
An annealing approach to automatic word classification, using average class mutual information as a metric, produces linguistically interesting results. The structural tag representation facilitates this clustering and has several advantages if the resulting classification is to be used in statistical language modelling.
Many improvements could be made to the clustering algorithm; some of these are described in [14]. Further work on integrating a structural tag classification system into language models [15,9] is currently being undertaken. No strong claims are made about the algorithm's psycholinguistic relevance, though we believe that the information processing paradigm upon which this research rests could be incorporated into either the traditional Chomskyan model of language acquisition [18] or its opposite [21].
Finch et al. and Hughes et al. both report similar clustering phenomena. The digital nature of the structural tag is not as serious a limitation as it initially appears to be [20].
Figure 2: Final distribution of the most frequent words from the lob corpus. Only the first five levels of classification are given here, but important syntactic relations are discovered.

Figure 3: Detail of relationship between words whose final tag value starts with the four bits 0000. Many of these words exhibit determiner-like distributional behaviour.

Figure 4: Detail, from level 5 to level 9, of many noun-like words. Clear semantic differences are registered.

Figure 5: Automatic phoneme clustering which differentiates between vowels and consonants.

Figure 6: Classification of the most frequent words in a formatted version of the complete works of Cicero, in Latin; a group of prepositions is highlighted to show that the clustering system can find structure in languages other than English.

Figure 7: Approximate topology of the tree generated from the lob corpus and the average class mutual information maximiser, using structural tags.

Figure 8: Graph showing the performance of the annealing classification system compared to two of the best of the current systems, those of Hughes and Atwell and Finch and Chater. Performance is measured by the Hughes-Atwell cluster evaluation system.
References

[1] R. Beale and T. Jackson. Neural Computing: An Introduction. Adam Hilger, 1990.
[2] Eric Brill, David Magerman, Mitchell Marcus, and Beatrice Santorini. Deducing linguistic structure from the statistics of large corpora. In Proceedings of the DARPA Speech and Natural Language Workshop, 1990.
[3] Peter F. Brown, Vincent Della Pietra, Peter deSouza, Jennifer C. Lai, and Robert C. Mercer. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479, 1992.
[4] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. John Wiley and Sons, 1991.
[5] Ferdinand de Saussure. Course in General Linguistics. Duckworth, 1983.
[6] Jeffrey L. Elman. Finding structure in time. Cognitive Science, 14:179-211, 1990.
[7] Steven Finch and Nick Chater. Learning syntactic categories: A statistical approach. In M. Oaksford and G. D. A. Brown, editors, Neurodynamics and Psychology, chapter 12. Academic Press, 1994.
[8] Steven Paul Finch. Finding Structure in Language. PhD thesis, Centre for Cognitive Science, University of Edinburgh, 1993.
[9] Frederick Jelinek, Robert L. Mercer, and Salim Roukos. Principles of lexical language modelling for speech recognition. In S. Furui and M. M. Sondhi, editors, Advances in Speech Signal Processing. Marcel Dekker, Inc., 1992.
[10] D. E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison Wesley, 1989.
[11] P. Hayes. The naive physics manifesto. In M. Boden, editor, The Philosophy of Artificial Intelligence. Oxford University Press, 1990.
[12] John Hughes. Automatically Acquiring a Classification of Words. PhD thesis, School of Computer Studies, University of Leeds, 1994.
[13] John Hughes and Eric Atwell. The automated evaluation of inferred word classifications. In Eleventh European Conference on Artificial Intelligence, 1994.
[14] John McMahon. Statistical language processing based on unsupervised word classification. PhD thesis.
[15] Peter O'Boyle. A Study of an N-gram Language Model for Speech Recognition. PhD thesis, Department of Computer Science, Queen's University, Belfast, 1993.
[16] Sean O'Nuallain. Logical atomism, operational knowledge and the possibility of a knowledge-based NLP system. In The Cognitive Science of Natural Language Processing, 1992.
[17] Fernando Pereira and Naftali Tishby. Distributed similarity, phase transitions and hierarchical clustering. In Probabilistic Approaches to Natural Language. American Association for Artificial Intelligence, AAAI Press, 1992. Technical report FS-92-05.
[18] Steven Pinker. The Language Instinct: The New Science of Language and Mind. Allen Lane, Penguin Press, 1994.
[19] David Powers and Walter Daelemans. SHOE: The extraction of hierarchical structure for machine learning of natural language (project summary). In Background and Experiments in Machine Learning of Natural Language, pages 125-161, 1992.
[20] R. Rucker. Mind Tools: The Mathematics of Information. Penguin, 1988.
[21] Geoffrey Sampson. Language acquisition: Growth or learning? The Philosophical Review, XVIII(3):203-240, 1989.
[22] R. Schank, editor. Conceptual Information Processing. North Holland, Amsterdam, 1975.
[23] Hinrich Schütze. Part-of-speech induction from scratch. In Proceedings of the Association for Computational Linguistics 31, pages 251-258, 1993.
[24] C. E. Shannon. Prediction and entropy of printed English. Bell System Technical Journal, 1951.
[25] John Wilkins. An Abstract of Dr. Wilkins' Essay Towards a Real Character and a Philosophical Language. Frank Cass and Company Limited, second edition, 1802. Original Essay published in 1668.
| [] |
[
"Apply Chinese Radicals Into Neural Machine Translation: Deeper Than Character Level",
"Apply Chinese Radicals Into Neural Machine Translation: Deeper Than Character Level"
] | [
"Shaohui Kuang shaohuikuang@foxmail.com \nNLP Lab\nSoochow University\nSuzhouChina\n",
"Lifeng Han lifeng.han3@mail.dcu.ie \nSchool of Computing\nDublin City University\nDublinIreland\n"
] | [
"NLP Lab\nSoochow University\nSuzhouChina",
"School of Computing\nDublin City University\nDublinIreland"
] | [] | In neural machine translation (NMT), researchers face the challenge of un-seen (or out-of-vocabulary OOV) words translation. To solve this, some researchers propose the splitting of western languages such as English and German into sub-words or compounds. In this paper, we try to address this OOV issue and improve the NMT adequacy with a harder language Chinese whose characters are even more sophisticated in composition. We integrate the Chinese radicals into the NMT model with different settings to address the unseen words challenge in Chinese to English translation. On the other hand, this also can be considered as semantic part of the MT system since the Chinese radicals usually carry the essential meaning of the words they are constructed in. Meaningful radicals and new characters can be integrated into the NMT systems with our models. We use an attention-based NMT system as a strong baseline system. The experiments on standard Chinese-to-English NIST translation shared task data 2006 and 2008 show that our designed models outperform the baseline model in a wide range of state-of-the-art evaluation metrics including LEPOR, BEER, and CharacTER, in addition to the traditional BLEU and NIST scores, especially on the adequacylevel translation. We also have some interesting findings from the results of our various experiment settings about the performance of words and characters in Chinese NMT, which is different with other languages. For instance, the fully character level NMT may perform very well or the state of the art in some other languages as researchers demonstrated recently, however, in the Chinese NMT model, word boundary knowledge is important for the model learning. 3 | null | [
"https://arxiv.org/pdf/1805.01565v2.pdf"
] | 13,679,709 | 1805.01565 | eb7e0aa7026c5487d0572bf022b86f19da2706c6 |
Apply Chinese Radicals Into Neural Machine Translation: Deeper Than Character Level
Shaohui Kuang shaohuikuang@foxmail.com
NLP Lab
Soochow University
SuzhouChina
Lifeng Han lifeng.han3@mail.dcu.ie
School of Computing
Dublin City University
DublinIreland
Apply Chinese Radicals Into Neural Machine Translation: Deeper Than Character Level
Machine Translation · Chinese-English Translation · Chinese Radi- cals · Neural Networks · Translation Evaluation
In neural machine translation (NMT), researchers face the challenge of un-seen (or out-of-vocabulary OOV) words translation. To solve this, some researchers propose the splitting of western languages such as English and German into sub-words or compounds. In this paper, we try to address this OOV issue and improve the NMT adequacy with a harder language Chinese whose characters are even more sophisticated in composition. We integrate the Chinese radicals into the NMT model with different settings to address the unseen words challenge in Chinese to English translation. On the other hand, this also can be considered as semantic part of the MT system since the Chinese radicals usually carry the essential meaning of the words they are constructed in. Meaningful radicals and new characters can be integrated into the NMT systems with our models. We use an attention-based NMT system as a strong baseline system. The experiments on standard Chinese-to-English NIST translation shared task data 2006 and 2008 show that our designed models outperform the baseline model in a wide range of state-of-the-art evaluation metrics including LEPOR, BEER, and CharacTER, in addition to the traditional BLEU and NIST scores, especially on the adequacylevel translation. We also have some interesting findings from the results of our various experiment settings about the performance of words and characters in Chinese NMT, which is different with other languages. For instance, the fully character level NMT may perform very well or the state of the art in some other languages as researchers demonstrated recently, however, in the Chinese NMT model, word boundary knowledge is important for the model learning. 3
Introduction
We first introduce briefly the development of machine translation, then come to the existing issues that we try to address. Machine Translation (MT) has a long history dating from the 1950s [61] as one topic of artificial intelligence (AI) or intelligent machines. It began with rule-based MT (RBMT) systems that apply human-defined syntactic and semantic rules of the source and target languages to the machine, and moved on to example-based MT (EBMT), statistical MT (SMT), hybrid MT (e.g. the combination of RBMT and SMT), and, in recent years, neural MT (NMT) models [47,11,32,2]. MT gained much more attention from researchers after the launch of the IBM mathematical models proposed in the 1990s [7]. Representative SMT works include the word alignment models [49], the introduction of Minimum Error Rate Training (MERT) [48], phrase-based SMT [34], hierarchical structure models [13], and large parallel data development, e.g. [29].
Meanwhile, many research groups developed their open source tools to advance MT technology, such as Moses featuring statistical phrase-based MT [31], Joshua featuring parsing-based translations [37], Phrasal incorporating arbitrary model features [12], CDEC favoring finite-state and context-free translation models [19], and NiuTrans featuring syntax-based models [62]; and some advanced information technology (IT) companies also built theirs, such as the machine translators by Google 4, Baidu 5, Yandex 6, and Microsoft Bing 7.
Thanks to the work on word-to-vector embeddings [44], NMT could be introduced in [28,14,2] by utilizing both deep learning (DL) and word representation (WR) approaches. An earlier NMT structure [45] did not work out, which may be due to the limitations in the computational power of machines and the amount of available corpora, though neural networks were also explored later as sub-components in SMT, e.g. to smooth or re-rank the system output candidates as language models [52,4]. One of the promotions of NMT research was the launch of the 1st NMT Workshop by Google 8, in addition to the traditional WMT workshops 9.
NMT models treat the MT task as an encoder-decoder workflow, which is quite different from the conventional SMT structure [15,30]. The encoder works on the source-language side, learning to map sentences into vector representations, while the decoder works on the target-language side, generating words from the target-side vectors. Recurrent neural network (RNN) models are usually used for both the encoder and decoder, though some researchers employ convolutional neural networks (CNN), as in [14,28]. The hidden layers in the neural nets are designed to learn and transfer the information [46].
Early NMT models had some drawbacks, e.g. the lack of alignment information between the source and target sides, and limited transparency. To address these, an attention mechanism was first introduced into the decoder by [2], letting the model attend selectively to parts of the source sentence, rather than always the whole sentence, while translating. This idea is similar to alignment functions in SMT and to what human translators usually do when they undertake a translation task. Earlier, attention mechanisms were applied in neural nets for image processing tasks [35,18]. Recently, attention-based models have appeared in most NMT projects, such as the investigation of global attention-based architectures [39] and target information [51] for pure-text NMT, and the exploration of multi-modal NMT [24]. To generalize the attention mechanism on the source-language side, coverage models were introduced into NMT by [58,43] to balance the weights over different parts of the sentence.
[Figure 1: the radical 木 (mù, wood) as an independent character, and three characters that contain it: 森 (forest), 樹 (tree), 橋 (bridge).]
Another drawback of NMT is that NMT systems usually produce more fluent output, yet the adequacy is sometimes lower than that of conventional SMT; e.g. some meaning from the source sentence is lost on the translation side when the sentence is long [57,58,33,46,14]. One possible reason for this phenomenon, apart from the opaque learning procedure of the neural nets, is the unseen-words problem. With this assumption, we try to address the unseen or out-of-vocabulary (OOV) words issue and improve the adequacy level by introducing Chinese radicals into NMT.
There are other advanced topics such as multimodal [20,25,8], multilingual [9,27] and syntactic [3,1,36] NMT, but they are outside the scope of this work.
For background on Chinese radicals, consider two examples of how radicals form the corresponding characters. Figure 1 shows three Chinese characters (forest, tree, bridge) that contain the same radical (wood); this radical can also be used independently as a character. Historically, Chinese bridges were usually built from wood, so these three characters plausibly carry a related meaning: they all involve something made of wood.
Figure 2 shows three Chinese characters (grass, medicine, tea) that contain the same radical (grass); this radical, however, cannot be used independently as a character. The radical originally meant grass in the development of the Chinese language. Historically, Chinese medicine was usually developed from natural materials such as herbs, and Chinese tea comes from leaves, which are also related to grass.
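To make the radical idea concrete, the toy Python sketch below maps a few characters to their semantic (head) radical and checks whether two characters share it. The table is hand-written purely for illustration and lists only the head radical of each example character; the paper relies on the HanziJS toolkit (introduced later) for actual character decomposition.

```python
# Toy character-to-radical lookup; only the semantic (head) radical of each
# example character is listed here, purely for illustration.
RADICALS = {
    "森": "木",  # forest   -> wood radical
    "樹": "木",  # tree     -> wood radical
    "橋": "木",  # bridge   -> wood radical
    "草": "⺾",  # grass    -> grass radical
    "藥": "⺾",  # medicine -> grass radical
    "茶": "⺾",  # tea      -> grass radical
}

def share_radical(char_a, char_b):
    """Return True if both characters have the same known head radical."""
    rad_a = RADICALS.get(char_a)
    return rad_a is not None and rad_a == RADICALS.get(char_b)

print(share_radical("森", "橋"))  # True: both contain 木 (wood)
print(share_radical("草", "茶"))  # True: both contain ⺾ (grass)
print(share_radical("森", "茶"))  # False: wood vs. grass radical
```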
To the best of the authors' knowledge at the time of submission, there was no published work on radical-level NMT for the Chinese language yet 10 . The remainder of the paper is organized as follows: Section 2 covers related work, Section 3 our model design, Section 4 the experiments, and Section 5 the conclusion and future work.
[Figure 2: the radical ⺾ (cǎo, grass), which cannot stand alone as a character, and three characters that contain it: 草 (grass), 藥 (medicine), 茶 (tea).]
Related Work
MT models have been developed using progressively smaller units, i.e. from the phrase level to the word level, sub-word level and character level [30,53,16]. For the Chinese language, however, the sub-character or radical level is also an interesting option, since Chinese radicals carry much of the essential meaning of the characters they appear in. Some radicals split from characters can themselves be independent characters, while others cannot stand alone as characters although they also carry meaning. The work in [60] empirically explored the performance of different segmentation granularities, i.e. word, subword, and character, for Chinese-English MT. It is therefore interesting to see how radicals, or their combination with traditional words and characters, perform in NMT systems.
There are published works investigating Chinese radical embeddings for other NLP tasks; for example, [54,38] explored the use of radicals for word segmentation and text categorization.
Some MT researchers have explored word-composition knowledge in their systems, especially for western languages. For instance, [42] developed a machine translation model for English-German and English-Finnish that considers synthesizing compound words. This kind of knowledge is similar to splitting a Chinese character into new characters. Recently, thanks to author Zhang, we were informed of the interesting related work applying radicals to Japanese-Chinese MT [64].
Model Design
This section introduces the baseline attention-based NMT model and our model.
Attention-based NMT
Typically, as mentioned before, neural machine translation (NMT) builds on an encoder-decoder framework [2,56] based on recurrent neural networks (RNN). In this paper, we take the NMT architecture proposed by [2]. In the NMT system, the encoder applies a bidirectional RNN to encode a source sentence x = (x_1, x_2, ..., x_{T_x}) and generates the hidden vectors h = (h_1, h_2, ..., h_{T_x}) over the source sentence, where T_x is the length of the source sentence. Formally, h_j = [\overrightarrow{h}_j ; \overleftarrow{h}_j], where the forward states are computed as

\overrightarrow{h}_j = f(\overrightarrow{h}_{j-1}, x_j)    (1)
where the function f is defined as a Gated Recurrent Unit (GRU) [17]. The decoder is also an RNN that predicts the next word y_t given the context vector c_t, the hidden state of the decoder s_t and the previously predicted word y_{t-1}; the prediction is computed by:
p(y_t | y_{<t}, x) = softmax(g(s_t, y_{t-1}, c_t))    (2)
where g is a non-linear function, and s_t is the state of the decoder RNN at time step t, which is calculated by:
s_t = f(s_{t-1}, y_{t-1}, c_t)    (3)
where c_t is the context representation vector of the source sentence. Usually, c_t is obtained from an attention model and calculated as follows:
c_t = \sum_{j=1}^{T_x} \alpha_{tj} h_j    (4)

\alpha_{tj} = \frac{\exp(e_{tj})}{\sum_{k=1}^{T_x} \exp(e_{tk})}    (5)

e_{tj} = v_a^T \tanh(s_{t-1}, h_j)    (6)
We also follow the implementation of attention-based NMT from the dl4mt tutorial 11 , which enhances the attention model by also feeding it the previous word y_{t-1}; therefore e_{tj} is calculated by:
e_{tj} = v_a^T \tanh(\tilde{s}_{t-1}, h_j)    (7)
Fig. 3: Architecture of NMT with multi-embedding.
where \tilde{s}_{t-1} = f(s_{t-1}, y_{t-1}) and f is a GRU function. The hidden state of the decoder is then updated as follows:
s_t = f(\tilde{s}_{t-1}, c_t)    (8)
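As a concrete illustration of the attention computation in Eqs. (4)-(6), the NumPy sketch below scores the previous decoder state against every encoder state, normalises the scores with a softmax, and returns the weighted sum of encoder states as the context vector. The projection matrices W_s, W_h and the vector v_a are assumptions of the standard additive (Bahdanau-style) scoring function, which Eq. (6) abbreviates; all shapes are illustrative.

```python
# Minimal NumPy sketch of additive attention (Eqs. 4-6); shapes are illustrative.
import numpy as np

def attention_context(s_prev, H, W_s, W_h, v_a):
    """s_prev: (d,) previous decoder state; H: (T_x, d) encoder hidden states."""
    scores = np.tanh(H @ W_h.T + s_prev @ W_s.T) @ v_a   # e_tj, one score per source position
    alphas = np.exp(scores - scores.max())
    alphas = alphas / alphas.sum()                        # Eq. (5): softmax over source positions
    c_t = alphas @ H                                      # Eq. (4): context vector
    return c_t, alphas

# Tiny example: 5 source positions, hidden size 8, attention size 6.
rng = np.random.default_rng(0)
c_t, alphas = attention_context(rng.normal(size=8), rng.normal(size=(5, 8)),
                                rng.normal(size=(6, 8)), rng.normal(size=(6, 8)),
                                rng.normal(size=6))
print(c_t.shape, alphas.sum())  # (8,) and the attention weights sum to 1
```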
In this paper, we use the attention-based NMT with the changes from dl4mt tutorial 12 as our baseline and call it RNNSearch* 13 .
Our model
Traditional NMT models usually use word-level or character-level information as the encoder input, which ignores some knowledge of the source sentence, especially for the Chinese language. Chinese words are usually composed of multiple characters, and characters can be further split into radicals. Chinese character construction is very complicated, varying from upper-lower structure and left-right structure to inside-outside structure and combinations of them. In this paper, we use the radical, character and word as multiple inputs to NMT, expecting the NMT model to learn more useful features from integrating these different levels of input. Figure 3 illustrates our proposed model. The input embedding x_j consists of three parts: the word embedding w_j, the character embedding z_j 14 and the radical embedding r_j, as follows:
x_j = [w_j ; z_j ; r_j]    (9)
where ';' is the concatenation operation. A word w_j can be split into characters z_j = (z_{j1}, z_{j2}, ..., z_{jm}) and further split into radicals r_j = (r_{j1}, r_{j2}, ..., r_{jn}). In our model, we use a simple addition operation to obtain the character representation and the radical representation of the word, i.e. z_j and r_j are computed as follows:
z_j = \sum_{k=1}^{m} z_{jk}    (10)

r_j = \sum_{k=1}^{n} r_{jk}    (11)
Each word can be decomposed into different numbers of characters and radicals, and the addition operation lets us generate a fixed-length representation. In principle, our model can handle different levels of input and their combinations. For Chinese character decomposition, e.g. generating the radicals, we use the HanziJS open-source toolkit 15 . For the target vocabulary [26], we choose a size of 30,000.
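The sketch below illustrates Eqs. (9)-(11): the character and radical embeddings of a word are summed and then concatenated with the word embedding. The lookup tables and indices are placeholders, and whether the concatenated vector is further projected before entering the encoder is not specified here.

```python
# NumPy sketch of the multi-embedding input x_j = [w_j ; z_j ; r_j]; indices are placeholders.
import numpy as np

def multi_embedding(word_id, char_ids, radical_ids, E_word, E_char, E_rad):
    w = E_word[word_id]                   # w_j: word embedding
    z = E_char[char_ids].sum(axis=0)      # z_j = sum_k z_jk, Eq. (10)
    r = E_rad[radical_ids].sum(axis=0)    # r_j = sum_k r_jk, Eq. (11)
    return np.concatenate([w, z, r])      # x_j, Eq. (9)

# With the paper's sizes (30k/2.5k/1k vocabularies, 620-dimensional embeddings),
# the concatenated input vector has 3 * 620 = 1860 dimensions.
E_word = np.random.randn(30000, 620)
E_char = np.random.randn(2500, 620)
E_rad = np.random.randn(1000, 620)
x_j = multi_embedding(42, [3, 17], [5, 9, 11], E_word, E_char, E_rad)
print(x_j.shape)  # (1860,)
```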
Experiments
In this section, we introduce our experiment settings and the evaluation of the designed models.
Experiment Settings
We used 1.25 million parallel Chinese-English sentences for training, which contain 80.9 million Chinese words and 86.4 million English words. The data is mainly from Linguistic Data Consortium (LDC) 16 parallel corpora, such as LDC2002E18, LDC2003E07, LDC2003E14, LDC2004T07, LDC2004T08, and LDC2005T06.
We tune the models with NIST06 17 as development data using the BLEU metric [50], and use the NIST08 Chinese-English parallel corpus as test data with four references.

14 We use the symbol 'z' to represent characters, instead of 'c', because 'c' is already used for the context vector.
15 github.com/nieldlr/Hanzi
16 www.ldc.upenn.edu
17 NIST: the National Institute of Standards and Technology, which organized yearly MT evaluation shared tasks and released data for researchers to compare their models.

For the baseline model RNNSearch*, in order to train the model effectively, we limit the maximum sentence length on both the source and target sides to 50. We also limit both the source and target vocabularies to the most frequent 30k words and replace rare words with a special token "UNK" in Chinese and English. The vocabularies cover approximately 97.7% and 99.3% of the two corpora, respectively. Both the encoder and decoder of RNNSearch* have 1000 hidden units; the encoder consists of a forward and a backward RNN with 1000 hidden units each. The word embedding dimension is set to 620. We apply a dropout [23] strategy on the output layer. We use stochastic gradient descent with mini-batches and Adadelta [63] to train the model; the Adadelta parameters ρ and ε are set to 0.95 and 10^-6. Once the RNNSearch* model is trained, we adopt a beam search to find possible translations with high probabilities, with the beam width of RNNSearch* set to 10. The model parameters are selected according to the maximum BLEU score on the development set. For our proposed model, all experimental settings are the same as for RNNSearch*, except for the word-embedding dimension and the sizes of the vocabularies: we set the word, character and radical embeddings to the same dimension, 620, and the vocabulary sizes of words, characters and radicals to 30k, 2.5k and 1k, respectively.
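For reference, the hyperparameters described above can be collected in a single configuration; the dictionary below simply gathers the values stated in the text and is not tied to any particular toolkit.

```python
# Training configuration collected from the text above (values only; toolkit-agnostic).
config = {
    "max_sentence_length": 50,
    "vocab_size": {"word": 30_000, "character": 2_500, "radical": 1_000},
    "embedding_dim": 620,                      # same for word, character and radical embeddings
    "encoder_hidden_units": 1000,              # bidirectional GRU encoder
    "decoder_hidden_units": 1000,
    "dropout": "output layer",
    "optimizer": {"name": "Adadelta", "rho": 0.95, "epsilon": 1e-6},
    "beam_width": 10,
    "unk_token": "UNK",
    "model_selection": "max BLEU on NIST06 dev set",
}
print(config["optimizer"])
```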
To integrate the character radicals into the NMT system, we designed several different settings, as shown in Table 1. Both the baseline and our settings use the attention-based NMT structure.
Evaluations
In this section, we introduce the evaluation metrics used for the designed models. Firstly, many works have pointed out the insufficiency of the BLEU metric: higher or lower BLEU scores do not necessarily reflect improvements or degradation in model quality; BLEU scores are not easy to interpret for many translation professionals; and BLEU did not correlate with human judgment better than later-developed metrics on some language pairs [10,?,?]. In light of such analyses, we try to validate our work in a deeper and broader evaluation setting, from more aspects. We use a wide range of state-of-the-art MT evaluation metrics developed in recent years to carry out a more comprehensive evaluation, including hLEPOR [22], CharacTER [59], and BEER [55], in addition to BLEU and NIST [50].
The metric hLEPOR is a tunable translation evaluation metric that yields higher correlation with human judgments by adding an n-gram position-difference penalty factor to the traditional F-measures. CharacTER is a character-level edit-distance rate metric. BEER uses permutation trees and character n-grams, integrating many features such as paraphrase and syntax. These metrics have shown top performance in recent years' WMT shared tasks 18 [41,40,21,6].
Both the CharacTER and BEER metrics jointly achieved the top correlation with human judgment for Chinese-to-English MT evaluation in the WMT-17 shared tasks [5], while the LEPOR metric series has been assessed by MT researchers as one of the most distinguished metric families that is not clearly outperformed by others, as stated in the metrics comparison work [21] on standard WMT data.
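To give a feel for the character-level metric, the snippet below computes a simple character edit-distance rate (Levenshtein edits per reference character). This captures only the core idea behind CharacTER; the actual metric additionally performs word shifts and uses a different normalisation, so the numbers it produces are not comparable to the tables below.

```python
# Simplified character-level edit-distance rate (not the full CharacTER metric).
def char_edit_rate(hypothesis, reference):
    h, r = list(hypothesis), list(reference)
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(h)][len(r)] / max(len(r), 1)       # edits per reference character

print(char_edit_rate("the cat sat on the mat", "the cat sits on the mat"))  # lower is better
```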
Evaluation on the Development Set. On the development set NIST06, we obtained the following evaluation scores.
The cumulative n-gram scores for the BLEU and NIST metrics are shown in Tables 2 and 3, with bold face highlighting the winner in each n-gram column. Researchers usually report 4-gram BLEU and 5-gram NIST scores, so we also follow this tradition here. From the scores, we can see that model setting one, i.e. W+C+R, beat the baseline model on all uni-gram to 4-gram BLEU scores and on all NIST scores up to 5-gram. Furthermore, by adding characters and/or radicals to the words, model settings two and three also outperformed the baseline. However, setting 4, which used only character and radical information, lost in both BLEU and NIST compared with the word-level baseline. This suggests that, for Chinese NMT, word segmentation knowledge is important and provides useful guidance for learning the translation model.
For the uni-gram BLEU score, our model one scores 2.1 points higher than the baseline model, which means that by combining W+C+R the model yields translations with a higher adequacy level, even though the fluency (4-gram) score shows little difference. This is exactly the aspect of neural models that we want to improve, as many researchers have complained about it.
The evaluation scores with the broader state-of-the-art metrics are shown in Table 4. Since CharacTER is an edit-distance-based metric, a lower score means a better translation result. From these broader metrics, we can see that our designed models also outperform the baseline system: our model setting one, i.e. the W+C+R model, wins on both the BEER and CharacTER scores, while our model two, i.e. W+C, wins on the hLEPOR score; setting four continues to be the worst performer, which is consistent with the BLEU and NIST metrics. Interestingly, the CharacTER scores of settings two and three are both worse than the baseline, which means that adding character or radical information separately makes the output translation require more editing effort; however, if we add both the character and radical information to the model, i.e. setting one, the editing effort becomes lower than the baseline.
Evaluation on Test Sets
The evaluation results on the NIST08 Chinese-to-English test data are presented in this section.
Firstly, we show the evaluation scores for the BLEU and NIST metrics, computed with four reference translations in a case-insensitive setting. Tables 5 and 6 show the cumulative n-gram scores of BLEU and NIST, with bold face marking the winner in each n-gram column. The results show that our model setting one won both the BLEU and NIST scores for every n-gram evaluation scheme.
Model setting three, i.e. the W+R model, also beat the baseline on the uni-gram and bi-gram BLEU scores, and obtained scores very close to the baseline on the NIST metric. Furthermore, model setting four, i.e. the C+R one, continues to rank worst, which again suggests that word segmentation information and word boundaries are indeed helpful for Chinese translation models and cannot be omitted.
It is worth mentioning that the detailed BLEU scores show our model one yielding a higher uni-gram BLEU score (+1.58), similar to the results on the development data, and only slightly higher performance on 4-gram (+0.25). This means that at the fluency level our translation is similar to the state-of-the-art baseline, while our model yields translations with much better adequacy, since uni-gram BLEU reflects the adequacy aspect rather than fluency. This verifies the value of our model for the original problem we set out to address.
The evaluation results on recently developed advanced metrics are shown in Table 7. The scores are also evaluated under the four-reference scheme; we compute the average score of each metric over the 4 references as the final evaluation score, and bold face marks the winner as before. From these broader evaluations, we can see that our model setting one won on both the hLEPOR and BEER metrics. Although the baseline model won on the CharacTER metric, the margin between the baseline score (.9846) and that of our model three, i.e. W+R (.9882), is quite small, around 0.0036. Setting four with C+R again performed the worst, confirming our previous findings.
Conclusion and Future Work
We presented the performance of multiple model settings that integrate Chinese characters and radicals into state-of-the-art attention-based neural machine translation systems, which can help other researchers look inside such models and gain general clues about how radicals work.
Our model shows that a pure character+radical representation is not sufficient or suitable for Chinese translation, in contrast to work on western languages such as [16]. Our results showed that word segmentation and word boundaries are helpful knowledge for Chinese translation systems. Even though our model settings won on both the traditional BLEU and NIST metrics, the more recently developed advanced metrics did show some differences and interesting phenomena, especially the character-level translation error rate metric CharacTER. This may encourage MT researchers to use state-of-the-art metrics to find useful insights about their models.
Although the combination of words, characters and radicals mostly yielded the best scores, the broader evaluations also showed that the model setting W+R, i.e. using both word and radical information, is generally better than the setting W+C, i.e. words plus characters without radicals, which verifies the value of exploring radicals for Chinese NMT. Our model one yielded translations with much better adequacy (by uni-gram BLEU score) than the baseline system, which also shows that this work matters for improving the adequacy aspect of neural models.
In future work, we will continue to optimize our models and use more test data to verify their performance. In this work, we aimed to explore the effectiveness of Chinese radicals, so we did not use BPE splitting on the English side; however, to push state-of-the-art Chinese-English translation further, our future extensions will apply such splitting on both the Chinese and English sides. We will also investigate the use of Chinese radicals in the MT evaluation area, since they carry language meaning.
Acknowledgement
The author Han thanks Ahmed Abdelkader for the kind help, and Niel de la Rouviere for the HanziJS toolkit. This work was supported by Soochow University of China and Dublin City University of Ireland.
Fig. 1: Radical as an independent character.
Fig. 2: Radical that cannot be an independent character.
Table 1: Model Settings

Settings    Description              Abbreviation
Baseline    Words                    W
Setting 1   Word+Character+Radical   W+C+R
Setting 2   Word+Character           W+C
Setting 3   Word+Radical             W+R
Setting 4   Character+Radical        C+R
Table 2: BLEU Scores on NIST06 Development Data

           1-gram  2-gram  3-gram  4-gram
Baseline   .7211   .5663   .4480   .3556
W+C+R      .7420   .5783   .4534   .3562
W+C        .7362   .5762   .4524   .3555
W+R        .7346   .5730   .4491   .3529
C+R        .7089   .5415   .4164   .3219
Table 3: NIST Scores on NIST06 Development Data

           1-gram  2-gram  3-gram  4-gram  5-gram
Baseline   5.8467  7.7916  8.3381  8.4796  8.5289
W+C+R      6.0047  7.9942  8.5473  8.6875  8.7346
W+C        5.9531  7.9438  8.5127  8.6526  8.6984
W+R        5.9372  7.9021  8.4573  8.5950  8.6432
C+R        5.6385  7.4379  7.9401  8.0662  8.1082
Table 4: Broader Metrics Scores on NIST06 Development Data (metrics on a single reference)

Models     hLEPOR  BEER   CharacTER
Baseline   .5890   .5112  .9225
W+C+R      .5972   .5167  .9169
W+C        .5988   .5164  .9779
W+R        .5942   .5146  .9568
C+R        .5779   .4998  1.336
Table 5: BLEU Scores on NIST08 Test Data

           1-gram  2-gram  3-gram  4-gram
Baseline   .6451   .4732   .3508   .2630
W+C+R      .6609   .4839   .3572   .2655
W+C        .6391   .4663   .3412   .2527
W+R        .6474   .4736   .3503   .2607
C+R        .6378   .4573   .3296   .2410
Table 6: NIST Scores on NIST08 Test Data
Table 7: Broader Metrics Scores on NIST08 Test Data (metrics evaluated on 4 references)

Models     hLEPOR  BEER   CharacTER
Baseline   .5519   .4748  0.9846
W+C+R      .5530   .4778  1.3514
W+C        .5444   .4712  1.1416
W+R        .5458   .4717  0.9882
C+R        .5353   .4634  1.1888
* Parallel authors, ranked in decreasing alphabetical order. Accepted by ESSLLI 2018. arXiv:1805.01565v2 [cs.CL] 8 May 2018
The author (Han) discussed the idea of applying radicals to Chinese MT while at ILLC, UvA, Amsterdam with Prof. Khalil Sima'an in 2014. Han drafted the experiment design in 2016 for an earlier WMT2017 submission in DCU, Dublin. We finished the experiments when Han was visiting Soochow Uni. in summer 2017 and first submitted the work to AMTA. After revision, our work was accepted by ESSLLI.
11 github.com/nyu-dl/dl4mt-tutorial/tree/master/session2
12 github.com/nyu-dl/dl4mt-tutorial
13 To distinguish it from RNNSearch as in the paper [2].
18 www.statmt.org/wmt17/metrics-task.html
Towards string-to-tree neural machine translation. R Aharoni, Y Goldberg, CoRR abs/1704.04743Aharoni, R., Goldberg, Y.: Towards string-to-tree neural machine translation. CoRR abs/1704.04743 (2017), http://arxiv.org/abs/1704.04743
Neural machine translation by jointly learning to align and translate. D Bahdanau, K Cho, Y Bengio, CoRR abs/1409.0473Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473 (2014), http://arxiv.org/abs/1409.0473
Graph convolutional encoders for syntax-aware neural machine translation. J Bastings, I Titov, W Aziz, D Marcheggiani, K Sima'an, Bastings, J., Titov, I., Aziz, W., Marcheggiani, D., Sima'an, K.: Graph con- volutional encoders for syntax-aware neural machine translation. arXiv preprint https://arxiv.org/abs/1704.04675 (2017)
A neural probabilistic language model. Y Bengio, R Ducharme, P Vincent, C Janvin, J. Mach. Learn. Res. 3Bengio, Y., Ducharme, R., Vincent, P., Janvin, C.: A neural probabilistic language model. J. Mach. Learn. Res. 3, 1137-1155 (Mar 2003), http://dl.acm.org/citation.cfm? id=944919.944966
Results of the WMT17 metrics shared task. O Bojar, Y Graham, A Kamran, Proceedings of the Second Conference on Machine Translation. the Second Conference on Machine TranslationCopenhagen, DenmarkAssociation for Computational Linguistics2Shared Tasks PapersBojar, O., Graham, Y., Kamran, A.: Results of the WMT17 metrics shared task. In: Proceed- ings of the Second Conference on Machine Translation, Volume 2: Shared Tasks Papers. Association for Computational Linguistics, Copenhagen, Denmark (September 2017)
Results of the wmt16 metrics shared task. O Bojar, Y Graham, A Kamran, M Stanojević, Proceedings of the First Conference on Machine Translation. the First Conference on Machine TranslationBerlin, GermanyAssociation for Computational LinguisticsBojar, O., Graham, Y., Kamran, A., Stanojević, M.: Results of the wmt16 metrics shared task. In: Proceedings of the First Conference on Machine Translation. pp. 199-231. Association for Computational Linguistics, Berlin, Germany (August 2016), http://www.aclweb. org/anthology/W/W16/W16-2302
The mathematics of statistical machine translation: Parameter estimation. P F Brown, V J D Pietra, S A D Pietra, R L Mercer, Comput. Linguist. 192Brown, P.F., Pietra, V.J.D., Pietra, S.A.D., Mercer, R.L.: The mathematics of statistical machine translation: Parameter estimation. Comput. Linguist. 19(2), 263-311 (Jun 1993), http://dl.acm.org/citation.cfm?id=972470.972474
Multimodal attention for neural machine translation. O Caglayan, L Barrault, F Bougares, CoRR abs/1609.03976Caglayan, O., Barrault, L., Bougares, F.: Multimodal attention for neural machine translation. CoRR abs/1609.03976 (2016), http://arxiv.org/abs/1609.03976
Multilingual multi-modal embeddings for natural language processing. I Calixto, Q Liu, N Campbell, CoRR abs/1702.01101Calixto, I., Liu, Q., Campbell, N.: Multilingual multi-modal embeddings for natural language processing. CoRR abs/1702.01101 (2017), http://arxiv.org/abs/1702.01101
Re-evaluating the role of bleu in machine translation research. C Callison-Burch, M Osborne, P Koehn, Proceedings of EACL. EACLCallison-Burch, C., Osborne, M., Koehn, P.: Re-evaluating the role of bleu in machine trans- lation research. In: Proceedings of EACL. vol. 2006, pp. 249-256 (2006)
Recent advances in example-based machine translation. M Carl, A Way, Carl, M., Way, A.: Recent advances in example-based machine translation (2003)
Phrasal: A toolkit for statistical machine translation with facilities for extraction and incorporation of arbitrary model features. D Cer, M Galley, D Jurafsky, C D Manning, Proceedings of the NAACL HLT 2010 Demonstration Session. the NAACL HLT 2010 Demonstration SessionStroudsburg, PA, USAAssociation for Computational LinguisticsHLT-DEMO '10Cer, D., Galley, M., Jurafsky, D., Manning, C.D.: Phrasal: A toolkit for statistical machine translation with facilities for extraction and incorporation of arbitrary model features. In: Proceedings of the NAACL HLT 2010 Demonstration Session. pp. 9-12. HLT-DEMO '10, Association for Computational Linguistics, Stroudsburg, PA, USA (2010), http://dl. acm.org/citation.cfm?id=1855450.1855453
A hierarchical phrase-based model for statistical machine translation. D Chiang, 10.3115/1219840.1219873Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. pp. 263-270. ACL '05. the 43rd Annual Meeting on Association for Computational Linguistics. pp. 263-270. ACL '05Stroudsburg, PA, USAAssociation for Computational LinguisticsChiang, D.: A hierarchical phrase-based model for statistical machine translation. In: Proceedings of the 43rd Annual Meeting on Association for Computational Linguis- tics. pp. 263-270. ACL '05, Association for Computational Linguistics, Stroudsburg, PA, USA (2005). https://doi.org/10.3115/1219840.1219873, https://doi.org/10. 3115/1219840.1219873
On the properties of neural machine translation: Encoder-decoder approaches. K Cho, B Van Merrienboer, D Bahdanau, Y Bengio, CoRR abs/1409.1259Cho, K., van Merrienboer, B., Bahdanau, D., Bengio, Y.: On the properties of neural machine translation: Encoder-decoder approaches. CoRR abs/1409.1259 (2014), http://arxiv. org/abs/1409.1259
Learning phrase representations using rnn encoder-decoder for statistical machine translation. K Cho, B Van Merrienboer, C Gulcehre, F Bougares, H Schwenk, Y Bengio, Conference on Empirical Methods in Natural Language Processing. Cho, K., van Merrienboer, B., Gulcehre, C., Bougares, F., Schwenk, H., Bengio, Y.: Learn- ing phrase representations using rnn encoder-decoder for statistical machine translation. In: Conference on Empirical Methods in Natural Language Processing (EMNLP 2014) (2014)
A character-level decoder without explicit segmentation for neural machine translation. J Chung, K Cho, Y Bengio, ACLChung, J., Cho, K., Bengio, Y.: A character-level decoder without explicit segmentation for neural machine translation. In: ACL (2016)
Empirical evaluation of gated recurrent neural networks on sequence modeling. J Chung, C Gulcehre, K Cho, Y Bengio, NIPS 2014 Deep Learning and Representation Learning Workshop. Chung, J., Gulcehre, C., Cho, K., Bengio, Y.: Empirical evaluation of gated recurrent neural networks on sequence modeling. Presented in NIPS 2014 Deep Learning and Representation Learning Workshop (2014)
Learning where to attend with deep architectures for image tracking. M Denil, L Bazzani, H Larochelle, N De Freitas, CoRR abs/1109.3737Denil, M., Bazzani, L., Larochelle, H., de Freitas, N.: Learning where to attend with deep architectures for image tracking. CoRR abs/1109.3737 (2011), http://arxiv.org/ abs/1109.3737
Cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. C Dyer, J Weese, H Setiawan, A Lopez, F Ture, V Eidelman, J Ganitkevitch, P Blunsom, P Resnik, Proceedings of the ACL 2010 System Demonstrations. pp. 7-12. ACLDemos '10. the ACL 2010 System Demonstrations. pp. 7-12. ACLDemos '10Stroudsburg, PA, USAAssociation for Computational LinguisticsDyer, C., Weese, J., Setiawan, H., Lopez, A., Ture, F., Eidelman, V., Ganitkevitch, J., Blun- som, P., Resnik, P.: Cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In: Proceedings of the ACL 2010 System Demonstra- tions. pp. 7-12. ACLDemos '10, Association for Computational Linguistics, Stroudsburg, PA, USA (2010), http://dl.acm.org/citation.cfm?id=1858933.1858935
Imagination improves multimodal translation. D Elliott, Á Kádár, CoRR abs/1705.04350Elliott, D., Kádár,Á.: Imagination improves multimodal translation. CoRR abs/1705.04350 (2017), http://arxiv.org/abs/1705.04350
Accurate evaluation of segment-level machine translation metrics. Y Graham, N Mathur, T Baldwin, Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies. Denver, Colorado. the 2015 Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies. Denver, ColoradoGraham, Y., Mathur, N., Baldwin, T.: Accurate evaluation of segment-level machine transla- tion metrics. In: Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies. Denver, Col- orado (2015)
Languageindependent model for machine translation evaluation with reinforced factors. A L F Han, D F Wong, L S Chao, L He, Y Lu, J Xing, X Zeng, Machine Translation Summit XIV. International Association for Machine TranslationHan, A.L.F., Wong, D.F., Chao, L.S., He, L., Lu, Y., Xing, J., Zeng, X.: Language- independent model for machine translation evaluation with reinforced factors. In: Machine Translation Summit XIV. pp. 215-222. International Association for Machine Translation (2013)
Improving neural networks by preventing co-adaptation of feature detectors. G E Hinton, N Srivastava, A Krizhevsky, I Sutskever, R R Salakhutdinov, arXiv:1207.0580arXiv preprintHinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.R.: Im- proving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 (2012)
Attention-based multimodal neural machine translation. P Y Huang, F Liu, S R Shiang, J Oh, C Dyer, 10.18653/v1/W16-2360Huang, P.Y., Liu, F., Shiang, S.R., Oh, J., Dyer, C.: Attention-based multimodal neural machine translation (2016). https://doi.org/10.18653/v1/W16-2360, http:// aclanthology.coli.uni-saarland.de/pdf/W/W16/W16-2360.pdf
Attention-based multimodal neural machine translation. P Y Huang, F Liu, S R Shiang, J Oh, C Dyer, WMTHuang, P.Y., Liu, F., Shiang, S.R., Oh, J., Dyer, C.: Attention-based multimodal neural ma- chine translation. In: WMT (2016)
On using very large target vocabulary for neural machine translation. S Jean, K Cho, R Memisevic, Y Bengio, ACL 2015. Jean, S., Cho, K., Memisevic, R., Bengio, Y.: On using very large target vocabulary for neural machine translation. In: ACL 2015 (2014)
Google's multilingual neural machine translation system: Enabling zero-shot translation. M Johnson, M Schuster, Q V Le, M Krikun, Y Wu, Z Chen, N Thorat, F B Viégas, M Wattenberg, G Corrado, M Hughes, J Dean, CoRR abs/1611.04558Johnson, M., Schuster, M., Le, Q.V., Krikun, M., Wu, Y., Chen, Z., Thorat, N., Viégas, F.B., Wattenberg, M., Corrado, G., Hughes, M., Dean, J.: Google's multilingual neural machine translation system: Enabling zero-shot translation. CoRR abs/1611.04558 (2016), http: //arxiv.org/abs/1611.04558
Recurrent continuous translation models. N Kalchbrenner, P Blunsom, Association for Computational LinguisticsSeattleKalchbrenner, N., Blunsom, P.: Recurrent continuous translation models. Association for Computational Linguistics, Seattle (October 2013)
Europarl: A parallel corpus for statistical machine translation. P Koehn, MT summit. 5Koehn, P.: Europarl: A parallel corpus for statistical machine translation. In: MT summit. vol. 5, pp. 79-86 (2005)
P Koehn, Statistical machine translation. Koehn, P.: Statistical machine translation (2010)
Moses: Open source toolkit for statistical machine translation. P Koehn, H Hoang, A Birch, C Callison-Burch, M Federico, N Bertoldi, B Cowan, W Shen, C Moran, R Zens, C Dyer, O Bojar, A Constantin, E Herbst, Proceedings of ACL. ACLKoehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., Herbst, E.: Moses: Open source toolkit for statistical machine translation. In: Proceedings of ACL (2007)
Statistical machine translation. P Koehn, K Knight, 6245Koehn, P., Knight, K.: Statistical machine translation (Nov 24 2009), uS Patent 7,624,005
Six challenges for neural machine translation. P Koehn, R Knowles, arXiv:1706.03872arXiv preprintKoehn, P., Knowles, R.: Six challenges for neural machine translation. arXiv preprint arXiv:1706.03872 (2017)
Statistical phrase-based translation. P Koehn, F J Och, D Marcu, Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology. the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology1Koehn, P., Och, F.J., Marcu, D.: Statistical phrase-based translation. In: Proceed- ings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology -Volume 1. pp. 48-54.
10.3115/1073445.1073462NAACL '03. Stroudsburg, PA, USAAssociation for Computational LinguisticsNAACL '03, Association for Computational Linguistics, Stroudsburg, PA, USA (2003). https://doi.org/10.3115/1073445.1073462, https://doi.org/10.3115/1073445. 1073462
Learning to combine foveal glimpses with a third-order boltzmann machine. H Larochelle, G E Hinton, J D Lafferty, C K I Williams, J Shawe-Taylor, Advances in Neural Information Processing Systems. Zemel, R.S., Culotta, A.Curran Associates, Inc23Larochelle, H., Hinton, G.E.: Learning to combine foveal glimpses with a third-order boltzmann machine. In: Lafferty, J.D., Williams, C.K.I., Shawe-Taylor, J., Zemel, R.S., Culotta, A. (eds.) Advances in Neural Information Processing Systems 23, pp. 1243-1251. Curran Associates, Inc. (2010), http://papers.nips.cc/paper/ 4089-learning-to-combine-foveal-glimpses-with-a-third-order-boltzmann-machine. pdf
Modeling Source Syntax for Neural Machine Translation. J Li, D Xiong, Z Tu, M Zhu, M Zhang, G Zhou, ArXiv e-printsLi, J., Xiong, D., Tu, Z., Zhu, M., Zhang, M., Zhou, G.: Modeling Source Syntax for Neural Machine Translation. ArXiv e-prints (May 2017)
Demonstration of joshua: An open source toolkit for parsing-based machine translation. Z Li, C Callison-Burch, C Dyer, J Ganitkevitch, S Khudanpur, L Schwartz, W N G Thornton, J Weese, O F Zaidan, ACLDemos '09, Association for Computational Linguistics. Stroudsburg, PA, USAProceedings of the ACL-IJCNLPLi, Z., Callison-Burch, C., Dyer, C., Ganitkevitch, J., Khudanpur, S., Schwartz, L., Thorn- ton, W.N.G., Weese, J., Zaidan, O.F.: Demonstration of joshua: An open source toolkit for parsing-based machine translation. In: Proceedings of the ACL-IJCNLP 2009 Soft- ware Demonstrations. pp. 25-28. ACLDemos '09, Association for Computational Lin- guistics, Stroudsburg, PA, USA (2009), http://dl.acm.org/citation.cfm?id= 1667872.1667879
Learning character-level compositionality with visual features. F Liu, H Lu, C Lo, G Neubig, CoRR abs/1704.04859Liu, F., Lu, H., Lo, C., Neubig, G.: Learning character-level compositionality with visual features. CoRR abs/1704.04859 (2017), http://arxiv.org/abs/1704.04859
Effective approaches to attention-based neural machine translation. M Luong, H Pham, C D Manning, CoRR abs/1508.04025Luong, M., Pham, H., Manning, C.D.: Effective approaches to attention-based neural machine translation. CoRR abs/1508.04025 (2015), http://arxiv.org/abs/1508. 04025
Results of the wmt14 metrics shared task. M Machacek, O Bojar, Proceedings of the Ninth Workshop on Statistical Machine Translation. the Ninth Workshop on Statistical Machine TranslationBaltimore, Maryland, USAAssociation for Computational LinguisticsMachacek, M., Bojar, O.: Results of the wmt14 metrics shared task. In: Proceedings of the Ninth Workshop on Statistical Machine Translation. pp. 293-301. Association for Computa- tional Linguistics, Baltimore, Maryland, USA (June 2014), http://www.aclweb.org/ anthology/W/W14/W14-3336
Results of the WMT13 metrics shared task. M Macháček, O Bojar, Proceedings of the Eighth Workshop on Statistical Machine Translation. the Eighth Workshop on Statistical Machine TranslationSofia, BulgariaAssociation for Computational LinguisticsMacháček, M., Bojar, O.: Results of the WMT13 metrics shared task. In: Proceedings of the Eighth Workshop on Statistical Machine Translation. pp. 45-51. Association for Computational Linguistics, Sofia, Bulgaria (August 2013), http://www.aclweb.org/ anthology/W13-2202
Synthesizing compound words for machine translation. A Matthews, E Schlinger, A Lavie, C Dyer, ACL. Matthews, A., Schlinger, E., Lavie, A., Dyer, C.: Synthesizing compound words for machine translation. In: ACL (1) (2016)
A coverage embedding model for neural machine translation. H Mi, B Sankaran, Z Wang, A Ittycheriah, CoRR abs/1605.03148Mi, H., Sankaran, B., Wang, Z., Ittycheriah, A.: A coverage embedding model for neural machine translation. CoRR abs/1605.03148 (2016), http://arxiv.org/abs/1605. 03148
Efficient estimation of word representations in vector space. T Mikolov, K Chen, G Corrado, J Dean, CoRR abs/1301.3781Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. CoRR abs/1301.3781 (2013), http://arxiv.org/abs/1301.3781
R P Neco, M L Forcada, 10.1109/ICNN.1997.614693Asynchronous translations with recurrent neural nets. In: Neural Networks. 4International Conference onNeco, R.P., Forcada, M.L.: Asynchronous translations with recurrent neural nets. In: Neu- ral Networks,1997., International Conference on. vol. 4, pp. 2535-2540 vol.4 (Jun 1997). https://doi.org/10.1109/ICNN.1997.614693
Neural machine translation and sequence-to-sequence models: A tutorial. G Neubig, arXiv:1703.01619arXiv preprintNeubig, G.: Neural machine translation and sequence-to-sequence models: A tutorial. arXiv preprint arXiv:1703.01619 (2017)
Knowledge-based machine translation. S Nirenburg, Machine Translation. 41Nirenburg, S.: Knowledge-based machine translation. Machine Translation 4(1), 5-24 (1989), http://www.jstor.org/stable/40008396
Minimum error rate training for statistical machine translation. F J Och, Proceedings of ACL. ACLOch, F.J.: Minimum error rate training for statistical machine translation. In: Proceedings of ACL (2003)
Improved statistical alignment models. F J Och, H Ney, Proceedings of the 38th Annual Meeting on Association for Computational Linguistics. the 38th Annual Meeting on Association for Computational LinguisticsAssociation for Computational LinguisticsOch, F.J., Ney, H.: Improved statistical alignment models. In: Proceedings of the 38th An- nual Meeting on Association for Computational Linguistics. pp. 440-447. Association for Computational Linguistics (2000)
Bleu: a method for automatic evaluation of machine translation. K Papineni, S Roukos, T Ward, W Jing Zhu, Papineni, K., Roukos, S., Ward, T., jing Zhu, W.: Bleu: a method for automatic evaluation of machine translation. pp. 311-318 (2002)
Generating alignments using target foresight in attention-based neural machine translation. J T Peter, A Nix, H Ney, The Prague Bulletin of Mathematical Linguistics. 1081Peter, J.T., Nix, A., Ney, H.: Generating alignments using target foresight in attention-based neural machine translation. The Prague Bulletin of Mathematical Linguistics 108(1), 27-36 (2017)
Continuous space language models for statistical machine translation. H Schwenk, D Dchelotte, J L Gauvain, Proceedings of the COLING/ACL on Main Conference Poster Sessions. pp. 723-730. COLING-ACL '06, Association for Computational Linguistics. the COLING/ACL on Main Conference Poster Sessions. pp. 723-730. COLING-ACL '06, Association for Computational LinguisticsStroudsburg, PA, USASchwenk, H., Dchelotte, D., Gauvain, J.L.: Continuous space language models for sta- tistical machine translation. In: Proceedings of the COLING/ACL on Main Conference Poster Sessions. pp. 723-730. COLING-ACL '06, Association for Computational Lin- guistics, Stroudsburg, PA, USA (2006), http://dl.acm.org/citation.cfm?id= 1273073.1273166
Neural machine translation of rare words with subword units. R Sennrich, B Haddow, A Birch, CoRR abs/1508.07909Sennrich, R., Haddow, B., Birch, A.: Neural machine translation of rare words with subword units. CoRR abs/1508.07909 (2015), http://arxiv.org/abs/1508.07909
Radical embedding: Delving deeper to chinese radicals. X Shi, J Zhai, X Yang, Z Xie, C Liu, 10.3115/v1/P15-2098Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing2Association for Computational LinguisticsShi, X., Zhai, J., Yang, X., Xie, Z., Liu, C.: Radical embedding: Delving deeper to chi- nese radicals. In: Proceedings of the 53rd Annual Meeting of the Association for Com- putational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). pp. 594-598. Association for Computational Lin- guistics (2015). https://doi.org/10.3115/v1/P15-2098, http://aclanthology.coli. uni-saarland.de/pdf/P/P15/P15-2098.pdf
Beer: Better evaluation as ranking. M Stanojević, K Sima'an, Proceedings of the Ninth Workshop on Statistical Machine Translation. the Ninth Workshop on Statistical Machine TranslationBaltimore, Maryland, USAAssociation for Computational LinguisticsStanojević, M., Sima'an, K.: Beer: Better evaluation as ranking. In: Proceedings of the Ninth Workshop on Statistical Machine Translation. pp. 414-419. Association for Computa- tional Linguistics, Baltimore, Maryland, USA (June 2014), http://www.aclweb.org/ anthology/W/W14/W14-3354
Sequence to sequence learning with neural networks. I Sutskever, O Vinyals, Q V Le, Advances in neural information processing systems. Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: Advances in neural information processing systems. pp. 3104-3112 (2014)
Context gates for neural machine translation. Z Tu, Y Liu, Z Lu, X Liu, H Li, CoRR abs/1608.06043Tu, Z., Liu, Y., Lu, Z., Liu, X., Li, H.: Context gates for neural machine translation. CoRR abs/1608.06043 (2016), http://arxiv.org/abs/1608.06043
Coverage-based neural machine translation. Z Tu, Z Lu, Y Liu, X Liu, H Li, CoRR abs/1601.04811Tu, Z., Lu, Z., Liu, Y., Liu, X., Li, H.: Coverage-based neural machine translation. CoRR abs/1601.04811 (2016), http://arxiv.org/abs/1601.04811
Character: Translation edit rate on character level. W Wang, J T Peter, H Rosendahl, H Ney, WMTWang, W., Peter, J.T., Rosendahl, H., Ney, H.: Character: Translation edit rate on character level. In: WMT. pp. 505-510 (2016)
Word, subword or character? an empirical study of granularity in chinese-english nmt. Y Wang, L Zhou, J Zhang, C Zong, Machine Translation. Wong, D.F., Xiong, D.Singapore; SingaporeSpringerWang, Y., Zhou, L., Zhang, J., Zong, C.: Word, subword or character? an empirical study of granularity in chinese-english nmt. In: Wong, D.F., Xiong, D. (eds.) Machine Translation. pp. 30-42. Springer Singapore, Singapore (2017)
W Weaver, Translation. Machine Translation of Languages: Fourteen Essays. Weaver, W.: Translation. Machine Translation of Languages: Fourteen Essays (1955)
Niutrans: An open source toolkit for phrase-based and syntax-based machine translation. T Xiao, J Zhu, H Zhang, Q Li, Proceedings of the ACL 2012 System Demonstrations. the ACL 2012 System DemonstrationsStroudsburg, PA, USAAssociation for Computational Linguistics12Xiao, T., Zhu, J., Zhang, H., Li, Q.: Niutrans: An open source toolkit for phrase-based and syntax-based machine translation. In: Proceedings of the ACL 2012 System Demonstra- tions. pp. 19-24. ACL '12, Association for Computational Linguistics, Stroudsburg, PA, USA (2012), http://dl.acm.org/citation.cfm?id=2390470.2390474
M D Zeiler, arXiv:1212.5701Adadelta: an adaptive learning rate method. arXiv preprintZeiler, M.D.: Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701 (2012)
Improving character-level japanese-chinese neural machine translation with radicals as an additional input feature. J Zhang, T Matsumoto, 10.1109/IALP.2017.83005722017 International Conference on Asian Language Processing (IALP). Zhang, J., Matsumoto, T.: Improving character-level japanese-chinese neural ma- chine translation with radicals as an additional input feature. In: 2017 Interna- tional Conference on Asian Language Processing (IALP). pp. 172-175 (Dec 2017). https://doi.org/10.1109/IALP.2017.8300572
| [] |
[
"Learning to Learn End-to-End Goal-Oriented Dialog From Related Dialog Tasks",
"Learning to Learn End-to-End Goal-Oriented Dialog From Related Dialog Tasks"
] | [
"Janarthanan Rajendran \nUniversity of Michigan\nUniversity of Michigan\nUniversity of Michigan\n\n",
"Jonathan K Kummerfeld \nUniversity of Michigan\nUniversity of Michigan\nUniversity of Michigan\n\n",
"Satinder Singh \nUniversity of Michigan\nUniversity of Michigan\nUniversity of Michigan\n\n"
] | [
"University of Michigan\nUniversity of Michigan\nUniversity of Michigan\n",
"University of Michigan\nUniversity of Michigan\nUniversity of Michigan\n",
"University of Michigan\nUniversity of Michigan\nUniversity of Michigan\n"
] | [] | For each goal-oriented dialog task of interest, large amounts of data need to be collected for end-to-end learning of a neural dialog system. Collecting that data is a costly and time-consuming process. Instead, we show that we can use only a small amount of data, supplemented with data from a related dialog task. Naively learning from related data fails to improve performance as the related data can be inconsistent with the target task. We describe a meta-learning based method that selectively learns from the related dialog task data. Our approach leads to significant accuracy improvements in an example dialog task. | 10.18653/v1/2021.nlp4convai-1.16 | [
"https://arxiv.org/pdf/2110.15724v1.pdf"
] | 240,288,918 | 2110.15724 | 9bd0f2046c633807bca47aa314bf9823acc28044 |
Learning to Learn End-to-End Goal-Oriented Dialog From Related Dialog Tasks
Janarthanan Rajendran
University of Michigan
University of Michigan
University of Michigan
Jonathan K Kummerfeld
University of Michigan
University of Michigan
University of Michigan
Satinder Singh
University of Michigan
University of Michigan
University of Michigan
Learning to Learn End-to-End Goal-Oriented Dialog From Related Dialog Tasks
For each goal-oriented dialog task of interest, large amounts of data need to be collected for end-to-end learning of a neural dialog system. Collecting that data is a costly and time-consuming process. Instead, we show that we can use only a small amount of data, supplemented with data from a related dialog task. Naively learning from related data fails to improve performance as the related data can be inconsistent with the target task. We describe a meta-learning based method that selectively learns from the related dialog task data. Our approach leads to significant accuracy improvements in an example dialog task.
Introduction
One key benefit of goal-oriented dialog systems that are trained end-to-end is that they only require examples of dialog for training. Avoiding the modular structure of pipeline methods removes the human effort involved in creating intermediate annotations for data to train the modules. The endto-end structure also enables automatic adaptation of the system, with different components of the model changing together. This flexibility is particularly valuable when applying the system to a new domain.
However, end-to-end systems currently require significantly more data, increasing the human effort in data collection. The most common method for training is Supervised Learning (SL) using a dataset of dialogs of human agents performing the task of interest (Bordes et al., 2017;Eric and Manning, 2017;Wen et al., 2017). To produce an effective model, the dataset needs to be large, high quality, and in the target domain. That means for each new dialog task of interest large amounts of new data has to be collected. The time and money involved in that collection process limits the potential application of these systems.
We propose a way to reduce this cost by selectively learning from data from related dialog tasks: tasks that have parts/subtasks that are similar to the new task of interest. Specifically, we describe a method for learning which related task examples to learn from. Our approach uses meta-gradients to automatically meta-learn a scalar weight ∈ (0, 1) for each of the related task data points, such that learning from the weighted related task data points improves the performance of the dialog system on the new task of interest. These weights are dynamically adjusted over the course of training in order to learn most effectively. We still learn from data for the target task, but do not need as much to achieve the same results.
To demonstrate this idea, we considered two experiments. First, we confirmed that the method can work in an ideal setting. We constructed a classification task where the related task data is actually from the same task, but with the incorrect label for 75% of examples, and there is an input feature that indicates whether the label is correct or not. Our approach is able to learn to ignore the misleading data, achieving close to the performance of a model trained only on the correct examples.
Second, we evaluated the approach on a personalized restaurant reservation task with limited training data. Here, the related task is also restaurant reservation, but without personalization and with additional types of interactions. We compare our approach to several standard alternatives, including multi-task learning and using the related data for pre-training only. Our approach is consistently the best, indicating its potential to effectively learn which parts of the related data to learn from and which to ignore. Successfully learning from available related task data can allow us to build end-to-end goal-oriented dialog systems for new tasks faster with reduced cost and human effort in data collection.
Related Work
The large cost of collecting data for every new dialog task has been widely acknowledged, motivating a range of efforts. One approach is to transfer knowledge from other data to cope with limited availability of training dialog data for the new task of interest. For example Zhao et al. (2020) split the dialog model such that most of the model can be learned using ungrounded dialogs and plain text. Only a small part of the dialog model with a small number of parameters is trained with the dialog data available for the task of interest. In contrast, we explore how to learn from related grounded dialogs, and also without any specific constraints on the structure of the end-to-end dialog system architecture. Wen et al. (2016) pre-train the model with data automatically generated from different tasks and Lin et al. (2020) use pre-trained language models as initialization and then fine-tune the dialog model with data from the task of interest. These ideas are complementary to our approach as we make no assumptions about how the model was pre-trained.
Recently, there has been work that explored ways to automatically learn certain aspects of the transfer process using meta-learning. Xu et al. (2020) look at the problem of learning a joint dialog policy using Reinforcement Learning (RL) in a multi-domain setting which can then be transferred to a new domain. They decomposed the state and action representation into features that correspond to low level components that are shared across domains, facilitating cross-domain transfer. They also proposed a Model Agnostic Meta Learning (MAML Finn et al., 2017) based extension that learns to adapt faster to a new domain. Madotto et al. (2019), Mi et al. (2019), Qian and Yu (2019) and Dai et al. (2020) also look at multi-domain settings. They use MAML based meta-learning methods to learn an initialization that adapts fast with few dialog samples from a new task.
All of the papers above consider settings where there is access to a large set of training tasks. The meta-learning systems learn to transfer knowledge to a new test task by learning how to do transfer on different training tasks. While each task only has a limited amount of dialog data, they need a lot of tasks during training. In contrast, we look at a setting where the task from which we want to transfer knowledge from and the task that we want to transfer knowledge to are the only tasks that we have access to at training time. Any learning about how to transfer knowledge has to happen from just these two tasks. None of the above methods are applicable to this setting.
Learning a task while simultaneously meta-learning certain aspects of the learning process has recently been done successfully in some SL and RL settings. Wichrowska et al. (2017) use meta-learning to adapt hyperparameters such as the learning rate, and even learn entire optimizers, during training for SL tasks such as image classification. Given a single task, Zheng et al. (2018) successfully meta-learn intrinsic rewards that help the agent perform well on that task. Xu et al. (2018) use meta-gradients to learn RL training hyperparameters such as the discount factor and bootstrapping parameters. The meta-gradient technique used in our proposed method is closely related to Rajendran et al. (2020). They learn intrinsic rewards for an RL agent acting in a given domain, such that learning with those intrinsic rewards improves the performance of the agent on the task of interest in a different domain.
While we use a meta-learning based method for learning the weights for the related task data points in this work, there are other techniques in the machine learning literature, especially in the computer vision literature, that can potentially be used to learn the weights. A large section of these recent techniques are based on learning an adversarially trained discriminator for estimating the weights of related image classification task data points (Zhao et al., 2018;Cao et al., 2018;Sankaranarayanan et al., 2018;Wang et al., 2019). Jiang and Zhai (2007) use a combination of several domain adaptation heuristics to assign weights and evaluate on NLP tasks. Moon and Carbonell (2017) cluster the related task data points and learn attention weights for the clusters. An interesting future direction would be to study which weighting methods are best suited for end-to-end learning of neural goaloriented dialog systems using related tasks and under what conditions.
Proposed Method
Intuition
Consider a scenario where we are building a restaurant reservation dialog system and have data collected in the past for a hotel reservation dialog system. The hotel reservation data could have parts that might be useful to learn from, e.g., greeting and obtaining a user's name/contact information. The data could also have parts that are inconsistent with the needs of the restaurant reservation system, e.g., hotel reservation might require the dialog system to ask for the user's duration of stay while the restaurant reservation might require the dialog system to ask for the particular day and time of table reservation. There could also be a lot of irrelevant information in the data that would be best to ignore for a learning system with limited capacity, e.g., answering questions about fitness facilities and the pool in a hotel.
Another type of scenario is when the new task is a modified version of a previous task. In this case, the previous task is an excellent source of related data, but will have critical differences. For example, the new system may need to ask users for their email address rather than a mailing address. To use the data effectively, the model needs to learn what to use and what to ignore.
Data from the related tasks could also provide rich information about different user behaviors and about the natural language used by both users and agents in general. Tapping into and learning from the related tasks' data that is already available can potentially allow us to build dialog systems with improved performance on the new task of interest with only a limited amount of data collected, saving us time, effort and money in data collection.
Algorithm
Let T P be the new task of interest (primary task) for which we have collected a limited amount of training data. Let T R be the related task for which we have relatively large amounts of data already available. We are interested in building a dialog system for the task T P . The data points are pairs of the form context (c) and dialog system's next utterance (a), where the context has the history of the dialog so far, ending with the most recent user utterance. We learn a dialog model M parameterized by θ that takes as input the context c and predicts the next dialog system utterance a.
Each iteration of training is comprised of the following three major steps. 1) The dialog model is updated using a batch of data points from the primary task.
2) The dialog model is updated using a batch of data points from the related task, where each related task data point's training loss is weighted between (0, 1). 3) The related task data points' weights are updated. These three steps are repeated at each training iteration. We describe each step in detail below.
1) Updating the dialog model using primary task data points. We sample a batch of data points {…, (c_i^P, a_i^P), …} from the primary task T_P. Let L(M_θ(c_i), a_i) represent the supervised learning prediction loss between M_θ(c_i), the next utterance predicted by the dialog model, and a_i, the ground truth next utterance. Model M_θ is updated using the supervised learning prediction loss of the primary task data points, L_P, as shown below:

$$L_P(\theta) = \sum_i L\big(M_\theta(c_i^P), a_i^P\big) \qquad (1)$$

$$\theta \leftarrow \theta - \alpha \nabla_\theta L_P(\theta) \qquad (2)$$

where α is the learning rate and ∇_θ L_P(θ) is the gradient of the loss L_P(θ) with respect to θ.
2) Updating the dialog model using weighted related task data points. We sample a batch of data points {…, (c_i^R, a_i^R), …} from the related task T_R. The supervised learning prediction loss L(M_θ(c_i), a_i) for each data point in the batch is weighted by a scalar weight w_i ∈ (0, 1) corresponding to that data point. The scalar weight for each related task data point is obtained as a function of that particular data point. Let P, parameterized by η, be the module that outputs the weights. The weight for a related task data point is calculated as shown below:

$$w_i(\eta) = \sigma\big(P_\eta(c_i^R, a_i^R)\big) \qquad (3)$$

where σ is a sigmoid function used to normalize the output of P to (0, 1). Model M_θ is updated using the weighted prediction loss of the related task data points, L_R, as shown below:
$$L_R(\theta, \eta) = \sum_i w_i(\eta)\, L\big(M_\theta(c_i^R), a_i^R\big) \qquad (4)$$

$$\theta \leftarrow \theta - \beta \nabla_\theta L_R(\theta, \eta) \qquad (5)$$

where β is the learning rate and ∇_θ L_R(θ, η) represents the gradient of the loss L_R(θ, η) with respect to θ. The weights w_i allow for selectively using data points from the related task data for updating the dialog model.
3) Updating the related task data points' weights. In this key step of our proposed method, we update the related task data points' weights w_i(η). The update increases the weights of related task data points that improve the dialog model's performance on the primary task when learned from, and decreases the weights of those that degrade the dialog model's performance on the primary task. We first sample a batch of related task data points and simulate how the model parameters θ would change if we updated the model with this batch of related task data points under the current assignment of weights provided by P_η:

$$L_R(\theta, \eta) = \sum_i w_i(\eta)\, L\big(M_\theta(c_i^R), a_i^R\big) \qquad (6)$$

$$\theta' = \theta - \beta \nabla_\theta L_R(\theta, \eta) \qquad (7)$$
We then evaluate how the updated model M_θ′ performs on the primary task to decide how to change P_η, which assigned the weights to the related task data points that resulted in M_θ′:

$$L_P(\theta') = \sum_i L\big(M_{\theta'}(c_i^P), a_i^P\big) \qquad (8)$$
where L_P(θ′) is the supervised learning loss of the updated model M_θ′ on a new batch of data points sampled from the primary task. The parameters of P_η are updated as shown below:

$$\eta \leftarrow \eta - \gamma \nabla_\eta L_P(\theta') \qquad (9)$$

$$\eta \leftarrow \eta - \gamma\, \nabla_\eta \theta'\, \nabla_{\theta'} L_P(\theta') \qquad (10)$$
where γ is the learning rate and ∇_η L_P(θ′) represents the gradient of the loss L_P(θ′) with respect to η. The gradient ∇_η L_P(θ′) is split into the product of two gradients, ∇_η θ′ and ∇_{θ′} L_P(θ′), using the chain rule. ∇_η θ′ can be calculated using meta-gradients as follows:

$$\nabla_\eta \theta' = \nabla_\eta \big(\theta - \beta \nabla_\theta L_R(\theta, \eta)\big) \qquad (11)$$

$$= -\nabla_\eta\, \beta \nabla_\theta L_R(\theta, \eta) \qquad (12)$$

$$= -\nabla_\eta\, \beta \nabla_\theta \sum_i w_i(\eta)\, L\big(M_\theta(c_i^R), a_i^R\big) \qquad (13)$$

$$= -\beta \sum_i \nabla_\eta w_i(\eta)\, \nabla_\theta L\big(M_\theta(c_i^R), a_i^R\big) \qquad (14)$$
Discussion
The proposed method learns a dialog model from the primary task data points and also, selectively, from the related task data points. The proposed method meta-learns, at different points in the training of the dialog model, which related task data points to learn from (and to what degree, with a weight in (0, 1)). The weight assigned to a particular related task data point can therefore vary across training. For simplicity, we described our proposed method with one update each using primary task data points, using weighted related task data points, and of the related task data points' weights in each training iteration. But in practice, we make multiple updates to the related task data points' weights (the η parameters) within each iteration. Also, for each η update we simulate how the model parameters θ change over multiple gradient updates (instead of just one as described in equations 6 and 7). This allows for a better estimate of how the updates using related task data points with the current parameters η affect the updated dialog model's performance on the primary task. Note that our proposed method is agnostic to the exact architecture of the model M and weight module P. Also, while we focus on settings with a single related task, the proposed method naturally extends to settings with more than one related task.
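To make the three steps above concrete, here is a minimal sketch of one training iteration written with PyTorch autograd. It is an illustration under simplifying assumptions rather than the authors' implementation: the dialog model and the weight module are reduced to plain linear maps, the batch sampler returns random toy data, and all dimensions and learning rates are made-up values; only the structure of the updates (equations 1-10) follows the text.

```python
import torch
import torch.nn.functional as F

d_in, n_classes = 20, 5                 # assumed toy sizes (stand-ins for contexts/utterances)
alpha, beta, gamma = 0.1, 0.1, 0.01     # assumed learning rates

theta = torch.zeros(d_in, n_classes, requires_grad=True)    # dialog model M_theta (a linear map)
eta = torch.zeros(d_in + n_classes, 1, requires_grad=True)  # weight module P_eta (a linear scorer)

def model_loss(params, x, y):
    # supervised prediction loss L(M_theta(c), a), here a cross-entropy over classes
    return F.cross_entropy(x @ params, y)

def example_weights(eta_params, x, y):
    # w_i(eta) = sigmoid(P_eta(c_i, a_i)); the (context, utterance) pair is a feature/label pair here
    feats = torch.cat([x, F.one_hot(y, n_classes).float()], dim=1)
    return torch.sigmoid(feats @ eta_params).squeeze(1)

def sample_batch(n=32):
    # hypothetical stand-in for sampling (context, next-utterance) pairs from a task
    return torch.randn(n, d_in), torch.randint(0, n_classes, (n,))

for it in range(100):
    # 1) update theta on a primary-task batch (equations 1-2)
    xp, yp = sample_batch()
    g = torch.autograd.grad(model_loss(theta, xp, yp), theta)[0]
    with torch.no_grad():
        theta -= alpha * g

    # 2) update theta on a weighted related-task batch (equations 3-5)
    xr, yr = sample_batch()
    w = example_weights(eta, xr, yr).detach()   # weights act as constants for this update
    loss_r = (w * F.cross_entropy(xr @ theta, yr, reduction="none")).sum()
    g = torch.autograd.grad(loss_r, theta)[0]
    with torch.no_grad():
        theta -= beta * g

    # 3) meta-update eta (equations 6-10): simulate a weighted related-task step while
    #    keeping the graph, then backprop the primary-task loss of the simulated
    #    parameters into eta
    xr, yr = sample_batch()
    w = example_weights(eta, xr, yr)            # now differentiable w.r.t. eta
    loss_r = (w * F.cross_entropy(xr @ theta, yr, reduction="none")).sum()
    g = torch.autograd.grad(loss_r, theta, create_graph=True)[0]
    theta_prime = theta - beta * g              # depends on eta through the weights
    xp, yp = sample_batch()
    meta_loss = model_loss(theta_prime, xp, yp)
    g_eta = torch.autograd.grad(meta_loss, eta)[0]
    with torch.no_grad():
        eta -= gamma * g_eta
```

In practice, as noted above, the meta-update of η would be repeated several times per iteration and the simulated update would span multiple gradient steps rather than one.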
Experiments and Results
We first illustrate our proposed method on a simple image classification task with a hand designed related task that allows us to verify if the proposed method can learn meaningful weights. We then evaluate our proposed method on the task of personalized restaurant reservation.
MNIST Image Classification
This experiment illustrates in a very simple setting how our proposed method works. We set up the experiment with a clear indication of which related task data points should have high weights and which should have low weights. The primary task is the classification of hand-written digits from the MNIST dataset (LeCun et al., 2010). The related task is created by taking the primary task data and changing the label to an incorrect one for 75% of the training data. This means that of the 50,000 related task data points, 25% (12,500 data points) are useful for the primary task while 75% (37,500 data points) are not. We also add an input feature to every related task data point that indicates whether that data point's label is correct or not.
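As a concrete illustration of this construction, the following NumPy sketch corrupts 75% of the labels and appends a correct/incorrect indicator feature for the weight-generation module. The function name, the one-hot encodings and the random seed are assumptions; only the corruption rate and the indicator idea come from the description above.

```python
import numpy as np

def make_related_task(images, labels, corrupt_frac=0.75, n_classes=10, seed=0):
    """Build the corrupted 'related task': wrong labels for corrupt_frac of the data,
    plus an explicit feature saying whether each label is correct."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    corrupted = rng.random(n) < corrupt_frac
    noisy = labels.copy()
    # replace each corrupted label with a different, randomly chosen class
    offsets = rng.integers(1, n_classes, size=n)
    noisy[corrupted] = (labels[corrupted] + offsets[corrupted]) % n_classes
    # two-dimensional one-hot indicator (matching the 2 indicator inputs in Appendix A):
    # [1, 0] = incorrect label, [0, 1] = correct label
    indicator = np.eye(2, dtype=np.float32)[(~corrupted).astype(int)]
    # the weight-generation module sees [flattened image, one-hot label, indicator]
    weight_net_input = np.concatenate(
        [images.reshape(n, -1).astype(np.float32),
         np.eye(n_classes, dtype=np.float32)[noisy],
         indicator], axis=1)
    return weight_net_input, noisy, corrupted
```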
Figure 1: Distributions of learned weights for data points at different points during training on the image classification task. The histograms in red (plain) and blue (striped) correspond to the related task data points with incorrect and correct labels respectively. Our method successfully uses the indicator feature to assign weights that ignore incorrect points and learn from correct points.

In this experiment, to focus on the effect of our learned related task data point weights on performance, we perform only the last two steps (steps 2 and 3) of the algorithm. In other words, no updates are made to the model using the primary task data; we only update the model using the weighted related task data and then update the weights of the related task data points. However, primary task data is still used in the calculation of the meta-gradients for updating the weights of the related task data points. In order for the classification model to perform well on the primary task, the data points with incorrect labels in the related task data need to be assigned weights lower than the data points that have the correct label. We use logistic regression as our classification model (M_θ) and a perceptron with a sigmoid nonlinearity at the output for the weight generation module (P_η). The weight generation module takes as input the image, its label and a binary feature that indicates if that label is correct or not for that image, and produces a scalar weight between 0 and 1 as the output. Refer to Appendix A for more details of the architecture and training.

Results

Figure 1 shows the distribution of data points over the range of weights. The histograms in red (plain) and blue (striped) correspond to the related task data points with incorrect and correct labels respectively. Refer to Figures 3 and 4 in Appendix A for visualization of weights at other intermediate stages of training. Our proposed method with the meta-gradient based update to the weight generation module learns to give high weights to the data points with correct labels and low weights to the data points with incorrect labels. We observed similar weight assignments over multiple runs with different random seeds. In the last epoch of training, the average weight given to the related task data points with correct labels is 0.9747 ± 0.0004 and the average weight given to those with incorrect labels is 0.0074 ± 0.0002 (mean and standard deviation over 5 runs). Note that the method starts with random weights and updates the weights during training. The average weight across all the training epochs given to the related task data points with correct labels is 0.8558 ± 0.0020, and to the related task data points with incorrect labels is 0.1508 ± 0.0041.

Table 1 shows the performance of the classification model with different types of weighting for the related task data points. We compare with:

1 for All: Use all related task data equally.
Random-Fixed: Assign a random weight to each related task data point at the start of the training.
Random-Changing: Each time a related task data point is sampled, use a new random weight.
Oracle: From the start, use a weight of 1 for correct data points and 0 for incorrect data points.
From the results, we observe that selectively learning from the related task data points with learned weights (row 4) performs much better than the methods that use all the data points uniformly (row 1) or assign random weights to the data points (rows 2 and 3). The proposed method's performance is very close to the oracle method that has access to the perfect weights for the related task data points from the start (row 5) and throughout training. We attribute this small gap mainly to the lingering effects of incorrect weights used for learning from the related task data points in the early stages of training in the proposed method. The visualization of the weights and the resulting performance indicate that the proposed method can indeed learn suitable weights using the meta-gradient update in this setting and lead to a performance very close to the best performance possible with perfect weights.
Personalized Restaurant Reservation
Personalizing dialog system responses based on the user that the dialog system is interacting with will be a key step in seamless integration of dialog systems into our everyday lives. Recognizing this, Joshi et al. (2017) proposed the first open dataset for training end-to-end dialog systems where the dialog system responses are based on the profile of the user. Their dataset is set in the domain of restaurant reservation, built as an extension of the bAbI dialog tasks from Bordes et al. (2017).
The bAbI dialog tasks are a testbed to evaluate the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. The dataset is generated by a restaurant reservation simulation where the final goal is to book a table. The simulator uses a Knowledge Base (KB) which contains information about restaurants. There are five tasks: Task 1 (Issuing API calls; by collecting relevant information from the user), Task 2 (Updating API calls; based on the information that the user wants to change), Task 3 (Displaying options; from the restaurants retrieved by the API call, suggesting restaurants in the order of their ratings), Task 4 (Providing extra information; if asked, providing the directions and/or contact information of the restaurant selected by the user) and Task 5 (Conducting full dialogs; combining tasks 1,2,3 and 4). In Joshi et al. (2017)'s extension of the bAbI dialog tasks (referred to as personalized-bAbI from here on), in addition to the goal of the bAbI dialog tasks, the dialog system should also use the additional user profile information provided, to personalize the response styles and reasoning over the Knowledge Base (KB). The user profile consists of the user's age (young, middle-aged, elderly), gender (male, female), dietary preference (vegetarian, non-vegetarian) and favorite food item (Fish and Chips, Biryani, etc). The style of the dialog system's response depends on the age and the gender of the user. In Task 3, from the restaurants retrieved through the API call, the dialog system now has to sort and suggest restaurants based not just on the restaurant's rating, but also based on the user's dietary preference and favorite food item. For this, the personalized-bAbI dialog task KB has additional information about restaurant type (vegetarian or non-vegetarian) and speciality (Fish and Chips, Biryani, etc). Figure 5 (Right) in Appendix B shows an example of Task 3 from the personalized-bAbI dialog tasks.
We use Task 3 from the personalized-bAbI dialog tasks as our primary task, and Task 5 of the bAbI dialog tasks as the related task. 100% (1000 dialogs) of the training dialogs of bAbI dialog Task 5 are available as the related task data. For the primary task, we simulate limited data availability by decreasing the number of training and validation dialogs. We look at three data settings: 5% (50 dialogs), 10% (100 dialogs) and 15% (150 dialogs) of the primary task training and validation dialogs. For testing, we use 100% (1000 dialogs) of the test dialogs from the personalized-bAbI dataset.
Table 2: Test results, % per-turn retrieval accuracy (mean and standard deviation over 5 runs) in predicting the next dialog system utterance.

                     5%            10%           15%
Proposed Method      57.7 ± 1.6    64.6 ± 0.8    67.1 ± 0.6
Random-Fixed         50.7 ± 2.0    58.7 ± 0.8    61.2 ± 1.0
Random-Changing      52.3 ± 0.9    58.7 ± 1.0    59.8 ± 0.8

Let us look at the similarities and differences between the related task and the primary task. The related task has parts in its dialog, such as the greetings and getting information from the user, that are semantically similar to those of the primary task. They are not exactly the same due to the differences in response style (the style differs based on the user profile in the primary task). Due to the presence of different response styles, the vocabulary of the primary task is also much larger and different from that of the related task. The related task also has some parts that involve different output choices, such as the ordering of the restaurants to suggest to the user. In the primary task the ordering should be based on the restaurant's rating, the user's dietary preference and favorite food, while in the related task it is based on only the restaurant's rating. There are also parts of the related task that are not relevant to the primary task. These include the parts corresponding to Task 2 (updating API calls) and
Task 4 (providing extra information such as the restaurant's direction or contact information) of the related task. As noted earlier, our proposed method is agnostic to the dialog model architecture. In our experiments we use the same dialog model architecture as used in Joshi et al. (2017), end-to-end memory networks (Sukhbaatar et al., 2015), for both the dialog model M_θ and the weight generation module P_η. In the dialog model, the internal dialog state generated after attending over the dialog history is used to select the candidate response from the list of candidates. For the weight generation module, the internal dialog state generated after attending over the dialog history is used to generate the scalar (0, 1) weight. Refer to Appendix B for more details of the architecture and training.

Results

Table 2 shows the performance of our proposed method along with several other methods:
Primary: Trained using only the primary task data.
Primary + Related Pre-Training: Pre-trained with related task data and then fine-tuned with primary task data.
Primary + Related: Trained using related task and primary task data points together.
Primary + Auxiliary Related (Multi-Task):
The dialog model has two prediction heads, one for the primary task, and one for the related task, with a shared end-to-end memory network body that generates the internal dialog state used for selecting the candidate response. This is similar to the conventional way of performing multi-task learning. This can also be interpreted as using related task prediction as an auxiliary task.
Proposed Method: Primary + Weighted Auxiliary Related : Identical to the previous approach, except that the prediction loss of the related task is weighted by the weights learned by our proposed method. We also show results for two variations with random weighting methods. Table 2 shows that in all the three data settings the conventional methods of using the related task data (rows 2,3,4) lead to a reduction in performance (negative transfer) compared to not using the related task data points at all (row 1: Primary).
The closest conventional method is row four, learning from both the primary task data and related task data simultaneously with a shared network body and separate prediction heads. The worst result is the second row, pre-training with the related task data points. We hypothesise that, due to the differences (vocabulary, contradictory and irrelevant sub-tasks) between the primary and related task, starting from the pre-trained network weights obtained using related task data leads to a worse local minimum during fine-tuning compared to starting from randomly initialized network weights and training with primary task data alone.
The best result is our proposed method (row 5: Primary + Weighted Auxiliary Related), which weights the auxiliary related task update between (0, 1) and selectively learns from them. Our method scores ≈ 6.5% higher than the standard multi-task learning approach (row 4). We avoid negative transfer, improving over the first row by 3-6% depending on the amount of primary data available. The improvement is larger as more primary task data is available, indicating that the related task data (fixed in size) can be utilised better with more primary task data. While selectively learning can always help with avoiding negative transfer by lowering the weights for data points that lead to negative transfer, the improvement in performance (compared to not using related task data) possible by using the related task data points will depend on the relationship between the primary task and the related task.
To verify that our learned weights are meaningful, we also consider random weights. The last three rows compare the performance with different types of weighting for the related task data points. It is clear that random weighting does not lead to the improvement in performance that we observe when we learn the weights using our proposed method. Figure 2 shows histograms of the data points based on the assigned weights. Unlike the simple MNIST image classification (Section 4.1), here we do not know which related task data points should have high weights and which data points should not. The optimal weights for the related task data points at any given stage of training can be different from the optimal weights for them at a different stage of training, i.e., the optimal weights for the related task data points are non-stationary, as they depend on the current state (parameters of the dialog model) of the dialog system. For example, some data points of the related task which are quite different from the primary task might still be useful to learn from at the early stages of training to help with learning better representations for the vocabulary, and some data points that the dialog system has already learned from might get lower weights at later stages of learning so as to avoid overfitting and thereby helping with the prediction of other data points.
Conclusion
End-to-end learning of neural goal-oriented dialog systems requires large amounts of data for training. Collecting data is a costly and time consuming process. In this work we showed, on an example dialog task, that we can utilise a related task's data to improve the performance on a new task of interest for which only a limited amount of data is available. Our proposed method uses meta-learning to automatically learn which of the related task data points to selectively learn from. An important future work is to evaluate/extend the proposed method on more challenging and complex dialog datasets/tasks. Data useful for a dialog task of interest (related data) could be present in different formats. The related data could include, for example, natural language instructions on how to perform the task of interest, or a description of how the new dialog task of interest is different from a related dialog task. An interesting future direction is to investigate methods to successfully utilise such related data.

References

Xueliang Zhao, Wei Wu, Chongyang Tao, Can Xu, Dongyan Zhao, and Rui Yan. 2020. Low-resource knowledge-grounded dialogue generation. In International Conference on Learning Representations (ICLR).
Zeyu Zheng, Junhyuk Oh, and Satinder Singh. 2018. On learning intrinsic rewards for policy gradient methods. In Advances in Neural Information Processing Systems (NeurIPS).
A MNIST Image Classification
A.1 Architecture and Training Details
The MNIST dataset contains 60,000 training images and 10,000 testing images. Among the training images, we use 50,000 for training and 10,000 for validation. The primary task is the classification of hand-written digits from the MNIST dataset (LeCun et al., 2010). The related task is created by taking the primary task data and changing the label to an incorrect one for 75% of the training data. This means that of the 50,000 related task data points, 25% (12,500 data points) are useful for the primary task while 75% (37,500 data points) are not. The MNIST hand-written digit images are resized to 28 x 28 images and the pixel values are normalized to [0,1]. The images are flattened to a 1-D array of 784 features (28 x 28). We use logistic regression as our classification model (M_θ) and a perceptron with a sigmoid non-linearity at the output for the weight generation module (P_η). The weight generation module takes as input the image, its label and the indicator that tells whether that label is correct or not for that image, and produces a scalar weight between (0, 1) as the output. The classification model has 7850 ((784 (image) x 10 (output label)) + 10 (bias)) parameters and the weight generation module has 797 (((784 (image) + 10 (label) + 2 (indicator)) x 1 (output weight)) + 1 (bias)) parameters. At each iteration of training, we make one update to the classification model using weighted related task data and one update to the related task data points' weights using meta-gradients. For each meta-gradient update we simulate how the model parameters θ change over one gradient update step using weighted related task data points. The training uses a batch size of 256, with the Adam optimizer (learning rate = 0.001, epsilon = 1e-8). The training is run for a maximum of 15000 iterations and the validation data is used for model selection for testing. These experiments were run on a CPU laptop with a 2.5 GHz Intel Core i5 processor and 8 GB RAM. It takes approximately 1 hour for each training run.
A.2 Visualization of learned related task data points' weights

Figure 3 (left) and Figure 4 (left) show the weights assigned to the different related task data points by our proposed method during different stages of training. The points in red (cross) and blue (dots) correspond to the weights of data points that have incorrect and correct labels respectively. Figure 3 (right) and Figure 4 (right) show the histograms of the number of data points in the different intervals of weights. The histograms in red (plain) and blue (striped) correspond to the related task data points with incorrect and correct labels respectively.
B Personalized Restaurant Reservation
B.1 Example dialogs

Figure 5 (Left) shows a simplified example of Task 5 from the bAbI dialog tasks, which is our related task. Figure 5 (Right) shows a simplified example of Task 3 from the personalized-bAbI dialog tasks, our primary task of interest.
B.2 Architecture and Training Details
In this experiment, we use Task 3 from the personalized-bAbI dialog tasks as our primary task, and Task 5 of the bAbI dialog tasks as the related task. 100% (1000 dialogs) of the training dialogs of bAbI dialog Task 5 are available as the related task data. For the primary task, we simulate limited data availability by decreasing the number of training and validation dialogs. We look at three data settings: 5% (50 dialogs), 10% (100 dialogs) and 15% (150 dialogs) of the primary task training and validation dialogs. For testing, we use 100% (1000 dialogs) of the test dialogs from the personalized-bAbI dataset. In this experiment we use the same dialog model architecture as used in Joshi et al. (2017), end-to-end memory networks (Sukhbaatar et al., 2015). The sentences in the dialog are encoded using a bag-of-words encoding. The encoded sentences, which are part of the dialog history, are stored in the memory and the query (last user utterance) embedding is used to attend over the memory (3 times) to get relevant information from the memory. The generated internal state is used to select the candidate response from the list of candidates. The entire network is trained end-to-end using the cross-entropy loss of the candidate selection. We use an end-to-end memory network for the module P_η (that produces the weights for each of the related task data points) as well. In this case, the internal state generated after attending over the memory is used to generate the scalar (0, 1) weight.
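For readers unfamiliar with the architecture, the snippet below is a rough PyTorch sketch of an end-to-end memory network of the kind described above: bag-of-words sentence encodings, multi-hop attention over the dialog history, and scoring of candidate responses. It simplifies Sukhbaatar et al. (2015), for instance by reusing a single embedding matrix to both address and read the memory; the embedding size (20) and the 3 hops follow the details above, while the class and variable names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemN2N(nn.Module):
    def __init__(self, vocab_size, emb_dim=20, hops=3):
        super().__init__()
        self.A = nn.Embedding(vocab_size, emb_dim, padding_idx=0)  # history/query encoder
        self.C = nn.Embedding(vocab_size, emb_dim, padding_idx=0)  # candidate-response encoder
        self.H = nn.Linear(emb_dim, emb_dim, bias=False)           # per-hop state transform
        self.hops = hops

    def bow(self, emb, ids):
        return emb(ids).sum(dim=-2)                                # bag-of-words sentence vector

    def forward(self, history, query, candidates):
        # history: (B, M, L) memory sentences; query: (B, L); candidates: (N, L)
        m = self.bow(self.A, history)                              # (B, M, D)
        u = self.bow(self.A, query)                                # (B, D) internal state
        for _ in range(self.hops):
            p = F.softmax(torch.einsum("bmd,bd->bm", m, u), dim=1) # attention over the memory
            o = torch.einsum("bm,bmd->bd", p, m)                   # read from the memory
            u = self.H(u + o)                                      # updated internal dialog state
        cand = self.bow(self.C, candidates)                        # (N, D)
        return u @ cand.t()                                        # scores over candidate responses
```

The weight-generation module P_η can reuse the same body, replacing the candidate scoring with a single linear layer that maps the internal state u to a scalar, followed by a sigmoid.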
At each iteration of training, we make one update to the dialog model using primary task data, one update to the dialog model using weighted related task data, and 10 updates to the related task data points' weights using meta-gradients. For each meta-gradient update we simulate how the model parameters θ change over 5 gradient updates on weighted related task data points. We use the same hyper-parameters used by Joshi et al. (2017) for both our end-to-end memory networks: embedding size = 20, batch size = 32, optimizer = Adam (learning rate = 0.001, epsilon = 1e-8).
The parameters of the dialog model are made up of a word embedding matrix of size |Vocab| x 20 that encodes the dialog history and the current user utterance, a matrix of size 20 x 20 that transforms the selected memory embeddings to generate the internal dialog state, and a word embedding matrix of size |Vocab| x 20 for encoding the candidate responses. The parameters of the weight generation module are made up of a word embedding matrix of size |Vocab| x 20 that encodes the dialog history, the current user utterance, and the next dialog system utterance, a matrix of size 20 x 20 that transforms the selected memory embeddings to generate the internal dialog state, and a matrix of size 20 x 1 and a bias term of size 1 for transforming the internal dialog state to a scalar weight. The size of the vocabulary |Vocab| for different primary task data settings of 5%, 10% and 15% are 4129, 4657, and 4981 respectively. The training is run for 4000 epochs (of the related task data points) and the primary task validation dataset is used for model selection for testing. These experiments were run using GeForce GTX 1080 Ti GPUs. It takes approximately 15 hours for each training run.
B.3 Visualization of learned related task data points' weights

Figure 6 (left) and Figure 7 (left) show the weights assigned to the different related task data points by our proposed method during different stages of training. Figure 6 (right) and Figure 7 (right) show the histograms of the number of data points in the different intervals of weights.

Figure 5 (Left) in Appendix B shows an example of Task 5 from the bAbI dialog tasks.

Figure 2: Personalized Restaurant Reservation. Histograms of the number of related task data points in the different intervals of weights. For visualization of weights at more intermediate stages of training see Figures 6 and 7 in Appendix B.

Figure 3: (Part 1/2) MNIST image classification. Left: The weights assigned to the different related task data points by our proposed method during different stages of training. The points in red (cross) and blue (dots) correspond to the weights of the related task data points that have incorrect and correct labels respectively. Right: Histograms of the number of data points in the different intervals of weights. The histograms in red (plain) and blue (striped) correspond to the related task data points with incorrect and correct labels respectively. Refer to Figure 4 for Part 2/2.

Figure 4: (Part 2/2) MNIST image classification. Left: The weights assigned to the different related task data points by our proposed method during different stages of training. The points in red (cross) and blue (dots) correspond to the weights of the related task data points that have incorrect and correct labels respectively. Right: Histograms of the number of data points in the different intervals of weights. The histograms in red (plain) and blue (striped) correspond to the related task data points with incorrect and correct labels respectively. Refer to Figure 3 for Part 1/2.

Figure 5: A user (in green) chats with a dialog system (in blue) to book a table at a restaurant. Left: (Related Task) An example dialog from bAbI dialog Task 5. Right: (Primary Task) An example dialog from Personalized-bAbI dialog Task 3.

Figure 6: (Part 1/2) Personalized Restaurant Reservation. Left: The weights assigned to the different related task data points by our proposed method during different stages of training. Right: Histograms of the number of data points in the different intervals of weights. Refer to Figure 7 for Part 2/2.

Figure 7: (Part 2/2) Personalized Restaurant Reservation. Left: The weights assigned to the different related task data points by our proposed method during different stages of training. Right: Histograms of the number of data points in the different intervals of weights. Refer to Figure 6 for Part 1/2.
Table 1: MNIST test results when using our constructed related task data in various ways, including an oracle method that only learns from correct related data. Mean and standard deviation are over 5 runs. Our approach effectively learns to use the related data, ignoring the examples with incorrect labels.

Weighting Method              Accuracy (%)
1 for All                     21.63 ± 3.81
Random-Fixed                  20.81 ± 4.46
Random-Changing               20.40 ± 4.42
Learned (Proposed Method)     87.86 ± 0.17
Oracle                        90.32 ± 0.33
Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In International Conference on Learning Representations (ICLR).
Zhangjie Cao, Lijia Ma, Mingsheng Long, and Jianmin Wang. 2018. Partial adversarial domain adaptation. In Proceedings of the European Conference on Computer Vision (ECCV).
Yinpei Dai, Hangyu Li, Chengguang Tang, Yongbin Li, Jian Sun, and Xiaodan Zhu. 2020. Learning low-resource end-to-end goal-oriented dialog for fast and reliable system deployment. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL).
Mihail Eric and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In Proceedings of the 18th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL).
Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the International Conference on Machine Learning (ICML).
Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in NLP. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL).
Chaitanya K. Joshi, Fei Mi, and Boi Faltings. 2017. Personalization in goal-oriented dialog. In Workshop on Conversational AI, Advances in Neural Information Processing Systems (NeurIPS).
Yann LeCun, Corinna Cortes, and CJ Burges. 2010. MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist.
Zhaojiang Lin, Andrea Madotto, Genta Indra Winata, and Pascale Fung. 2020. MinTL: Minimalist transfer learning for task-oriented dialogue systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Andrea Madotto, Zhaojiang Lin, Chien-Sheng Wu, and Pascale Fung. 2019. Personalizing dialogue agents via meta-learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL).
Fei Mi, Minlie Huang, Jiyong Zhang, and Boi Faltings. 2019. Meta-learning for low-resource natural language generation in task-oriented dialogue systems. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI).
Seungwhan Moon and Jaime Carbonell. 2017. Completely heterogeneous transfer learning with attention - what and what not to transfer. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI).
Kun Qian and Zhou Yu. 2019. Domain adaptive dialog generation via meta learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL).
Janarthanan Rajendran, Richard Lewis, Vivek Veeriah, Honglak Lee, and Satinder Singh. 2020. How should an agent practice? In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI).
S. Sankaranarayanan, Y. Balaji, C. D. Castillo, and R. Chellappa. 2018. Generate to adapt: Aligning domains using generative adversarial networks. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems (NeurIPS).
Z. Wang, Z. Dai, B. Póczos, and J. Carbonell. 2019. Characterizing and avoiding negative transfer. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Tsung-Hsien Wen, Milica Gašić, Nikola Mrkšić, Lina M. Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve Young. 2016. Multi-domain neural network language generation for spoken dialogue systems. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).
Tsung-Hsien Wen, David Vandyke, Nikola Mrkšić, Milica Gasic, Lina M. Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL).
Olga Wichrowska, Niru Maheswaranathan, Matthew W. Hoffman, Sergio Gómez Colmenarejo, Misha Denil, Nando de Freitas, and Jascha Sohl-Dickstein. 2017. Learned optimizers that scale and generalize. In Proceedings of the International Conference on Machine Learning (ICML).
Yuhuai Wu, Mengye Ren, Renjie Liao, and Roger Grosse. 2018. Understanding short-horizon bias in stochastic meta-optimization. In International Conference on Learning Representations (ICLR).
Yumo Xu, Chenguang Zhu, Baolin Peng, and Michael Zeng. 2020. Meta dialogue policy learning. arXiv preprint arXiv:2006.02588.
Zhongwen Xu, Hado P. van Hasselt, and David Silver. 2018. Meta-gradient reinforcement learning. In Advances in Neural Information Processing Systems (NeurIPS).
Han Zhao, Shanghang Zhang, Guanhang Wu, José M. F. Moura, Joao P. Costeira, and Geoffrey J. Gordon. 2018. Adversarial multiple source domain adaptation. In Advances in Neural Information Processing Systems (NeurIPS).
K-vec: A New Approach for Aligning Parallel Texts
Pascale Fung fung@cs.columbia.edu
Computer Science Department, Columbia University, New York, NY 10027, USA
Kenneth Ward Church
AT&T Bell Laboratories, 600 Mountain Ave., Murray Hill, NJ 07974, USA
Various methods have been proposed for aligning texts in two or more languages such as the Canadian Parliamentary Debates (Hansards). Some of these methods generate a bilingual lexicon as a by-product. We present an alternative alignment strategy which we call K-vec, that starts by estimating the lexicon. For example, it discovers that the English word fisheries is similar to the French pêches by noting that the distribution of fisheries in the English text is similar to the distribution of pêches in the French. K-vec does not depend on sentence boundaries.
Motivation
There have been quite a number of recent papers on parallel text: Brown et al. (1990, 1991, 1993), Chen (1993), Church (1993), Dagan et al. (1993), Gale and Church (1991, 1993), Isabelle (1992), Kay and Röscheisen (1993), Klavans and Tzoukermann (1990), Kupiec (1993), Matsumoto (1991), Ogden and Gonzales (1993), Shemtov (1993), Simard et al. (1992), Warwick-Armstrong and Russell (1990), and Wu (to appear). Most of this work has been focused on European language pairs, especially English-French. It remains an open question how well these methods might generalize to other language pairs, especially pairs such as English-Japanese and English-Chinese.
In previous work, we have reported some preliminary success in aligning the English and Japanese versions of the AWK manual (Aho, Kernighan, Weinberger (1980)), using char_align (Church, 1993), a method that looks for character sequences that are the same in both the source and target. The char_align method was designed for European language pairs, where cognates often share character sequences, e.g., government and gouvernement. In general, this approach doesn't work between languages such as English and Japanese which are written in different alphabets. The AWK manual happens to contain a large number of examples and technical words that are the same in the English source and the Japanese target.
It remains an open question how we might be able to align a broader class of texts, especially those that are written in different character sets and share relatively few character sequences. The K-vec method attempts to address this question.
The K-vec Algorithm
K-vec starts by estimating the lexicon. Consider the example: fisheries → pêches. The K-vec algorithm will discover this fact by noting that the distribution of fisheries in the English text is similar to the distribution of pêches in the French.
The concordances for fisheries and pêches are shown in Tables 1 and 2 (at the end of this paper). 1

1. These tables were computed from a small fragment of the Canadian Hansards that has been used in a number of other studies: Church (1993) and Simard et al (1992). The English text has 165,160 words and the French text has 185,615 words.
There are 19 instances of fisheries and 21 instances of pêches. The numbers along the left hand edge show where the concordances were found in the texts. We want to know whether the distribution of numbers in Table 1 is similar to those in Table 2, and if so, we will suspect that fisheries and pêches are translations of one another. A quick look at the two tables suggests that the two distributions are probably very similar, though not quite identical. 2
We use a simple representation of the distribution of fisheries and pêches. The English text and the French text were each split into K pieces. Then we determine whether or not the word in question appears in each of the K pieces. Thus, we denote the distribution of fisheries in the English text with a K-dimensional binary vector, V_f, and similarly, we denote the distribution of pêches in the French text with a K-dimensional binary vector, V_p. The i-th bit of V_f indicates whether or not fisheries occurs in the i-th piece of the English text, and similarly, the i-th bit of V_p indicates whether or not pêches occurs in the i-th piece of the French text.

If we take K to be 10, the first three instances of fisheries in Table 1 fall into piece 2, and the remaining 16 fall into piece 8. Similarly, the first 4 instances of pêches in Table 2 fall into piece 2, and the remaining 17 fall into piece 8. Thus,

V_f = V_p = <0, 0, 1, 0, 0, 0, 0, 0, 1, 0>

Now, we want to know if V_f is similar to V_p, and if we find that it is, then we will suspect that fisheries → pêches. In this example, of course, the vectors are identical, so practically any reasonable similarity statistic ought to produce the desired result. As can be seen in the concordances in Table 3, for K=10, the vector for lections is <1, 1, 0, 1, 1, 0, 1, 0, 0, 0>. By almost any measure of similarity one could imagine, this vector will be found to be quite different from the one for fisheries, and therefore, we will correctly discover that fisheries is not the translation of lections.
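A small sketch of this representation is shown below, assuming a plain list-of-tokens input; the function name and the equal-width split into K pieces are illustrative choices, not a specification taken from the paper.

```python
import numpy as np

def kvec(tokens, word, K):
    """K-dimensional binary vector: which of the K pieces of the text contain `word`."""
    v = np.zeros(K, dtype=np.int8)
    piece_len = max(1, int(np.ceil(len(tokens) / K)))
    for pos, tok in enumerate(tokens):
        if tok == word:
            v[min(pos // piece_len, K - 1)] = 1
    return v

# e.g., with K = 10, fisheries and pêches fall into the same two pieces, so
# kvec(english_tokens, "fisheries", 10) and kvec(french_tokens, "pêches", 10)
# would both come out as array([0, 0, 1, 0, 0, 0, 0, 0, 1, 0], dtype=int8)
```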
fisheries is not the translation of lections
To make this argument a little more precise, it might help to compare the contingency matrices in Tables 5 and 6. The contingency matrices show: (a) the number of pieces where both the English and French word were found, (b) the number of pieces where just the English word was found, (c) the number of pieces where just the French word was found, and (d) the number of pieces where neither word was found. In general, if the English and French words are good translations of one another, as in Table 5, then a should be large, and b and c should be small. In contrast, if the two words are not good translations of one another, as in Table 6, then a should be small, and b and c should be large.
Mutual Information
Intuitively, these statements seem to be true, but we need to make them more precise. One could have chosen quite a number of similarity metrics for this purpose. We use mutual information:
$$\log_2 \frac{prob(V_f, V_p)}{prob(V_f)\, prob(V_p)}$$
That is, we want to compare the probability of seeing fisheries and pêches in the same piece to chance. The probability of seeing the two words in the same piece is simply:
$$prob(V_f, V_p) = \frac{a}{a+b+c+d}$$
The marginal probabilities are:
$$prob(V_f) = \frac{a+b}{a+b+c+d} \qquad\qquad prob(V_p) = \frac{a+c}{a+b+c+d}$$

For fisheries → pêches, prob(V_f, V_p) = prob(V_f) = prob(V_p) = 0.2.
Thus, the mutual information is log2 5, or 2.32 bits, meaning that the joint probability is 5 times more likely than chance. In contrast, for fisheries → lections, prob(V_f, V_p) = 0, prob(V_f) = 0.5 and prob(V_p) = 0.4. Thus, the mutual information is log2 0, meaning that the joint is infinitely less likely than chance. We conclude that it is quite likely that fisheries and pêches are translations of one another, much more so than fisheries and lections.
Significance
Unfortunately, mutual information is often unreliable when the counts are small. For example, there are lots of infrequent words. If we pick a pair of these words at random, there is a very large chance that they would receive a large mutual information value by chance. For example, let e be an English word that appeared just once and let f be a French word that appeared just once. Then, there is a non-trivial chance (1/K) that e and f will appear in the same piece, as shown in Table 7. If this should happen, the mutual information estimate would be very large, i.e., log2 K, and probably misleading. In order to avoid this problem, we use a t-score to filter out insignificant mutual information values.
$$t = \frac{prob(V_f, V_p) - prob(V_f)\, prob(V_p)}{\sqrt{\frac{1}{K}\, prob(V_f, V_p)}}$$

Using the numbers in Table 7, t = 1, which is not significant. (A t of 1.65 or more would be significant at the p > 0.95 confidence level.)
Similarly, if e and f appeared in just two pieces each, then there is approximately a 2/K² chance that they would both appear in the same two pieces, and then the mutual information score would be quite high, log2 (K/2), but we probably wouldn't believe it because the t-score would be only about √2. By this definition of significance, we need to see the two words in at least 3 different pieces before the result would be considered significant.
This means, unfortunately, that we would reject fisheries → pêches because we found them in only two pieces. The problem, of course, is that we don't have enough pieces. When K=10, there simply isn't enough resolution to see what's going on. At K=100, we obtain the contingency matrix shown in Table 8, and the t-score is significant (t=2.1). Ideally, we would like to apply the K-vec algorithm to all pairs of English and French words, but unfortunately, there are too many such pairs to consider. We therefore limited the search to pairs of words in the frequency range 3-10. This heuristic makes the search practical, and catches many interesting pairs. 3
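Putting these pieces together, a hedged sketch of the scoring step might look as follows: form the 2x2 contingency table for a candidate word pair from its two K-vecs, compute the mutual information and the t-score, and keep the pair only if the t-score clears the 1.65 threshold. The function names and the NumPy encoding are assumptions; the formulas are the ones given above.

```python
import numpy as np

def contingency(vf, vp):
    vf, vp = vf.astype(bool), vp.astype(bool)
    a = int(np.sum(vf & vp))      # pieces containing both words
    b = int(np.sum(vf & ~vp))     # only the English word
    c = int(np.sum(~vf & vp))     # only the French word
    d = int(np.sum(~vf & ~vp))    # neither word
    return a, b, c, d

def mi_and_t(vf, vp):
    a, b, c, d = contingency(vf, vp)
    K = a + b + c + d
    p_joint = a / K
    p_f, p_p = (a + b) / K, (a + c) / K
    if p_joint == 0:
        return float("-inf"), 0.0
    mi = np.log2(p_joint / (p_f * p_p))
    t = (p_joint - p_f * p_p) / np.sqrt(p_joint / K)
    return mi, t

# e.g., keep a candidate pair only when it is significant:
# mi, t = mi_and_t(kvec(en_tokens, "fisheries", 100), kvec(fr_tokens, "pêches", 100))
# if t > 1.65: record ("fisheries", "pêches", mi)
```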
Results
This algorithm was applied to a fragment of the Canadian Hansards that has been used in a number of other studies: Church (1993) and Simard et al (1992). The 30 significant pairs with the largest mutual information values are shown in Table 9.
As can be seen, the results provide a quick-and-dirty estimate of a bilingual lexicon. When the pair is not a direct translation, it is often the translation of a collocate, as illustrated by acheteur → Limited and Santé → Welfare. (Note that some words in Table 9 are spelled the same way in English and French; this information is not used by the K-vec algorithm).
Using a scatter plot technique called dotplot, we can visualize the alignment, as illustrated in Figure 1. The source text (Nx bytes) is concatenated to the target text (Ny bytes) to form a single input sequence of Nx+Ny bytes. A dot is placed in position i,j whenever the input token at position i is the same as the input token at position j.
The equality constraint is relaxed in Figure 2. A dot is placed in position i,j whenever the input token at position i is highly associated with the input token at position j, as determined by the mutual information score of their respective K-vecs. In addition, it shows a detailed, magnified and rotated view of the diagonal line. The alignment program tracks this line with as much precision as possible.
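A minimal sketch of the basic, equality-based dotplot is given below; the relaxed version described above would replace the equality test with a check that the two tokens' K-vecs have a high mutual information score. The function name and the boolean-matrix representation are illustrative assumptions.

```python
import numpy as np

def dotplot(tokens):
    """Boolean matrix with a dot at (i, j) whenever tokens i and j are identical."""
    n = len(tokens)
    plot = np.zeros((n, n), dtype=bool)
    positions = {}
    for i, tok in enumerate(tokens):
        positions.setdefault(tok, []).append(i)
    for idxs in positions.values():
        for i in idxs:
            for j in idxs:
                plot[i, j] = True
    return plot

# combined = english_tokens + french_tokens    # Nx + Ny tokens in one sequence
# plot = dotplot(combined)                     # the alignment appears as a faint diagonal
#                                              # in the English-vs-French quadrant
```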
3. The low frequency words (frequency less than 3) would have been rejected anyway as insignificant.
Conclusions
The K-vec algorithm generates a quick-and-dirty estimate of a bilingual lexicon. This estimate could be used as a starting point for a more detailed alignment algorithm such as word_align. In this way, we might be able to apply word_align to a broader class of language combinations including possibly English-Japanese and English-Chinese.
Currently, word_align depends on charalign (Church, 1993) to generate a starting point, which limits its applicability to European languages since char_align was designed for language pairs that share a common alphabet.
References
Aho, Kernighan, Weinberger (1980) "The AWK Programming
Language," Addison-Wesley, Reading, Massachusetts, USA. private sector is quite weak. 1,ct us turn now to fisheries, an industry which as most important 1o The fishermen would like to see the l)epartment of Fisheries and Oceans put more effort towards the p s in particular. The budget of the Department of Fisheries and Oceans has been reduced to such ate ' habitation ' ' trom which to base his trade in fisheries and filrs. He brought wilh him the first ase .just outside of my riding. The Department of Fisheries and Oceans provides employmeut for many and all indications are that the riclmess ot' its fisheries resource will enable it to maintain its taxpayer. The role of file federal Department of Fisheries and Oceans is central to the concerns of is the new Chainnan of the Standing Committee on Fisheries and Oceans. I am sure he will bring a w ortunity to discuss it with me as a member of the Fisheries Committee. The Hon. Member asked what he proposal has been submitted to the Minister of Fisheries and Oceans ( Mr. Siddon ) which I hope ch as well as on his selection as Chairman of the Fisheries Committee. I have workexl with Mr. Come his intense interest and expertise in the area of fisheries. It seems most appropriate, given that r from Eastern Canada and the new Chairman of the Fisheries and Oceans Committee. We know that the d Oceans Committee. We know that the Minister of Fisheries and Oceans ( Mr. Siddon ), should we s ows the importance of research and development to fisheries and oceans. Is he now ready to tell the research and development component in the area of fisheries and oceans at Bedford, in order that th
Klavans, J., and Tzoukermann, E. (1990) "The BICORD System," COLING-90, pp. 174-179.
Kupiec, J. (1993) "An Algorithm for Finding Noun Phrase Correspondences in Bilingual Corpora," ACL-93, pp. 17-22.
Matsumoto, Y., Ishimoto, H., Utsuro, T. and Nagao, M. (1993) "Structural Matching of Parallel Texts," ACL-93, pp. 23-30.
Table 4: A contingency matrix

              French
English     a       b
            c       d

Table 5: fisheries vs. pêches

              pêches
fisheries   2       0
            0       8

Table 6: fisheries vs. élections

              élections
fisheries   0       2
            4       4

Table 7: f vs. e

              f
e           1       0
            0       9

Table 8: K=100

              pêches
fisheries   5       0
            1       94
How do we choose K? As we have seen, if we choose too small a K, then the mutual information values will be unreliable. However, we can only increase K up to a point. If we set K to a ridiculously large value, say the size of the English text, then an English word and its translations are likely to fall in slightly different pieces due to random fluctuations and we would miss the signal. For this work, we set K to the square root of the size of the corpus. K should be thought of as a scale parameter. If we use too low a resolution, then everything turns into a blur and it is hard to see anything. But if we use too high a resolution, then we can miss the signal if a word and its translation fall in slightly different pieces.
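The bookkeeping behind Tables 5-8 is small enough to sketch directly; a minimal Python version, assuming the two texts have already been split into K aligned pieces (the helper names are illustrative, and the toy vectors below mirror the Table 8 counts rather than the actual Hansard data):

```python
import math

def kvec(pieces, word):
    """Binary K-vector: 1 for each of the K pieces that contains the word."""
    return [1 if word in piece else 0 for piece in pieces]

def mutual_information(vf, ve):
    """One standard estimate of the association of two K-vectors from their
    2x2 contingency counts: log2 of P(both) / (P(first) * P(second))."""
    K = len(vf)
    a = sum(1 for f, e in zip(vf, ve) if f and e)   # pieces containing both words
    pf, pe = sum(vf) / K, sum(ve) / K               # marginal probabilities
    if a == 0 or pf == 0 or pe == 0:
        return float("-inf")
    return math.log2((a / K) / (pf * pe))

# Toy vectors reproducing the Table 8 counts (K = 100): the two words share
# 5 pieces, and "fisheries" occurs in one further piece on its own.
K = 100
v_peches = [1] * 5 + [0] * 95
v_fisheries = [1] * 6 + [0] * 94
print(round(mutual_information(v_peches, v_fisheries), 2))   # about 4.06

# The text sets K to the square root of the corpus size; for a hypothetical
# corpus of 160,000 tokens that would give K = 400.
K_choice = round(math.sqrt(160_000))
```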
Table 9: K-vec results

MI     French            English
3.2    Beauce            Beauce
3.2    Comeau            Comeau
3.2    1981              1981
3.0    Richmond          Richmond
3.0    Rail              VIA
3.0    pêches            Fisheries
2.8    Deans             Deans
2.8    Prud              Prud
2.8    Prud              homme
2.7    acheteur          Limited
2.7    Communications    Communications
2.7    MacDonald         MacDonald
2.6    Mazankowski       Mazankowski
2.5    croisière         nuclear
2.5    Santé             Welfare
2.5    39                39
2.5    Johnston          Johnston
2.5    essais            nuclear
2.5    Université        University
2.5    bois              lumber
2.5    Angus             Angus
2.4    Angus             VIA
2.4    Saskatoon         University
2.4    agriculteurs      farmers
2.4    inflation         inflation
2.4    James             James
2.4    Vanier            Vanier
2.4    Santé             Health
2.3    royale            languages
2.3    grief             grievance
Table 1: Concordances for fisheries

Shemtov, H. (1993) "Text Alignment in a Tool for Translating Revised Documents," EACL, pp. 449-453.
Simard, M., Foster, G., and Isabelle, P. (1992) "Using Cognates to Align Sentences in Bilingual Corpora," Fourth International Conference on Theoretical and Methodological Issues in Machine Translation (TMI-92), Montreal, Canada.
Warwick-Armstrong, S. and G. Russell (1990) "Bilingual Concordancing and Bilingual Lexicography," Euralex.
Wu, D. (to appear) "Aligning Parallel English-Chinese Text Statistically with Lexical Criteria," ACL-94.
A Statistical Approach to Machine Translation. P Brown, J Cocke, S Della Pietra, V Della Pietra, F Jelinek, J Lafferty, R Mercer, P Roossin, Computational Linguistics. 16Brown, P., J. Cocke, S. Della Pietra, V. Della Pietra, F. Jelinek, J. Lafferty, R. Mercer, and P. Roossin, (1990) "A Statistical Approach to Machine Translation," Computational Linguistics, vol. 16, pp. 79-85.
Aligning Sentences in Parallel Corpora. P Brown, J Lai, R Mercer, ACL- 91Brown, P., Lai, J., and Mercer, R. (1991) "Aligning Sentences in Parallel Corpora," ACL- 91.
The mathematics of machine translation: parameter estimation. P Brown, S Della Pietra, V Della Pietra, R Mercer, Computational Linguistics. Brown, P., Della Pietra, S., Della Pietra, V., and Mercer, R. (1993), "The mathematics of machine translation: parameter estimation," Computational Linguistics, pp. 263-312.
Aligning Sentences in Bilingual Corpora Using Lexical information. S Chen, ACL-93. Chen, S. (1993) "Aligning Sentences in Bilingual Corpora Using Lexical information," ACL-93, pp. 9-16.
Char_align: A Program for Aligning Parallel Texts at the Character Level. K Church, ACL-93. Church, K. (1993) "Char_align: A Program for Aligning Parallel Texts at the Character Level," ACL-93, pp. 1-8.
Aligning Parallel Texts: Do Methods Developed for English-French Generalize to Asian Languages. K Church, I Dagan, W Gale, P Fung, J Helfman, B Satish, Pacific Asia Conference on Formal and Computational Linguistics. Church, K., Dagan, I., Gale, W., Fung, P., Helfman, J., Satish, B. (1993) "Aligning Parallel Texts: Do Methods Developed for English-French Generalize to Asian Languages?" Pacific Asia Conference on Formal and Computational Linguistics.
Dotplot: a Program for Exploring Self-Similarity in Millions of Lines of Text and Code. K Church, J Helfman, The Journal of Computational and Graphical Statistics. 22Church, K. and Helfman, J. (1993) "Dotplot: a Program for Exploring Self-Similarity in Millions of Lines of Text and Code," The Journal of Computational and Graphical Statistics, 2:2, pp. 153-174.
Robust Word Alignment for Machine Aided Translation. I Dagan, K Church, W Gale, Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, available from the ACL. the Workshop on Very Large Corpora: Academic and Industrial Perspectives, available from the ACL8Dagan, I., Church, K., and Gale, W. (1993) "Robust Word Alignment for Machine Aided Translation," Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, available from the ACL, pp. I-8.
Identifying Word Correspondences in Parallel Text. W Gale, K Church, Fourth Darpa Workshop on Speech and Natural Language. Asilomar. Gale, W., and Church, K. (1991) "Identifying Word Correspondences in Parallel Text," Fourth Darpa Workshop on Speech and Natural Language, Asilomar.
A Program for Aligning Sentences in Bilingual Corpora. W Gale, K Church, Computational Linguistics. also presented at ACL-91Gale, W., and Church, K. (1993) "A Program for Aligning Sentences in Bilingual Corpora," Computational Linguistics, also presented at ACL- 91.
Bi-Textual Aids for Translators. P Isabelle, Proceedings of the Eigth Annual Conference of the UW Centre for the New OED and Text Research, available from the UW Centre for the New OED and Text Research. the Eigth Annual Conference of the UW Centre for the New OED and Text Research, available from the UW Centre for the New OED and Text ResearchWaterloo, Ontario, CanadaUniversity of WaterlooIsabelle, P. (1992) "Bi-Textual Aids for Translators," in Proceedings of the Eigth Annual Conference of the UW Centre for the New OED and Text Research, available from the UW Centre for the New OED and Text Research, University of Waterloo, Waterloo, Ontario, Canada.
The Proper Place of Men and. M Kay, Kay, M. (1980) "The Proper Place of Men and
| [] |
[
"A Case Study on the Independence of Speech Emotion Recognition in Bangla and English Languages using Language-Independent Prosodic Features",
"A Case Study on the Independence of Speech Emotion Recognition in Bangla and English Languages using Language-Independent Prosodic Features"
] | [
"Fardin Saad ",
"Hasan Mahmud ",
"MdMohammad Ridwan Alamin Kabir ",
"Paresha Shaheen ",
"MdKamrul Farastu ",
"Hasan "
] | [] | [] | A language agnostic approach to recognizing emotions from speech remains an incomplete and challenging task. In this paper, we performed a step-by-step comparative analysis of Speech Emotion Recognition (SER) using Bangla and English languages to assess whether distinguishing emotions from speech is independent of language. Six emotions were categorized for this study, such as -happy, angry, neutral, sad, disgust, and fear. We employed three Emotional Speech Sets (ESS), of which the first two were developed by native Bengali speakers in Bangla and English languages separately. The third was a subset of the Toronto Emotional Speech Set (TESS), which was developed by native English speakers from Canada. We carefully selected language-independent prosodic features, adopted a Support Vector Machine (SVM) model, and conducted three experiments to carry out our proposition. In the first experiment, we measured the performance of the three speech sets individually, followed by the second experiment, where different ESS pairs were integrated to analyze the impact on SER. Finally, we measured the recognition rate by training and testing the model with different speech sets in the third experiment. Although this study reveals that SER in Bangla and English languages is mostly language-independent, some disparities were observed while recognizing emotional states like disgust and fear in these two languages. Moreover, our investigations revealed that non-native speakers convey emotions through speech, much like expressing themselves in their native tongue. | null | [
"https://arxiv.org/pdf/2111.10776v3.pdf"
] | 248,811,282 | 2111.10776 | cd6b34233e9fc9ec8ed01a202cf6ac5667fa64ba |
A Case Study on the Independence of Speech Emotion Recognition in Bangla and English Languages using Language-Independent Prosodic Features
Fardin Saad
Hasan Mahmud
MdMohammad Ridwan Alamin Kabir
Paresha Shaheen
MdKamrul Farastu
Hasan
A Case Study on the Independence of Speech Emotion Recognition in Bangla and English Languages using Language-Independent Prosodic Features
1Emotional Speech SetLanguage Independent FeaturesNative/Non-native SpeakersProsodic FeaturesSpeech Emotion RecognitionSupport Vector Machines
A language agnostic approach to recognizing emotions from speech remains an incomplete and challenging task. In this paper, we performed a step-by-step comparative analysis of Speech Emotion Recognition (SER) using Bangla and English languages to assess whether distinguishing emotions from speech is independent of language. Six emotions were categorized for this study, such as -happy, angry, neutral, sad, disgust, and fear. We employed three Emotional Speech Sets (ESS), of which the first two were developed by native Bengali speakers in Bangla and English languages separately. The third was a subset of the Toronto Emotional Speech Set (TESS), which was developed by native English speakers from Canada. We carefully selected language-independent prosodic features, adopted a Support Vector Machine (SVM) model, and conducted three experiments to carry out our proposition. In the first experiment, we measured the performance of the three speech sets individually, followed by the second experiment, where different ESS pairs were integrated to analyze the impact on SER. Finally, we measured the recognition rate by training and testing the model with different speech sets in the third experiment. Although this study reveals that SER in Bangla and English languages is mostly language-independent, some disparities were observed while recognizing emotional states like disgust and fear in these two languages. Moreover, our investigations revealed that non-native speakers convey emotions through speech, much like expressing themselves in their native tongue.
Introduction
Humans have been intuitively using speech as the most natural and preferred means of communication. However, speech signals are mostly non-stationary processes, containing multiple components that vary in time and frequency. Since these signals occur naturally, they are erratic in nature [1]. Furthermore, these signals carry a lot of information and, at the same time, convey an individual's emotional status. An emotional speech expresses the patterns of rhythm and intonation, often referred to as prosody in a language [2], [3]. The prosodic cues in speech signals are also known as para-linguistic features since they are linked with speech segments properties such as syllables, words, and sentences [3]. Zeng et al. [4] demonstrated prosodic features to contain the most critical and exclusive emotional information. Consequently, numerous research in Speech Emotion Recognition (SER) have used prosodic features to identify emotional states [3].
SER directs the identification of emotional states of a person from their speech [5]. It facilitates the measurement of acoustic cues from speech as a standard for emotion recognition [6], which led the path for researching the most comprehensive features that contribute greatly to this cause. In line with this, Pell et al. [7] emphasized the significance of vocal features for emotion recognition from speech. Several studies were conducted to investigate the salient vocal features. Amongst them, prosodic features were found to give better if not the same emotion recognition accuracy than human judges [3], [8], [9]. Apart from this, SER systems are useful in several applications such as e-learning, where the emotional attributes of the pupils can be identified to regulate the teaching style of the tutors [10], [11]. Rapid commercialization of speech emotion recognition can be seen in employee mood identification [12], interactive games [13], call centers [14] to decipher customer queries and complaints, psychiatric aids, among others.
Feature extraction and selection from speech are of great importance in successfully identifying emotional states. However, selecting a large number of features breeds various complexities, which eventually results in classification error [3], [15]. Furthermore, languages can differ in many ways with regard to their grammatical and morphological properties [15], making SER ambiguous. The variations in dialects of a language can occur due to the usage of words, accents, or how people arrange their speech. These variations can be credited to certain social factors, culture, or geographical distance [16]. Moreover, factors such as gender, age, background, vocal features, etc., of a speaker highly influence the expression of different emotions [2], [17], [18]. Due to these multifarious factors, emotion recognition from speech remains an arduous task [19]. Hence, there may be particularly pivotal factors that influence speech emotion recognition [20]. So, it becomes essential for us to work with vocal features which are independent of the nuances of language.
From previous research, we found that prosodic features [3], [4], [21] such as fundamental frequency or pitch-related features containing pitch mean, pitch median, pitch standard deviation, and energy-related features like intensity carry a lot of emotional information [22], [23]. In contrast, the spectral features such as MFCC related features depend on phonemes, and thus the style of an utterance [23]. Therefore, it can be inferred that spectral features are language-dependent [23] features, whereas prosodic features are language-independent [21], [22].
Schuller et al. [24] enforced a wrapper-based search along with the dynamic base contour to develop a feature set from intensity, pitch, formants, and MFCCs. The best emotion recognition rate was accomplished using MFCC features with SVM. Rajoo et al. [20] used MFCCs, formants, energy, and fundamental frequency or pitch as acoustic cues for speech emotion recognition. Borchert et al. [3], [25] used prosodic vocal features and quality features such as formants, spectral energy distribution, jitter, and shimmer to classify emotions. For deducing the salient set of vocal features, Kostoulas et al. [26] and Anagnostopoulos et al. [27] employed a subset of correlated features. Despite improving the emotion classification rate, these features are not language-independent since most used spectral features on a single emotional corpus.
Thus, to construct a Speech Emotion Recognition system, it is vital to select prosodic features such as fundamental frequency or pitch and energy-related features, and to select a classifier independent of these features. Noroozi et al. [15] presented a system for achieving a set of language-independent features. They observed that pitch and energy-related features were language-independent when they adopted a Support Vector Machine (SVM) classifier. While exploring language-independent features via feature selection, Shaukat et al. [22] discerned that fundamental frequency or pitch, formants, intensity or energy, harmonicity, loudness, duration, etc., were language-independent features. Subsequently, they achieved a higher performance rate when using SVM for their classification method. van Maastricht et al. [28] applied 3 prosodic cues to speech by Spanish learners of Dutch to judge whether they improve native Dutch speakers' perception of accentedness and comprehensibility. Luengo et al. [8] illustrated that six prosodic features incorporated with an SVM classifier could achieve an emotion recognition rate almost equal to GMM models with 512 spectral MFCC features, emphasizing the significance of prosodic features in distinguishing emotions. Rao et al. [29] extracted and employed local and global prosodic features with SVM at different speech segments to recognize emotions from the Telugu emotional speech corpus. Bhatti et al. [21] used 17 prosodic features, which included pitch mean, pitch median, pitch standard deviation, etc., as language-independent features for identifying emotional states from speech.
In this study, we aim to juxtapose and compare Bangla and English languages to measure the language independency of SER using language-independent prosodic features and native/non-native speakers. We used 6 emotional states such as -happy, angry, neutral, sad, disgust, and fear [1], [2], [20], [23], in English and Bangla languages across three Emotional Speech Sets (ESS), such as -1) the English TESS (E-TESS), a subset of the Toronto ESS (TESS) [30], for the native English language, 2) the Bangla ESS (BESS) for the native Bangla language, developed by native Bangla speakers in accordance with the development process of TESS [31], and 3) similar to BESS, the English ESS (EESS) for the nonnative English language, developed by the same native Bangla speakers, or in other words non-native English speakers. We constructed a feature set through careful selection of language-independent features such as pitch mean, pitch median, pitch standard deviation and intensity [1], [15], [21]- [23] and analyzed its influence on SER. These 4 features are the most commonly used and salient prosodic features in almost all the SER systems that used prosodic features with SVM classifier [1], [3], [8], [9], [15], [21]- [23]. Furthermore, we avoided choosing too many features to preclude misclassification, correlated features, and overlap. Since our objective is not to increase the performance of our SER system instead to analyze the nuances in Bangla and English languages, we adopted SVM [32] for classifying the emotions mentioned earlier. However, we also employed traditional classifiers for SER, such as -Hidden Markov Model (HMM), Gaussian Mixture Model (GMM), Artificial Neural Networks (ANN), and k-Nearest Neighbor (kNN) [3] only to compare the emotion recognition rate of these classifiers with SVM.
In Section 2, we thoroughly discuss our proposed approach, followed by feature extraction and selection in Section 3. In the subsequent sections, we elaborate on our classification approach (Section 4), investigate the language independence of Speech Emotion Recognition for Bangla and English languages (Section 5), and discuss its scope and future possibilities (Section 6).
Emotional Speech Set Acquisition and Development
Due to the lack of any significant emotional speech corpora for the Bangla language, we have developed a Bangla ESS (BESS), in accordance with the Toronto ESS (TESS) [30], as shown in Fig. 1, with assistance from 11 native Bangla speakers (64% Male, 36% Female) who are undergraduate students of the affiliated institution. Similarly, the same native speakers, or in other words, the same non-native English speakers, participated in the development of the English ESS (EESS). The 11 native Bangla speakers were carefully selected for this study based on their fluency and proficiency in both Bangla and English languages. We identified 6 of the most common emotional states used in SER, such as happy, angry, neutral, sad, disgust, and fear, for our speech corpora [2], [15], [20], [23], [31]. The 11 speakers were asked to elicit emotional speech, simulating these emotions. Approximately 30-40 hours were spent recording each emotional state, among which 5-6 hours were allotted for rehearsal and calibration of the speaker's portrayal of each emotion. At least one male and one female speaker were involved in recording the audio samples of each emotion.
Each audio sample, containing the phrase, "Say the word…", followed by a monosyllabic noun [30], [31], was recorded within 1-2 seconds. For instance, it can be observed from Fig. 1 that a sentence, used in the dataset was, "Say the word Read." The monosyllabic nouns were selected based on the existing speech intelligibility such as the Northwestern University Auditory Test-Number 6 (NU-6) [33]. While developing the Bangla dataset, the same sentence, "Say the word Read" in English, was translated to Bangla, which read as, "Poro shobdo ti bolo", as shown in Fig. 1. It can be discerned from Fig. 2 that the verbatim translation of Bangla words such as -"Poro", "shobdo ti", and "bolo" to its corresponding English words are "Read", "the word", and "Say", respectively. For comprehensive understanding, Bangla words used in the Bangla speech sample were mapped to their corresponding English words, and in this manner, BESS and EESS were developed. Although the Auditory Test-Number 6 (NU-6) housed 200 words [31], we carefully selected 50 words whose Bangla translation would adhere to the lexical and semantic properties of speech intelligibility. Therefore, for the 2 datasets (BESS and EESS) and for the 6 emotions (happy, angry, neutral, sad, disgust, and fear) with 50 audio speech samples each, we have a total of 2 datasets × 6 emotions × 50 speech samples = 600 audio samples, 300 samples per dataset (BESS and EESS).
On the other hand, TESS was developed by 2 Canadian actors, of whom one was younger (26 years of age), and the other was older (64 years of age), containing a total of 2800 audio samples (200 NU-6 words x 7 emotions x 2 actors) [31]. However, as established earlier, to ensure preservation of the speech intelligibility for Bangla and English languages, audio samples containing the same 50 words as BESS or EESS, were retrieved from TESS for the 6 emotions to create the dataset, English TESS (E-TESS) as a subset of TESS, containing 300 samples. The workflow of generating E-TESS is depicted in Fig. 3. In this manner, 3 datasets were assembled for our experiments, which will be addressed as Bangla Emotional Speech Set (BESS), English Emotional Speech Set (EESS), and English TESS (E-TESS), as shown in Fig. 1 and Fig. 3.
Feature Extraction and Selection
Features dictate the performance of a Speech Emotion Recognition system, and identifying emotional states requires various features [20]. For the expression of emotions, speech features contribute in particular ways [34]. However, the most salient features can improve the performance of a model exponentially. Misclassification and overlaps of emotional states by an SER system are mostly due to inadequate and ineffective choices of features [35]. In our work, the vocal features for both BESS and EESS were extracted using signal processing techniques, implemented in the open-source software package, "Praat", developed by the University of Amsterdam [36]. This process is briefly elaborated on in the following section. For this study, we used language-independent prosodic features, such as pitch median, pitch mean, pitch standard deviation, and intensity.
• Pitch or fundamental frequency [37], [38] is frequently deployed in Speech Emotion Recognition systems [3], [15], [20], [39], [40]. It represents the vibration frequency of the vocal cords during sound production.
• Pitch-related features such as pitch mean and pitch median are widely used in SER systems [1], [21], [23] and are regarded as language-independent features [15], [21]-[23].
• Pitch standard deviation is another prosodic feature which carries vital emotional information [21], [23] and is used as one of the acoustic features [21], [41].
• Energy or intensity refers to the loudness of sound. This feature is also used quite frequently in SER systems [16], [42]. It is deemed to be a language-independent prosodic feature [15], [22].
Speech Normalization and Feature Analysis using Praat Software
The audio samples were scrupulously recorded with a high-quality microphone at a sampling rate of 44.1 kHz, i.e., 44100 samples per second per channel [36], for maximum audio quality. A quiet room was selected for the recording sessions to reduce background noise. No filters were applied to the audio recordings to avoid distortion and to ensure preservation of the original intensity of the speech signals.
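Although the features in this study were analyzed with the Praat software itself, the same four values can be computed programmatically; the following sketch with the parselmouth Python wrapper for Praat is only an assumed, roughly equivalent pipeline (the wrapper, the file path, and the default pitch settings are not part of the paper's procedure):

```python
import numpy as np
import parselmouth  # Python interface to Praat; an assumption, not used in the paper

def prosodic_features(wav_path):
    """Return pitch mean, pitch median, pitch standard deviation (Hz)
    and mean intensity (dB) for one utterance."""
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch()                      # default Praat pitch settings
    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]                             # drop unvoiced frames (F0 == 0)
    intensity = snd.to_intensity()
    return {
        "pitch_mean": float(np.mean(f0)),
        "pitch_median": float(np.median(f0)),
        "pitch_std": float(np.std(f0)),
        "intensity_mean": float(np.mean(intensity.values)),
    }

# Example call with a hypothetical file name (the BESS file layout is not specified):
# feats = prosodic_features("BESS/happy/sample_01.wav")
```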
The sample audio recordings of BESS and EESS for the phrases "Poro shobdo ti bolo" and "Say the word Read", eliciting the emotional state, happy, are illustrated in Fig. 4a and Fig.4b, respectively. As demonstrated in Fig. 2, these phrases are translations of each other.
In each of Fig. 4a and Fig. 4b, the audio signals are divided into 2 parts, such as -1) the time-domain audio signal, depicted in the upper portion of each figure, with the x-axis representing time and the y-axis representing the amplitude of the signal, and 2) a frequency vs time plot of the corresponding audio signal, depicted in the lower portion of each figure [36]. If observed carefully, the amplitude of the Bangla audio signal in Fig. 4a, varies compared to that of the English audio signal in Fig. 4b. The peaks of amplitude for the Bangla audio sample can be segregated into individual words such as "Poro", "shobdo", "ti", and "bolo". Similarly, the English audio sample in Fig. 4b, can be separated into individual words such as "Say", "the", "word", and "Read". Upon further inspection and keeping in mind that both audio samples were recorded for the happy emotion, it can be discerned that there are intermittent signal gaps for the Bangla audio sample (Fig. 4a). However, such manifestation cannot be inferred for the English audio sample (Fig. 4b).
Essentially, in the English language, the sentence pattern is the subject, then the verb, and finally the object [43], whereas in Bangla it follows the pattern of the subject, then the object, and finally the verb [44]. Furthermore, from the verbatim translation of Bangla words to English words in Fig. 2, it is evident that the two languages adhere to different sentence patterns. The variation in the speech signals for English and Bangla languages is mainly due to these nuances. However, as mentioned earlier, since we are only using prosodic features, which are language-independent [21], [22], the style of utterance does not matter, and these nuances do not hamper the classification of emotions from varying speech signals.
Considering the two prosodic features, pitch standard deviation and intensity for the emotional states, happy and disgust, 2 feature distribution plots, combining these features for all the 3 datasets (BESS, EESS, and E-TESS), are depicted in Fig. 5a and Fig. 5b, respectively. In the feature distribution plot for the happy emotion (Fig. 5a), the features for the 3 datasets appear to be clustered together. This observation further clarifies that the prosodic features are independent of the nuances of language, and therefore, it may be stated, "for Bangla and English languages, the emotional state 'happy' is language and speaker (native/non-native) independent". On the other hand, from the feature distribution plot of the "disgust" emotion ( Fig. 5b), the two prosodic features (pitch standard deviation and intensity) for the E-TESS dataset do not overlap with those of the BESS and the EESS datasets whereas, they appear to be overlapping for latter two datasets. Therefore, from this illustration, it may be surmised, "for Bangla and English languages, the emotional state 'disgust' is language and speaker (native/non-native) dependent". The validity of these two statements will be further investigated in Section 5.
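A plot in the style of Fig. 5 can be produced directly from the extracted features; a minimal sketch, where the per-dataset feature lists and the numeric values are placeholders for illustration only, not measurements from the study:

```python
import matplotlib.pyplot as plt

# features[dataset] = list of (pitch standard deviation, intensity) pairs for
# one emotion; the tuples below are placeholders, not values from the paper.
features = {
    "BESS":   [(35.0, 72.0), (41.0, 74.0), (38.0, 73.0)],
    "EESS":   [(34.0, 71.0), (40.0, 73.0), (37.0, 72.0)],
    "E-TESS": [(52.0, 67.0), (49.0, 66.0), (55.0, 68.0)],
}

for name, points in features.items():
    xs, ys = zip(*points)
    plt.scatter(xs, ys, label=name)
plt.xlabel("Pitch standard deviation (Hz)")
plt.ylabel("Intensity (dB)")
plt.legend()
plt.title("Feature distribution for one emotion")
plt.show()
```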
Classifier
Support Vector Machine (SVM) [32] is one of the conventional classifiers deployed in SER systems [3], [6], [18], [34], [39], [45]-[48]. An SVM assigns new examples to one of two categories, making it a non-probabilistic binary linear classifier. Using the kernel trick, SVMs can effectively perform non-linear classification alongside linear classification. As mentioned earlier, the language-independent vocal features [21] selected for SER yield a better recognition rate when SVM is used [15], [22]. An SVM is a discriminative classifier, conventionally described by a separating hyperplane. Nevertheless, we also evaluated various traditional SER classifiers, such as HMM, GMM, ANN, and kNN [3], on the 3 datasets (BESS, EESS, E-TESS). As depicted in Table 1, the average emotion classification rate using an SVM classifier supersedes the other traditional classifiers (HMM, GMM, kNN, and ANN) on all the datasets. Therefore, in this study, we adopted a generic SVM kernel, analogous to the classifiers deployed in SER, to generate a non-linear hyperplane for recognizing the 6 emotional states (happy, angry, neutral, sad, disgust, and fear). Additionally, we avoided deep learning-based classifiers due to their overfitting complexities and the preliminary processing they would require on the ESSs.
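The paper describes its SVM only as a generic kernel, so the sketch below is an assumed scikit-learn setup (an RBF-kernel SVC, with scikit-learn's kNN and a small MLP standing in for the kNN and ANN baselines of Table 1; HMM and GMM are omitted for brevity), applied to a feature matrix X of the four prosodic features and emotion labels y:

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

def compare_classifiers(X, y, seed=0):
    """Fit each classifier on an 80-20 split and report test accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    models = {
        "SVM": SVC(kernel="rbf"),
        "kNN": KNeighborsClassifier(n_neighbors=5),
        "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
    }
    scores = {}
    for name, clf in models.items():
        pipe = make_pipeline(StandardScaler(), clf)   # scale the 4 prosodic features
        pipe.fit(X_tr, y_tr)
        scores[name] = pipe.score(X_te, y_te)
    return scores
```

Running such a routine once per speech set would mirror the per-dataset comparison reported in Table 1, although the exact hyperparameters used in the study are not stated.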
Experiments and Result Analysis
As mentioned earlier, each of the 3 datasets, namely, BESS, EESS, and E-TESS had a total of 300 audio speech samples, i.e., 50 audio samples for each of the 6 emotions. Considering these datasets, 3 types of experiments were conducted for investigating the language independent nature of SER, and eventually unearthing other disparities corresponding to emotions and native/non-native speakers, such as -1) on individual speech sets, 2) on integrated speech sets, and 3) by training the model using one speech set and testing it with a different speech set. We elaborate on these 3 experiments in the following sub-sections.
Experiment 1: Individual Speech Set
In this experiment, the SVM classifier was trained and tested on each of the 3 datasets individually, with an 80-20 train-to-test split of the 300 audio samples (240 training and 60 testing samples); the results are summarized in Table 2. It is evident from the literature [20] that when using a second language, speakers are prone to feel less strongly due to fewer recollections and deep-rooted memories. Therefore, a probable reason for the lower recognition rate of the model for EESS may be that this dataset was developed by the 11 Bangla native speakers, who, despite being fluent in English, could not express their emotions in English as effectively as they could have in Bangla.
Experiment 2: Integrated Speech Set
In this experiment, 2 different Integrated ESS (IESS) were formed by concatenating the audio samples of one dataset with another, such as -1) IESS-1, formed by integrating BESS and EESS, and 2) IESS-2, formed by integrating BESS and E-TESS. Each IESS had a total of 600 audio samples, which at 80-20, train to test split ratio (480 training and 120 testing samples) were used for training and testing the SVM classifier. For IESS-1, with BESS and EESS, having individual recognition rate of 88.3% and 81.7%, respectively, an overall accuracy of 85% was observed. On the other hand, IESS-2 had an overall classification rate of 83.3% while each of the datasets, BESS and E-TESS, had a performance rate of 75% and 91.7%, respectively. However, the classification rate of this experiment decreased with respect to the first one. The recognition rates of the SVM classifier for the IESS-1 and IESS-2, are summarized in Table 3 and Table 4, respectively.
Since both datasets, BESS and EESS, were developed by native Bangla speakers, the higher overall emotion recognition rate for IESS-1 compared to that for IESS-2 perhaps suggests that non-native speakers tend to express their emotions in English much like in their native tongue. Furthermore, the higher recognition rate for BESS compared to that for EESS may be attributed to the native speakers' inability to naturally express their emotions in English [20]. Again, since the datasets, BESS and E-TESS, were developed by Bangla and English native speakers, respectively, the higher recognition rate for E-TESS compared to that for BESS leads us to believe that there may be certain differences in language governing how different emotions are expressed.

Table 3: Recognition Rate of SVM for the Integrated Emotional Speech Set - 1 (IESS-1), consisting of BESS and EESS.

Emotional Speech Set (ESS)    Recognition Rate (%)    happy   angry   neutral   sad   disgust   fear
Experiment 3: Distinct Speech Set for Training and Testing
Considering the 3 datasets (BESS, EESS, E-TESS), the final experiment involved training the SVM classifier by one speech set and separately testing it with the remaining two. For instance, as illustrated in Table 5, the model was firstly trained using BESS, followed by separately testing it with the remaining datasets (EESS and E-TESS). Similarly, other combinations of the datasets were considered, and emotion recognition rates were recorded. With BESS as the training dataset, the model achieved recognition rates of 76.7% and 55%, for EESS and E-TESS datasets, respectively. However, with E-TESS as the training dataset, the recognition rate of the model for both BESS and EESS was 45%.
As observed from Table 5, the classification rate of this experiment decreased even further compared to the previous two, which may be credited to the failure of recognizing the emotional states, "disgust" and "fear" at all, for the following combinations -1) BESS as training, E-TESS as testing, 2) E-TESS as training, EESS as testing, and 3) E-TESS as training, BESS as testing. These findings lead us to believe that the prosodic cues for the emotions, "disgust" and "fear", may be different in Bangla and English when expressed by their respective native speakers. Furthermore, there might be a possibility that these emotions are influenced by the culture or background of the speaker [2], [15]. However, the remaining emotional states were moderately recognized for this pair of training and testing datasets. On the other hand, the recognition rate of the model for EESS was recorded to be comparatively higher with BESS as the training set compared to E-TESS (Table 5), fairly recognizing all the six emotions, even though the emotions, "disgust" and "fear" are expressed differently in English and Bangla languages when expressed by their native speakers. This outcome is conclusive because, to reiterate, both BESS and EESS were developed by native Bangla speakers, and from the prior experiment, we deduced that non-native speakers might convey emotions as if they were expressing themselves in their native language. Hence, it is intuitive that this combination of datasets (BESS as training, EESS as testing) will result in a reasonable recognition rate. The result of this experiment also reinforces our claim that the non-native speakers tend to express their emotions in English, likewise their native tongue. These claims are further corroborated in the following sections.
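Experiment 3 reduces to fitting the model on one speech set and scoring it on another; a minimal sketch, assuming per-dataset feature matrices and label vectors are already available (the variable names are illustrative):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def cross_corpus_accuracy(X_train, y_train, X_test, y_test):
    """Train the SVM on one emotional speech set and test on another."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(X_train, y_train)
    return model.score(X_test, y_test)

# e.g. train on BESS and evaluate on EESS and E-TESS separately:
# acc_eess  = cross_corpus_accuracy(X_bess, y_bess, X_eess, y_eess)
# acc_etess = cross_corpus_accuracy(X_bess, y_bess, X_etess, y_etess)
```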
Binary Classification of Emotions
This experiment includes substantiating the claims formulated for the emotions, "disgust" and "fear", which were found to have different expressions in Bangla and English languages when spoken by their native speakers. The third experiment is replicated in this instance, which involved training the SVM classifier by one speech set and separately testing it with the remaining two. However, instead of multi-class classification, binary classification for each of the six emotions is carried out. For example, considering the emotion, happy, any instance of any dataset, labelled as either disgust or fear, or any of the remaining emotions, can be equivalently labelled as, not happy. In this manner, for each dataset, there will be only two emotions, for example, "happy" -"not happy", "angry" -"not angry", "neutral" -"not neutral", "sad" -"not sad", "disgust" -"not disgust", and "fear" -"not fear".
Each of the training sets contains an equal number of labelled training samples for the binary emotions. For instance, the emotion, happy, has 50 audio samples and the emotion, not happy, also includes 50 audio samples (5 remaining emotions × 10 audio samples per emotion). This procedure is replicated for the rest of the emotions for each dataset. After this, the SVM model is trained on the binary labelled speech set and tested with a different binary labelled speech set for the same emotion. Testing was conducted with subsets of 30 audio samples from the testing dataset, and the overall recognition rate was recorded.
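The relabelling for the binary experiments can be written down directly; a short sketch, assuming the samples of one speech set are stored as (feature-vector, emotion) pairs. The paper does not say how the 10 negatives per emotion were chosen, so the random sampling here is an assumption:

```python
import random

EMOTIONS = ["happy", "angry", "neutral", "sad", "disgust", "fear"]

def binary_split(samples, target, negatives_per_emotion=10, seed=0):
    """Build a balanced binary set: all 50 'target' samples vs. 10 samples
    drawn from each of the remaining 5 emotions (50 vs. 50)."""
    rng = random.Random(seed)
    positives = [(x, 1) for x, e in samples if e == target]
    negatives = []
    for other in EMOTIONS:
        if other == target:
            continue
        pool = [x for x, e in samples if e == other]
        negatives += [(x, 0) for x in rng.sample(pool, negatives_per_emotion)]
    return positives + negatives
```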
As delineated in Table 6 and Table 7, the results from this binary classification further substantiate and validate the claims put forward in experiment 3. The emotions disgust and fear were completely unrecognized with BESS as the training and E-TESS as the testing datasets. Furthermore, with E-TESS as the training and either of the two remaining speech sets (BESS and EESS) as the testing datasets, the emotions disgust and fear were left unrecognized as well. The confusion matrix for the "disgust" emotion (with E-TESS as the training and EESS as the testing datasets), as shown in Table 8, demonstrates that no true positives were recorded for this emotion, in which case the precision and the recall of the model become 0. Consequently, the F1-score, a weighted average of precision and recall, will be 0 as well. Therefore, it may be concluded that the emotions disgust and fear are expressed differently in Bangla and English languages. In contrast, these emotions were fairly recognized with BESS and EESS as the training and the testing datasets, respectively, which corroborates the claim that non-native speakers of a language tend to express their emotions much like in their native tongue. Across all the experiments, the emotional states happy, angry, neutral, and sad were identified regardless of language and native/non-native speakers. However, the emotions disgust and fear brought about some discrepancies. Considering the two hypotheses formed in Section 3.1, from these suites of experiments, it can be inferred that for Bangla and English languages, the emotional state happy is perhaps language and speaker (native/non-native) independent, while the emotional states disgust and fear are perhaps language and speaker (native/non-native) dependent. Additionally, non-native speakers are found to convey emotions analogous to their expressions in their native language.
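The precision/recall argument can be checked directly from the counts in Table 8 (TP = 0, FN = 4, FP = 2, TN = 24):

```python
tp, fn, fp, tn = 0, 4, 2, 24          # confusion counts for "disgust" (Table 8)

precision = tp / (tp + fp) if (tp + fp) else 0.0   # 0 / 2  -> 0.0
recall    = tp / (tp + fn) if (tp + fn) else 0.0   # 0 / 4  -> 0.0
f1 = (2 * precision * recall / (precision + recall)
      if (precision + recall) else 0.0)            # undefined -> taken as 0.0
print(precision, recall, f1)                       # 0.0 0.0 0.0
```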
Conclusion
In this study, we developed 3 datasets namely, Bangla Emotional Speech Set (BESS), English Emotional Speech Set (EESS), and English TESS (E-TESS) to evaluate the language independence of Speech Emotion Recognition (SER), featuring 6 emotions, such as -happy, angry, neutral, sad, disgust, and fear [1], [2], [15], [20], [23], [31], in English and Bangla languages through language-independent vocal feature selection. The datasets BESS and EESS were developed by 11 native Bangla speakers and E-TESS was developed as a subset of the Toronto Emotional Speech Set (TESS) while preserving the speech intelligibility functions across the datasets.
We coordinated 3 experiments, in which we deployed SVM for classifying emotions, as it is one of the most widely used classifiers in SER systems. Although the performance of the model varied across the 3 experiments, the reliability of recognizing different emotions was convincing. The results from the first experiment revealed that native speakers tend to express their emotions better in their native language than non-native speakers of that language. From the second and the third experiments, it was observed that non-native speakers of a language have a strong tendency to express their emotions much like in their native language. Nonetheless, these experiments further point out that there may be certain differences in languages that govern the expression of different emotions.
This claim was eventually solidified by the end of the third experiment when the emotional states, disgust and fear were revealed to be language and speaker dependent. The factors behind this contrast may be credited to cultural, background, environmental or communal differences. However, this study also demonstrated that the emotional states such as happy, angry, neutral, and sad were moderately recognized irrespective of language and native/non-native speakers. Therefore, we can deduce that there may be language-specific differences for certain, if not all emotions.
The findings direct us towards the conclusion that SER in Bangla and English is mostly language independent. However, some disparity exists in the emotional states owing to cultural, environmental, or communal factors. Hence, we hope that the community will find the results of this study applicable towards achieving comprehensive control over SER regardless of language. In the future, we intend to extend our analysis of language independence of SER by including more languages, speakers, and language independent features.
Fig. 1. Construction of Bangla Emotional Speech Set (BESS) and English Emotional Speech Set (EESS) datasets by native speakers fluent and proficient in both Bangla and English.

Fig. 2. Verbatim translation of Bangla words to the corresponding English words.

Fig. 3. Formation of the English TESS (E-TESS) dataset as a subset of the existing Toronto Emotional Speech Set (TESS).

Fig. 4. Audio samples for the "happy" emotion for (a) the phrase "Poro shobdo ti bolo" of the Bangla Emotional Speech Set (BESS) and (b) the phrase "Say the word Read" of the English Emotional Speech Set (EESS).

Fig. 5. Feature (pitch standard deviation vs. intensity) distribution plot of the datasets BESS, EESS, and E-TESS for (a) the happy emotion and (b) the disgust emotion.
Table 1: Recognition Rate of different SER classifiers for Bangla (BESS), English (EESS) and English TESS (E-TESS) datasets. Values are emotion recognition rates (%).

Emotional Speech Set (ESS)             HMM    GMM    kNN    ANN    SVM
Bangla Emotional Speech Set (BESS)     45     46.7   81.3   68.3   88.3
English Emotional Speech Set (EESS)    38.3   42     75     71.7   85
English TESS (E-TESS)                  58.3   56.7   93.3   86.6   93.3
The overall emotion recognition rate across the datasets BESS, EESS, and E-TESS was recorded to be 88.3%, 85%, and 93.3%, respectively. The average performance of the 6 emotional states happy, angry, neutral, sad, disgust, and fear was 83.3%, 90%, 96.7%, 93.3%, 86.7%, and 83.3%, respectively. From this result, it can be inferred that SER works well only when one language is involved. However, for EESS, the model has a lower recognition rate compared to E-TESS. The recognition rates of the SVM model for individual ESS and different emotions are summarized in Table 2.
Table 2: Experiment 1 - Recognition rate of SVM for the individual Emotional Speech Set (ESS). Per-emotion values are recognition rates (%).

Emotional Speech Set (ESS)             Overall (%)   happy   angry   neutral   sad    disgust   fear
Bangla Emotional Speech Set (BESS)     88.3          90      100     90        90     90        70
English Emotional Speech Set (EESS)    85            70      80      100       100    80        80
English TESS (E-TESS)                  93.3          90      90      100       90     90        100
Overall Performance                    88.87         83.3    90      96.7      93.3   86.7      83.3
Table 4: Recognition rate of SVM for the Integrated Emotional Speech Set - 2 (IESS-2), consisting of BESS and E-TESS. Per-emotion values are recognition rates (%).

Emotional Speech Set (ESS)          Recognition Rate (%)   happy   angry   neutral   sad   disgust   fear
Only Bangla (BESS)                  75                     50      100     80        90    70        60
Only English TESS (E-TESS)          91.7                   100     80      100       70    100       100
Bangla & English TESS (IESS-2)      83.3                   75      90      90        80    85        80
Table 5: Experiment 3 - Recognition rate of SVM when using distinct Emotional Speech Sets for training and testing. Per-emotion values are recognition rates (%).

Training Dataset        Testing Dataset          Recognition Rate (%)   happy   angry   neutral   sad   disgust   fear
Bangla (BESS)           English (EESS)           76.7                   60      90      80        90    60        80
Bangla (BESS)           English TESS (E-TESS)    55                     100     80      80        70    0         0
English TESS (E-TESS)   English (EESS)           45                     50      100     50        70    0         0
English TESS (E-TESS)   Bangla (BESS)            45                     50      100     40        80    0         0
Table 6: Experiment 3.1 - Recognition rate of SVM in binary classification of the emotions happy, angry, and neutral (%).

Training Dataset        Testing Dataset          happy   not happy   overall   angry   not angry   overall   neutral   not neutral   overall
Bangla (BESS)           English (EESS)           53      93          88.33     77      94          90        86        97            95
Bangla (BESS)           English TESS (E-TESS)    71      91          86.67     78      96          93.33     100       100           100
English TESS (E-TESS)   English (EESS)           52      89          81.67     74      92          88.33     60        92            86.67
English TESS (E-TESS)   Bangla (BESS)            53      91          85        76      95          91.67     53        91            85
Table 7: Experiment 3.1 - Recognition rate of SVM in binary classification of the emotions sad, disgust, and fear (%).

Training Dataset        Testing Dataset          sad   not sad   overall   disgust   not disgust   overall   fear   not fear   overall
Bangla (BESS)           English (EESS)           86    96        93.33     57        94            90        77     97         95
Bangla (BESS)           English TESS (E-TESS)    60    90        91.67     0         90            81.67     0      79         65
English TESS (E-TESS)   English (EESS)           17    79        66.67     0         89            80        0      92         85
English TESS (E-TESS)   Bangla (BESS)            15    83        70        0         86            75        0      88         78.33
Table 8: Confusion Matrix for the Disgust Emotion (Training Dataset: English TESS, Testing Dataset: English).

                         Predicted: Disgust   Predicted: Not Disgust
True: Disgust            0                    4
True: Not Disgust        2                    24
Acknowledgments

The authors express their heartfelt gratitude to the participants for their valuable time and effort for making this study possible.

Declaration of Interests

The authors do not declare any potential conflict of interest that may alter the outcomes of this study in any manner and approve this version of the manuscript for publication.
Analysis of speech features for emotion detection: A review. R S Sudhakar, M C , 10.1109/ICCUBEA.2015.135Proceedings -1st International Conference on Computing, Communication, Control and Automation, ICCUBEA 2015. -1st International Conference on Computing, Communication, Control and Automation, ICCUBEA 2015R. S. Sudhakar and M. C. Anil, "Analysis of speech features for emotion detection: A review," in Proceedings - 1st International Conference on Computing, Communication, Control and Automation, ICCUBEA 2015, 2015, pp. 661-664, doi: 10.1109/ICCUBEA.2015.135.
Emotion recognition from Assamese speeches using MFCC features and GMM classifier. A B Kandali, A Routray, T K Basu, 10.1109/TENCON.2008.4766487A. B. Kandali, A. Routray, and T. K. Basu, "Emotion recognition from Assamese speeches using MFCC features and GMM classifier," 2008, doi: 10.1109/TENCON.2008.4766487.
Speech emotion recognition: Emotional models, databases, features, preprocessing methods, supporting modalities, and classifiers. M B Akçay, K Oğuz, 10.1016/j.specom.2019.12.001Speech Communication. 116M. B. Akçay and K. Oğuz, "Speech emotion recognition: Emotional models, databases, features, preprocessing methods, supporting modalities, and classifiers," Speech Communication, vol. 116. pp. 56-76, 2020, doi: 10.1016/j.specom.2019.12.001.
A survey of affect recognition methods: Audio, visual and spontaneous expressions. Z Zeng, M Pantic, G I Roisman, T S Huang, 10.1145/1322192.1322216Proceedings of the 9th International Conference on Multimodal Interfaces, ICMI'07. the 9th International Conference on Multimodal Interfaces, ICMI'07Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang, "A survey of affect recognition methods: Audio, visual and spontaneous expressions," in Proceedings of the 9th International Conference on Multimodal Interfaces, ICMI'07, 2007, pp. 126-133, doi: 10.1145/1322192.1322216.
. R W Picard, Affective Computing. MIT pressR. W. Picard, Affective Computing. MIT press, 2000.
Machine learning approach for emotion recognition in speech. M Gjoreski, H Gjoreski, A Kulakov, 10.31449/inf.v38i4.719Inform. 384M. Gjoreski, H. Gjoreski, and A. Kulakov, "Machine learning approach for emotion recognition in speech," Inform., vol. 38, no. 4, pp. 377-384, 2014, doi: 10.31449/inf.v38i4.719.
Factors in the recognition of vocally expressed emotions: A comparison of four languages. M D Pell, S Paulmann, C Dara, A Alasseri, S A Kotz, 10.1016/j.wocn.2009.07.005J. Phon. 374M. D. Pell, S. Paulmann, C. Dara, A. Alasseri, and S. A. Kotz, "Factors in the recognition of vocally expressed emotions: A comparison of four languages," J. Phon., vol. 37, no. 4, pp. 417-435, 2009, doi: 10.1016/j.wocn.2009.07.005.
Automatic emotion recognition using prosodic parameters. I Luengo, E Navas, I Hernáez, J Sánchez, 10.21437/interspeech.2005-3249th European Conference on Speech Communication and Technology. I. Luengo, E. Navas, I. Hernáez, and J. Sánchez, "Automatic emotion recognition using prosodic parameters," in 9th European Conference on Speech Communication and Technology, 2005, pp. 493-496, doi: 10.21437/interspeech.2005-324.
Speech emotion recognition using hidden Markov models. A Nogueiras, A Moreno, A Bonafonte, J B Mariño, EUROSPEECH 2001 -SCANDINAVIA -7th European Conference on Speech Communication and Technology. Accessed41A. Nogueiras, A. Moreno, A. Bonafonte, and J. B. Mariño, "Speech emotion recognition using hidden Markov models," in EUROSPEECH 2001 -SCANDINAVIA -7th European Conference on Speech Communication and Technology, 2001, vol. 41, no. 4, pp. 2679-2682, Accessed: Feb. 06, 2022. [Online]. Available: https://www.academia.edu/17870007/Speech_emotion_recognition_using_hidden_Markov_models.
Survey on speech emotion recognition: Features, classification schemes, and databases. M El Ayadi, M S Kamel, F Karray, 10.1016/j.patcog.2010.09.020Pattern Recognit. 443M. El Ayadi, M. S. Kamel, and F. Karray, "Survey on speech emotion recognition: Features, classification schemes, and databases," Pattern Recognit., vol. 44, no. 3, pp. 572-587, 2011, doi: 10.1016/j.patcog.2010.09.020.
The relationship between task difficulty and emotion in online computer programming tutoring (abstract only). J B Wiggins, J F Grafsgaard, K E Boyer, E N Wiebe, J C Lester, 10.1145/2538862.2544298J. B. Wiggins, J. F. Grafsgaard, K. E. Boyer, E. N. Wiebe, and J. C. Lester, "The relationship between task difficulty and emotion in online computer programming tutoring (abstract only)," Mar. 2014, pp. 721-721, doi: 10.1145/2538862.2544298.
Determinants and consequences of employee displayed positive emotions. W.-C Tsai, 10.1177/014920630102700406J. Manage. 274W.-C. Tsai, "Determinants and consequences of employee displayed positive emotions," J. Manage., vol. 27, no. 4, pp. 497-512, Aug. 2001, doi: 10.1177/014920630102700406.
EmoVoice -A framework for online recognition of emotions from voice. T Vogt, E André, N Bee, 10.1007/978-3-540-69369-7_21Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 5078T. Vogt, E. André, and N. Bee, "EmoVoice -A framework for online recognition of emotions from voice," in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2008, vol. 5078 LNCS, pp. 188-199, doi: 10.1007/978-3-540-69369-7_21.
Recognition of emotions in interactive voice response systems. S Yacoub, S Simske, X Lin, J Burns, EUROSPEECH 2003 -8th European Conference on Speech Communication and Technology. S. Yacoub, S. Simske, X. Lin, and J. Burns, "Recognition of emotions in interactive voice response systems," in EUROSPEECH 2003 -8th European Conference on Speech Communication and Technology, 2003, pp. 729-732, Accessed: Feb. 06, 2022. [Online]. Available: http://hpl.americas.hp.net/techreports/2003/HPL-2003-136.pdf.
A Study of Language and Classifierindependent Feature Analysis for Vocal Emotion Recognition. F Noroozi, M Marjanovic, A Njegus, S Escalera, G Anbarjafari, F. Noroozi, M. Marjanovic, A. Njegus, S. Escalera, and G. Anbarjafari, "A Study of Language and Classifier- independent Feature Analysis for Vocal Emotion Recognition," arxiv.org, p. 1, 2018, Accessed: Feb. 06, 2022. [Online]. Available: https://arxiv.org/abs/1811.08935.
Evidence for cultural dialects in vocal emotion expression: Acoustic classification within and across five nations. P Laukka, D Neiberg, H A Elfenbein, 10.1037/a0036048Emotion. 143P. Laukka, D. Neiberg, and H. A. Elfenbein, "Evidence for cultural dialects in vocal emotion expression: Acoustic classification within and across five nations," Emotion, vol. 14, no. 3, pp. 445-449, 2014, doi: 10.1037/a0036048.
Recognizing emotion from speech based on age and gender using hierarchical models. F A Shaqra, R Duwairi, M Al-Ayyoub, 10.1016/j.procs.2019.04.009Procedia Computer Science. edia Computer Science151F. A. Shaqra, R. Duwairi, and M. Al-Ayyoub, "Recognizing emotion from speech based on age and gender using hierarchical models," in Procedia Computer Science, 2019, vol. 151, pp. 37-44, doi: 10.1016/j.procs.2019.04.009.
How aging affects the recognition of emotional speech. S Paulmann, M D Pell, S A Kotz, 10.1016/j.bandl.2007.03.002Brain Lang. 1043S. Paulmann, M. D. Pell, and S. A. Kotz, "How aging affects the recognition of emotional speech," Brain Lang., vol. 104, no. 3, pp. 262-269, 2008, doi: 10.1016/j.bandl.2007.03.002.
Handbook of human-computer interaction. M G H , ElsevierM. G. H. (ed.), Handbook of human-computer interaction. Elsevier, 2014.
| [] |
[
"Sentiment Analysis in the News",
"Sentiment Analysis in the News"
] | [
"Alexandra Balahur abalahur@dlsi.ua.es \nDepartment of Software and Computing Systems Ap. de Correos 99\nUniversity of Alicante\nE-03080AlicanteSpain\n",
"Ralf Steinberger ralf.steinberger@jrc.ec.europa.eu \nEuropean Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)\nT.P. 267, Via Fermi2749 21027IspraVAItaly\n",
"Mijail Kabadjov mijail.kabadjov@jrc.ec.europa.eu \nEuropean Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)\nT.P. 267, Via Fermi2749 21027IspraVAItaly\n",
"Vanni Zavarella vanni.zavarella@ext.jrc.ec.europa.eu \nEuropean Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)\nT.P. 267, Via Fermi2749 21027IspraVAItaly\n",
"Erik Van Der Goot erik.van-der-goot@jrc.ec.europa.eu \nEuropean Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)\nT.P. 267, Via Fermi2749 21027IspraVAItaly\n",
"Matina Halkia matina.halkia@jrc.ec.europa.eu \nEuropean Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)\nT.P. 267, Via Fermi2749 21027IspraVAItaly\n",
"Bruno Pouliquen bruno.pouliquen@jrc.ec.europa.eu \nEuropean Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)\nT.P. 267, Via Fermi2749 21027IspraVAItaly\n",
"Jenya Belyaeva jenya.belyaeva@ext.jrc.ec.europa.eu \nEuropean Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)\nT.P. 267, Via Fermi2749 21027IspraVAItaly\n"
] | [
"Department of Software and Computing Systems Ap. de Correos 99\nUniversity of Alicante\nE-03080AlicanteSpain",
"European Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)\nT.P. 267, Via Fermi2749 21027IspraVAItaly",
"European Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)\nT.P. 267, Via Fermi2749 21027IspraVAItaly",
"European Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)\nT.P. 267, Via Fermi2749 21027IspraVAItaly",
"European Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)\nT.P. 267, Via Fermi2749 21027IspraVAItaly",
"European Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)\nT.P. 267, Via Fermi2749 21027IspraVAItaly",
"European Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)\nT.P. 267, Via Fermi2749 21027IspraVAItaly",
"European Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)\nT.P. 267, Via Fermi2749 21027IspraVAItaly"
] | [] | Recent years have brought a significant growth in the volume of research in sentiment analysis, mostly on highly subjective text types (movie or product reviews). The main difference these texts have with news articles is that their target is clearly defined and unique across the text. Following different annotation efforts and the analysis of the issues encountered, we realised that news opinion mining is different from that of other text types. We identified three subtasks that need to be addressed: definition of the target; separation of the good and bad news content from the good and bad sentiment expressed on the target; and analysis of clearly marked opinion that is expressed explicitly, not needing interpretation or the use of world knowledge. Furthermore, we distinguish three different possible views on newspaper articlesauthor, reader and text, which have to be addressed differently at the time of analysing sentiment. Given these definitions, we present work on mining opinions about entities in English language news, in which (a) we test the relative suitability of various sentiment dictionaries and (b) we attempt to separate positive or negative opinion from good or bad news. In the experiments described here, we tested whether or not subject domain-defining vocabulary should be ignored. Results showed that this idea is more appropriate in the context of news opinion mining and that the approaches taking this into consideration produce a better performance. | null | [
"https://arxiv.org/pdf/1309.6202v1.pdf"
] | 17,446,675 | 1309.6202 | 996978a9fe73a34906f2187301f0a0334cced862 |
Sentiment Analysis in the News
Alexandra Balahur abalahur@dlsi.ua.es
Department of Software and Computing Systems Ap. de Correos 99
University of Alicante
E-03080AlicanteSpain
Ralf Steinberger ralf.steinberger@jrc.ec.europa.eu
European Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)
T.P. 267, Via Fermi2749 21027IspraVAItaly
Mijail Kabadjov mijail.kabadjov@jrc.ec.europa.eu
European Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)
T.P. 267, Via Fermi2749 21027IspraVAItaly
Vanni Zavarella vanni.zavarella@ext.jrc.ec.europa.eu
European Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)
T.P. 267, Via Fermi2749 21027IspraVAItaly
Erik Van Der Goot erik.van-der-goot@jrc.ec.europa.eu
European Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)
T.P. 267, Via Fermi2749 21027IspraVAItaly
Matina Halkia matina.halkia@jrc.ec.europa.eu
European Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)
T.P. 267, Via Fermi2749 21027IspraVAItaly
Bruno Pouliquen bruno.pouliquen@jrc.ec.europa.eu
European Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)
T.P. 267, Via Fermi2749 21027IspraVAItaly
Jenya Belyaeva jenya.belyaeva@ext.jrc.ec.europa.eu
European Commission -Joint Research Centre IPSC -GlobeSec -OPTIMA (OPensource Text Information Mining and Analysis)
T.P. 267, Via Fermi2749 21027IspraVAItaly
Sentiment Analysis in the News
Recent years have brought a significant growth in the volume of research in sentiment analysis, mostly on highly subjective text types (movie or product reviews). The main difference these texts have with news articles is that their target is clearly defined and unique across the text. Following different annotation efforts and the analysis of the issues encountered, we realised that news opinion mining is different from that of other text types. We identified three subtasks that need to be addressed: definition of the target; separation of the good and bad news content from the good and bad sentiment expressed on the target; and analysis of clearly marked opinion that is expressed explicitly, not needing interpretation or the use of world knowledge. Furthermore, we distinguish three different possible views on newspaper articlesauthor, reader and text, which have to be addressed differently at the time of analysing sentiment. Given these definitions, we present work on mining opinions about entities in English language news, in which (a) we test the relative suitability of various sentiment dictionaries and (b) we attempt to separate positive or negative opinion from good or bad news. In the experiments described here, we tested whether or not subject domain-defining vocabulary should be ignored. Results showed that this idea is more appropriate in the context of news opinion mining and that the approaches taking this into consideration produce a better performance.
Introduction
Most work on opinion mining has been carried out on subjective text types such as blogs and product reviews. Authors of such text types typically express their opinion freely. The situation is different in news articles: many newspapers (with the exception of a few tabloids that are monitored by EMM) at least want to give an impression of objectivity so that journalists will often refrain from using clearly positive or negative vocabulary. They may resort to other means to express their opinion, such as embedding statements in a more complex discourse or argument structure, they may omit some facts and highlight others, they may quote other persons who say what they feel, etc. Automatically identifying sentiment that is not expressed lexically is rather difficult, but lexically expressed opinion can be found in news texts, even if it is less frequent than in product or film reviews. Another difference between reviews and news is that reviews frequently are about a relatively concrete object (referred to as the "target"), while news articles may span larger subject domains, more complex event descriptions and a whole range of targets (e.g. various, even opposing, politicians). Unpublished in-house experiments on document-level sentiment analysis (counting stronger and weaker positive and negative words in the whole article) led us to believe that it is very important to clearly identify the target of any sentiment expressed and to restrict the analysis to the immediate context of the target . We have also observed that automatic opinion mining systems usually identify negative opinion values about entities when these were mentioned in the context of negative news, such as, for instance, the outbreak of the world financial crisis in 2008. This negative spike is mostly independent of the role of an entity in the events, i.e. the sentiment value towards a person may be negative even if this person is attempting to act positively in the event. For these reasons, we have focused in our recent opinion mining experiments, presented here, on considering smaller and larger word windows around entities, and we have attempted to separate positive and negative sentiment from good and bad news.
The EMM News Data
The EMM applications NewsBrief and MedISys categorise the news into one or more of several hundred subject domain classes, including, for instance, natural disasters, security, finance, nuclear issues, various diseases, organisations, countries, regions, specific conflicts, etc. Categorisation is achieved by (often user-defined) Boolean search word expressions or by using lists of search words with varying (positive or negative) weights and a threshold ). These category-defining word lists will thus contain terms such as "disaster", "tsunami" and "crisis", etc., which are likely to also be found in lists of sentiment vocabulary. The idea we followed up in our experiments is to exclude those category-defining words from our sentiment analysis that are part of the category definitions of the subject domains with which the news article was tagged. The category definitions may not contain all content words that are also sentiment vocabulary and a more complete hand-produced list might be more efficient. However, the advantage of using the existing category definitions is that they are all ready-made for dozens of languages, making it simple to use the same method for sentiment analysis in many more languages without much effort, should the approach be successful. From the news in 13 languages, an average 3165 reported speech quotations per day are automatically extracted (Pouliquen et al., 2007). The person issuing the quotation is extracted, and so is any entity that is being mentioned inside the quotation. In the experiments presented here, we test our methods on these automatically extracted quotations, although nothing would stop us from applying them to any other text segment. The reason for using quotations is that the text in quotes is usually more subjective than the other parts of news articles. We also know for quotes who the person is that made the statement (referred to as the source of the opinion statement) andif the speaker makes reference to another entity within the quotationwe have a clue about the possible target (or object) of the sentiment statement. Although at this point we only employ the presented algorithm on quotes, the main objective of our research is to determine the best approach to detecting sentiment in the news in general. Such an algorithm can subsequently be employed in all news texts, not only quotes.
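The category-word exclusion described above amounts to a set difference between the sentiment lexicon and the category definitions that matched the article. The following sketch illustrates the idea with toy data; the lexicon entries and category definitions are invented placeholders, not the actual EMM resources.

# Toy illustration of removing category-defining words from a sentiment
# lexicon before scoring an article (placeholder data, not the EMM resources).

sentiment_lexicon = {"crisis": -1, "disaster": -4, "excellent": 4, "poor": -1}

# Words that define the subject-domain categories assigned to an article.
category_definitions = {
    "finance": {"crisis", "bank", "market"},
    "natural_disasters": {"disaster", "tsunami", "earthquake"},
}

def filtered_lexicon(article_categories):
    """Drop sentiment words that also define one of the article's categories."""
    excluded = set()
    for category in article_categories:
        excluded |= category_definitions.get(category, set())
    return {w: s for w, s in sentiment_lexicon.items() if w not in excluded}

print(filtered_lexicon({"finance"}))            # 'crisis' is removed
print(filtered_lexicon({"natural_disasters"}))  # 'disaster' is removed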
Related Work
Subjectivity analysis is defined by Wiebe (1994) as the "linguistic expression of somebody's opinions, sentiments, emotions, evaluations, beliefs and speculations". In her definition, the author was inspired by the work of the linguist Ann Banfield (Banfield, 1982), who defines as subjective the "sentences that take a character's point of view (Uspensky, 1973)" and that present private states (Quirk, 1985) (i.e. states that are not open to objective observation or verification) of an experiencer, holding an attitude, optionally towards an object. Opinion mining has been defined as a recent discipline at the crossroads of information retrieval and computational linguistics which is concerned not with the topic a document is about, but with the opinion it expresses. Dave et al. (2003) define an opinion mining system as one that is able to "process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good)." Opinion mining, in this context, therefore aims at extracting and analysing judgements on various aspects of given products. A similar paradigm is given by Hu and Liu (2004), who entitle it feature-based opinion mining. Kim and Hovy (2005) define opinion as a quadruple (Topic, Holder, Claim, Sentiment), in which the Holder believes a Claim about the Topic, and in many cases associates a Sentiment, such as good or bad, with the belief. The authors distinguish among opinions with sentiment and opinions without sentiment, and between directly and indirectly expressed opinions with sentiment. In other approaches, capturing favourability versus unfavourability, support versus opposition, criticism versus appreciation, liking versus disliking, and even bad versus good news classification have been considered to be sentiment analysis.
However, at the moment of annotating sentiment in newspaper articles, we have seen that combining all these aspects together did not help to clarify what the task was and how annotation should be done. Even in the case of quotes, which are short pieces of text where the source was known and the possible targets were identified, expressions of opinion that needed some kind of interpretation or knowledge of the situation fell short of agreement, due to personal convictions, background and so on.
Experiments and evaluation
Redefining the task
To clarify the task of opinion mining from news, we selected a collection of 1592 quotes (reported speech) from newspaper articles in English, whose source and target were known (their extraction patterns are designed with that scope) which we set out to annotate. A histogram of the quotes" length is shown in Figure 1. The first experiments had an inter-annotator agreement of under 50%. Specifying that just the sentiment on the target should be annotated and separated from the good and bad news that was described led to an increase in the agreement up to 60%. We realised that by delimiting a few aspects, the task became much clearer. These aspects included not using one"s background knowledge or interpreting what is said. The original data set we decided to annotate contained 1592 quotes extracted from news in April 2008. The average final agreement was 81%, between 3 pairs of two annotators each. The result of the annotation guidelines and labelling process was a corpus in which we agreed what sentiment was and what it was not. The number of agreed sentiment-containing quotes was one third of the total number of agreed quotes, showing that only clear, expressly stated opinion was marked, i.e. opinions that required no subjective interpretation from the annotator"s part. The result of our labelling showed that in the case of newspapers, it is mandatory to distinguish between three different "components": the author, the reader and the text itself ( Figure 2). While the author might convey certain opinions, by omitting or stressing upon some aspect of the text and by thus inserting their own opinion towards the facts, the spotting of such phenomena is outside the aim of sentiment analysis as we have defined it. Instead, such phenomena should be analysed as part of work on perspective determination or news bias research. From the reader's point of view, the interpretations of the text can be multiple and they depend on the personal background knowledge, culture, social class, religion etc.
as far as what is normal (expected) and what is not are concerned. Lastly, the opinion stated strictly in the text is the one that one should concentrate on at this level, being expressed directly or indirectly, by the source, towards the target, with all the information needed to draw this conclusion on polarity present in the text. From the author's and the reader's perspective, and not from the text's pure informational point of view, opinion is conveyed through facts that are interpretable by the emotion they convey. However, emotions are not universal in their meaning. They are determined socially, culturally and historically. There are general emotions, but most of the time they relate to the norms, their significance and the cultural environment. Emotions imply an evaluation, which is both cognitive and affective, of a behaviour, with respect to a norm and the mutual expectation it raises. Some norms are common sense and are accepted and understood by all. Normative expectations link the behaviour (reaction) to a meaning and, on this ground, to the understanding it is given. From the reader's point of view, sentiment analysis would be defined as the assessment of a "target", based on its characteristics and factual information related to it, according to whether or not the results of the assessments are "according to" or "against" the "norm" (their personal understanding and approval of what is "good" and "bad" in a certain situation). From the author's point of view, news bias or perspective determination should be concerned with discovering the ways in which the expression of facts, word choice, omissions, debate limitations, story framing, selection and use of sources of quotes and the quote boundaries, for example, convey a certain sentiment or not. The sentiment content of the text, finally, is what is expressly stated, and not what is left to be understood between the lines. Our effort focuses on detecting this last aspect.
Experiments
In order to measure the impact of our defined task, we performed different experiments on the set of 1292 quotes on which agreement has been reached. Out of these 1292, the target was successfully identified by the sentiment analysis system in 1114 quotes (direct mentions of the target through the name or its title). The baseline we compare against is the percentage of quotes pertaining to the largest class of quotes (objective), which represents 61% of our corpus. According to the approach we settled on, we wanted to make sure that: a) we estimate the opinion on the target of the quote (by computing the opinion in windows of words between the mentions of the entity), and b) we eliminate the bad versus good news content (by eliminating those words which are both sentiment-bearing words and words that are part of EMM category definitions, from now on called category words). Given that we are faced with the task of classifying opinion in a general context, we employed a simple, yet efficient approach, presented in . At the present moment, there are different lexicons for affect detection and opinion mining. In order to have a more extensive database of affect-related terms, in the following experiments we used WordNet Affect (Strapparava and Valitutti, 2004), SentiWordNet, and MicroWNOp (Cerini et al., 2007). Additionally, we used an in-house built resource of opinion words with associated polarity, which we denote by JRC Tonality. Each of the employed resources was mapped to four categories, which were given different scores: positive (1), negative (-1), high positive (4) and high negative (-4). The score of each quote was computed as the sum of the values of the words identified around the mentions of the entity that was the target of the quote, mentioned either directly (using the name) or by its title (e.g. Gordon Brown can be referred to as "Gordon", as "Brown" or as "the British prime-minister"). The experiments used different windows around the mentions of the target, computing a score of the opinion words identified and eliminating the words that were at the same time opinion words and category words (e.g. crisis, disaster). Table 2 presents an overview of the results obtained using different window sizes and eliminating or not the category words, in terms of accuracy (the number of quotes that the system correctly classified as positive, negative or neutral, divided by the total number of quotes). As can be seen, the different lexicons performed dramatically differently, and the impact of eliminating the alert words was significant for some resources and nil for others, i.e. in those cases where there were no category words that coincided with words in the respective lexicon.
Table 2: Accuracy obtained using different lexicons, window sizes and alerts
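A minimal sketch of the scoring scheme just described: lexicon entries carry the four scores (positive = 1, negative = -1, high positive = 4, high negative = -4), and a quote is scored by summing the values of the lexicon words found within a fixed window around each mention of the target entity, skipping words that are also category words. The lexicon, the window size and the tokenisation below are simplified placeholders, not the JRC Tonality resource itself.

import re

LEXICON = {"good": 1, "excellent": 4, "bad": -1, "disaster": -4}

def quote_score(quote, target_aliases, category_words, window=6):
    """Sum lexicon values in a +/- window around each mention of the target."""
    tokens = re.findall(r"\w+", quote.lower())
    mentions = [i for i, tok in enumerate(tokens) if tok in target_aliases]
    counted, score = set(), 0
    for i in mentions:
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            word = tokens[j]
            if j in counted or word in category_words or word in target_aliases:
                continue
            score += LEXICON.get(word, 0)
            counted.add(j)
    return score  # > 0: positive, < 0: negative, 0: neutral

print(quote_score("The excellent plan of Brown avoided a disaster, they said.",
                  {"brown"}, category_words={"disaster"}))  # -> 4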
As we can see from the difference in the results between the opinion mining process applied to the whole text and applied only to text spans around named entities, computing sentiment around the mentions of the entity in smaller window sizes performs better than computing the overall sentiment of texts where the entities are mentioned. From our experiments, we could notice that some resources have a tendency to over-classify quotes as negative (WordNet Affect) and some have the tendency to over-classify quotes as positive (SentiWordNet). We have performed evaluations using combinations of these four lexicons. The best results we obtained were using the combination of JRC Tonality and MicroWN, on a window of 6 words; in this case, the accuracy we obtained was 82%. As we can see, the majority of the resources used did not pass the baseline (61%), which shows that large lexicons do not necessarily mean an increase in the performance of systems using them.
Error analysis
Subsequent to the evaluation, we performed an analysis of the cases where the system fails in correctly classifying the sentiment of the phrase or incorrectly classifies it as neutral. The largest percentage of failures is represented by quotes which are erroneously classified as neutral because no sentiment words are present to account for the opinion in an explicit manner (e.g. "We have given X enough time", "He was the one behind all these atomic policies", "These revelations provide, at the very least, evidence that X has been doing favours for friends", "We have video evidence that activists of the X are giving out food products to voters") or because idiomatic expressions are used to express sentiment (e.g. "They have stirred the hornet's nest"). Errors in misclassifying sentences as positive instead of negative, or vice-versa, were caused by the use of irony (e.g. "X seemed to offer a lot of warm words, but very few plans to fight the recession"). Finally, quotes were misclassified as positive or negative (when they should in fact be neutral) because of the presence of a different opinion target in the context (e.g. "I've had two excellent meetings with X", "At the moment, Americans seem willing to support Y in his effort to win the war", "everyone who wants Y to fail is an idiot, because it means we're all in trouble", "The chances of this strategy announced by X are far better than the purely military strategy of the past...") or because of anaphoric references to the real target. All these problems require the implementation of specific methods to tackle them. Thus, firstly, the opinion lexicons should be extended to contain concepts which implicitly imply an assessment of the target because they are concepts we employ in our everyday lives (e.g. "hunger, food, approval"). Secondly, expressions that are frequently used in a language to describe "good" and "bad" situations have to be added to the opinion lexicon (e.g. "stir the hornet's nest", "take the bull by the horns"). Irony is difficult to detect in text; however, when dealing with a larger context, the polarity of such pieces of text could be determined in relation to that of the surrounding sentences. Further on, we are researching methods to determine the target of the opinion using Semantic Roles; thus, the judgement on the opinion expressed can be improved. Finally, resolving co-reference using a standard tool should in theory lead to a higher performance of the opinion mining system. However, in practice, in our preliminary experiments the performance of the opinion mining system decreases when an anaphora resolution tool is employed.
Conclusions and future work
In this paper, we summarised our insights regarding sentiment classification for news and applied different methods to test the appropriateness of different resources and approaches to the task defined. We have seen that there is a need to clearly define, before the annotation is done, what the source and the target of the sentiment are, to subsequently separate the good and bad news content from the good and bad sentiment expressed on the target and, finally, to annotate only clearly marked opinion that is expressed explicitly, not needing interpretation or the use of world knowledge. We have furthermore seen that there are three different possible views on newspaper articles (author, reader and text) and that they have to be addressed differently at the time of analysing sentiment. We have performed experiments in this direction, by using categories to separate good and bad news content from the opinionated parts of the text. We also evaluated our approach using different lexicons in diverse combinations, and word windows.
We have shown that this simple approach produces good results when the task is clearly defined. Future work includes evaluating the impact of using negation and valence shifters and the use of other methods that have been proven efficient, such as machine learning using similarity with annotated corpora or syntactic patterns (Riloff and Wiebe, 2003). We also plan to extend the lexica used with different concepts that are intrinsically referring to a positive or negative situation and include target detection. Last, but not least, we are assessing methods to extend the lexicons for additional languages and subsequently compare opinion trends across sources and time.
Figure 1: Histogram of the quotes' length
Figure 2: The three components of text opinion
Table 1: Results of the data annotation
For the full details on how the names and corresponding titles are obtained, please see.
Balahur, A., Steinberger, R., Van der Goot, E., Pouliquen, B., Kabadjov, M. (2009). Opinion Mining on Newspaper Quotations. In Proceedings of the workshop 'Intelligent Analysis and Processing of Web News Content' (IAPWNC), held at the 2009 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology. Milano, Italy.
Balahur, A., Steinberger, R. (2009). Rethinking Opinion Mining in News: from Theory to Practice and Back. In Proceedings of the 1st Workshop on Opinion Mining and Sentiment Analysis, Satellite to CAEPIA 2009.
Banfield, A. (1982). Unspeakable Sentences: Narration and Representation in the Language of Fiction. Routledge and Kegan Paul.
Belyaeva, E., Van der Goot, E. (2009). News bias of online headlines across languages: The study of conflict between Russia and Georgia, August 2008. In Rhetorics of the Media, Conference Proceedings. Lodz University Publishing House.
Cerini, S., Compagnoni, V., Demontis, A., Formentelli, M., Gandini, G. (2007). Micro-WNOp: A gold standard for the evaluation of automatically compiled lexical resources for opinion mining. In Language Resources and Linguistic Theory: Typology, Second Language Acquisition, English Linguistics. Franco Angeli Editore, Milano, IT.
Dave, K., Lawrence, S., Pennock, D.M. (2003). Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. In Proceedings of WWW, pp. 519-528.
Esuli, A., Sebastiani, F. (2006). SentiWordNet: A publicly available resource for opinion mining. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2006), Italy.
Fortuna, B., Galleguillos, C., Cristianini, N. (2009). Detecting the bias in media with statistical learning methods. In Text Mining: Theory and Applications. Taylor and Francis.
Goleman, D. (1995). Emotional Intelligence. Bantam Books.
Kim, S.-M., Hovy, E. (2004). Determining the Sentiment of Opinions. In Proceedings of COLING 2004.
Pouliquen, B., Steinberger, R., Best, C. (2007). Automatic Detection of Quotations in Multilingual News. In Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP 2007), pp. 487-492. Borovets, Bulgaria.
Pouliquen, B., Steinberger, R. (2009). Automatic Construction of Multilingual Name Dictionaries. In Goutte, C., Cancedda, N., Dymetman, M., Foster, G. (eds.): Learning Machine Translation, pp. 59-78. MIT Press, Advances in Neural Information Processing Systems Series (NIPS).
Quirk, R. (1985). A Comprehensive Grammar of the English Language. Longman Publishing House.
Ratner, C. (2000). A cultural-psychological analysis of emotions. Culture and Psychology, (6).
Riloff, E., Wiebe, J. (2003). Learning Extraction Patterns for Subjective Expressions. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (EMNLP-03).
Steinberger, R., Pouliquen, B., Van der Goot, E. (2009). An Introduction to the Europe Media Monitor Family of Applications. In Gey, F., Kando, N., Karlgren, J. (eds.): Information Access in a Multilingual World, Proceedings of the SIGIR 2009 Workshop (SIGIR-CLIR 2009), pp. 1-8. Boston, USA.
Strapparava, C., Mihalcea, R. (2007). SemEval 2007 Task 14: Affective Text. In Proceedings of ACL 2007.
Strapparava, C., Valitutti, A. (2004). WordNet-Affect: An affective extension of WordNet. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004), Lisbon, pp. 1083-1086.
Uspensky, B. (1973). A Poetics of Composition. University of California Press, Berkeley, California.
Wiberg, M. (2004). The Interaction Society: Theories, Practice and Supportive Technologies. Idea Group Inc.
Wiebe, J. (1994). Tracking point of view in narrative. Computational Linguistics, 20.
| [] |
[
"WSD ALGORITHM BASED ON A NEW METHOD OF VECTOR-WORD CONTEXTS PROXIMITY CALCULATION VIA -FILTRATION",
"WSD ALGORITHM BASED ON A NEW METHOD OF VECTOR-WORD CONTEXTS PROXIMITY CALCULATION VIA -FILTRATION"
] | [
"A N Kirillov \nInstitute of Applied Mathematical Research\nKarelian Research Centre of the Russian Academy of Sciences\n\n",
"N B Krizhanovskaya \nInstitute of Applied Mathematical Research\nKarelian Research Centre of the Russian Academy of Sciences\n\n",
"A A Krizhanovsky \nInstitute of Applied Mathematical Research\nKarelian Research Centre of the Russian Academy of Sciences\n\n"
] | [
"Institute of Applied Mathematical Research\nKarelian Research Centre of the Russian Academy of Sciences\n",
"Institute of Applied Mathematical Research\nKarelian Research Centre of the Russian Academy of Sciences\n",
"Institute of Applied Mathematical Research\nKarelian Research Centre of the Russian Academy of Sciences\n"
] | [
"Transactions of Karelian Research Centre RAS Труды Карельского научного центра РАН"
] | The problem of word sense disambiguation (WSD) is considered in the article. Set of synonyms (synsets) and sentences with these synonyms are taken. It is necessary to automatically select the meaning of the word in the sentence. 1285 sentences were tagged by experts, namely, one of the dictionary meanings was selected by experts for target words. To solve the WSD problem, an algorithm based on a new method of vector-word contexts proximity calculation is proposed. A preliminary -filtering of words is performed, both in the sentence and in the set of synonyms, in order to achieve higher accuracy. An extensive program of experiments was carried out. Four algorithms are implemented, including the new algorithm. Experiments have shown that in some cases the new algorithm produces better results. The developed software and the tagged corpus have an open license and are available online. Wiktionary and Wikisource are used. A brief description of this work can be viewed as slides (https://goo.gl/9ak6Gt). A video lecture in Russian about this research is available online (https://youtu.be/-DLmRkepf58).K e y w o r d s: | 10.17076/mat829 | [
"https://arxiv.org/pdf/1805.09559v2.pdf"
] | 43,942,835 | 1805.09559 | 948c5f545d08fa893e877446b5ed5ff913e300b6 |
WSD ALGORITHM BASED ON A NEW METHOD OF VECTOR-WORD CONTEXTS PROXIMITY CALCULATION VIA -FILTRATION
2018. 2018
A N Kirillov
Institute of Applied Mathematical Research
Karelian Research Centre of the Russian Academy of Sciences
N B Krizhanovskaya
Institute of Applied Mathematical Research
Karelian Research Centre of the Russian Academy of Sciences
A A Krizhanovsky
Institute of Applied Mathematical Research
Karelian Research Centre of the Russian Academy of Sciences
WSD ALGORITHM BASED ON A NEW METHOD OF VECTOR-WORD CONTEXTS PROXIMITY CALCULATION VIA -FILTRATION
Transactions of Karelian Research Centre RAS Труды Карельского научного центра РАН
7, 2018. doi: 10.17076/mat829. Keywords: synonym, synset, corpus linguistics, word2vec, Wikisource, WSD, RusVectores, Wiktionary
The problem of word sense disambiguation (WSD) is considered in the article. Set of synonyms (synsets) and sentences with these synonyms are taken. It is necessary to automatically select the meaning of the word in the sentence. 1285 sentences were tagged by experts, namely, one of the dictionary meanings was selected by experts for target words. To solve the WSD problem, an algorithm based on a new method of vector-word contexts proximity calculation is proposed. A preliminary -filtering of words is performed, both in the sentence and in the set of synonyms, in order to achieve higher accuracy. An extensive program of experiments was carried out. Four algorithms are implemented, including the new algorithm. Experiments have shown that in some cases the new algorithm produces better results. The developed software and the tagged corpus have an open license and are available online. Wiktionary and Wikisource are used. A brief description of this work can be viewed as slides (https://goo.gl/9ak6Gt). A video lecture in Russian about this research is available online (https://youtu.be/-DLmRkepf58).K e y w o r d s:
Introduction
The problem of word sense disambiguation (WSD) is a real challenge to computer scientists and linguists. Lexical ambiguity is widespread and is one of the obstructions in natural language processing.
In our previous work "Calculated attributes of synonym sets" [6], we have proposed the geometric approach to mathematical modeling of synonym set (synset) using the word vector representation. Several geometric characteristics of the synset words were suggested (synset interior, synset word rank and centrality). They are used to select the most significant synset words, i.e. the words whose senses are the nearest to the sense of the synset.
The topic related to polysemy, synonyms, filtering and WSD is continued in this article. Let us formulate the mathematical foundations for solving the problems of computational linguistics in this article.
Using the approach proposed in the paper [2], we present a WSD algorithm based on a new method of context distance (proximity) calculation via ε-filtration. The experiments show the advantages of the proposed distance over the traditional average-vector similarity measure of the distance between contexts.
New ε-proximity between finite sets
It is quite evident that the choice of the context distance is one of the crucial factors influencing WSD algorithms. Here, in order to classify discrete structures, namely contexts, we propose a new approach to context proximity based on the Hausdorff metric and the symmetric difference of sets: A △ B = (A ∪ B) ∖ (A ∩ B). Denote by S_1, S_2 the sets of vectors corresponding to the words of two contexts. Recall that in WSD procedures the distance between words is generally measured by a similarity function, the cosine of the angle between the vectors representing the words:

sim(v_1, v_2) = (v_1, v_2) / (||v_1|| · ||v_2||),

where (v_1, v_2) is the scalar (inner) product of the vectors v_1, v_2 and ||v_i|| is the norm of the vector v_i, i = 1, 2. In what follows, sim(v_1, v_2) ∈ [−1, 1]; thus, the smaller the distance, the larger the similarity. Keeping this remark in mind, we introduce the following ε-proximity of vector contexts S_1, S_2. Given ε ≥ 0, construct the sets

C(S_1, S_2, ε) = {u, v : u ∈ S_1, v ∈ S_2, sim(u, v) ≥ ε},   D(S_1, S_2, ε) = (S_1 ∪ S_2) ∖ C(S_1, S_2, ε).

Supposing that sim plays the role of a metric, D(S_1, S_2, ε) is analogous to the union of the two set differences appearing in the definition of the Hausdorff distance. Denote by |E| the power (cardinality) of a set E and let R_+ = {x : x ≥ 0, x ∈ R}.
Definition 1. The ε-proximity of contexts S_1, S_2 is the function

K(S_1, S_2, ε) = |C(S_1, S_2, ε)| / |S_1 ∪ S_2|.

It is clear that K(S_1, S_2, ε) ∈ [0, 1]. We also define the following function.

Definition 2. The K̃-proximity of contexts S_1, S_2 is the function

K̃(S_1, S_2, ε) = |C(S_1, S_2, ε)| / (1 + |D(S_1, S_2, ε)|),

describing the ratio of "near" and "distant" elements of the sets. The definition implies that min K̃(S_1, S_2, ε) = 0 and max K̃(S_1, S_2, ε) = |S_1 ∪ S_2|. The presence of 1 in the denominator avoids a zero denominator when |D(S_1, S_2, ε)| = 0.
The ubiquitous distance between contexts S_1, S_2 is based on the similarity of their average vectors: sim_av(S_1, S_2) = sim(m_1, m_2), where m_i is the average of the vectors in S_i. But the following example (Fig. 3) shows that for two geometrically distant and not too similar structures sim_av(S_1, S_2) = 1, that is, the similarity takes the maximum value.

Example

Consider the sets A = {a_1, a_2, a_3} and B = {b_1} pictured in Fig. 3, where a_1 + a_3 is the zero vector and a_2 = b_1. Then

sim_av(A, B) = sim((a_1 + a_2 + a_3)/3, b_1) = sim(a_2, b_1) = 1,   K̃(A, B, ε) = 2/3,   K(A, B, ε) = 1/2.
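A small sketch that computes the "near" set, the "distant" set and the two proximity values for a pair of vector sets, and reproduces the example above. The names C, D, K and K_tilde follow the reconstructed notation used here, and the toy vectors are chosen so that two of them cancel out while the third coincides with the single vector of the second set.

import numpy as np

def sim(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def proximities(S1, S2, eps):
    """Return the near set C, the distant set D and the values K, K_tilde."""
    near = set()
    for i, u in enumerate(S1):
        for j, v in enumerate(S2):
            if sim(u, v) >= eps:
                near.add(("S1", i))
                near.add(("S2", j))
    universe = {("S1", i) for i in range(len(S1))} | {("S2", j) for j in range(len(S2))}
    distant = universe - near
    K = len(near) / len(universe)              # Definition 1
    K_tilde = len(near) / (1 + len(distant))   # Definition 2
    return near, distant, K, K_tilde

# The example: a1 + a3 = 0 and a2 coincides with b1.
A = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, 0.0])]
B = [np.array([0.0, 1.0])]
_, _, K, K_tilde = proximities(A, B, eps=0.9)
print(K, K_tilde)  # 0.5 and 0.666..., matching K = 1/2 and K_tilde = 2/3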
The equality of the average vectors does not mean the coincidence of A and B, which are rather different (Fig. 3). In what follows, we introduce a procedure of ε-filtration, the idea of which is borrowed from the paper [2]. The synset filtration is the formation of a so-called candidate set, which consists of those synonyms whose similarity with the words of the sentence is higher than a similarity threshold ε.
The first average algorithm (Algorithm 1), described below, uses the average vector of the words of the sentence and the average vector of the candidate set of synonyms of each synset. The algorithm proceeds as follows. First, calculate the average vector of the words of the sentence. Then, for each synset, build the candidate set of synonyms by filtration and calculate its average vector: the average of the candidate vectors if the candidate set is non-empty, and the zero vector otherwise. Finally, calculate the similarity of the average vector of the sentence and the average vector of the k-th filtered synset. Result: the target word has the sense corresponding to the synset with the largest similarity. Remark: in the case ε = 0 we call this variant the 0-algorithm; in this case, the traditional averaging of similarity is used.

Note. The 0-algorithm was used in our experiments; it was implemented in Python.
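A compact sketch of this averaging algorithm, operating on a plain word-to-vector dictionary (such a dictionary could, for instance, be filled from a pre-trained word2vec model like those distributed by RusVectores). The function and variable names, the toy vectors and the threshold handling are illustrative assumptions, not the authors' original implementation; with eps = 0 the filtration is essentially inactive and the procedure reduces to plain averaging.

import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def mean_vector(words, vectors):
    vecs = [vectors[w] for w in words if w in vectors]
    return np.mean(vecs, axis=0) if vecs else None

def choose_synset(sentence, target, synsets, vectors, eps=0.0):
    """Index of the synset whose filtered average vector is closest to the
    average vector of the sentence (the target word itself is excluded)."""
    context = [w for w in sentence if w != target]
    sent_vec = mean_vector(context, vectors)
    best, best_sim = None, -2.0
    for k, synset in enumerate(synsets):
        # eps-filtration: keep synonyms similar enough to some context word.
        candidates = [s for s in synset
                      if s != target and s in vectors and
                      any(w in vectors and cos(vectors[s], vectors[w]) > eps
                          for w in context)]
        syn_vec = mean_vector(candidates, vectors)
        if sent_vec is None or syn_vec is None:
            continue
        similarity = cos(sent_vec, syn_vec)
        if similarity > best_sim:
            best, best_sim = k, similarity
    return best

# Toy vectors: the context word "river" points the target "bank" to sense 1.
vectors = {"bank": np.array([1.0, 0.0]), "money": np.array([0.9, 0.1]),
           "river": np.array([0.0, 1.0]), "shore": np.array([0.1, 0.9])}
print(choose_synset(["the", "river", "bank"], "bank",
                    [["money"], ["shore"]], vectors, eps=0.2))  # -> 1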
0-algorithm example
A simple example and figures 4-6 will help to understand how this 0 -algorithm works.
Take some dictionary word 2 with several senses and several synonym sets (for example, 1 and 2 ) and the sentence with this word (Fig. 4). The task is to select a meaning (synset) of 2 (that is the target word is * 2 ) used in the sentence via the 0 -algorithm.
Let us match the input data and the symbols used in the 0 -algorithm. The word "служить" (sluzhit') corresponds to the vector 2 . corresponding to words of the sentence , the vertex 2 was excluded since it corresponds to the target word * 2 , and (2) the target word * 2 with two synsets 1 and 2 ( Fig. 4), (3) vertices (vectors correspond to words) of the first synset are Fig. 6. Similarity between the mean value of vectors of the sentence and the first synonym set is lower than the similarity with the second synset, that is
{ 1 1 , 2 1 } and the second synset -{ 1 2 , 2 2 }( 1 , ) < ( 2 , )
. Thus, the second sense of the target word * 2 (the second synset 2 ) will be selected in the sentence by 0 -algorithm There is a dictionary article about this word in the Wiktionary, see Fig. 4 (a parsed database of Wiktionary is used in our projects). 2 Two synonym sets of this Wiktionary entry are denoted by 1 and 2 . Mean values of the vectors corresponding to synonyms in these synsets will be denoted as 1 and 2 , and is the mean vector of all vectors corresponding to words in the sentence containing the word "служить" (sluzhit').
Average algorithm with sentence and synonyms -filtration ( )
This algorithm 2 is a modification of algorithm 1. The filtration of a sentence is added to synset filtration. Namely, we select a word from the sentence for which the similarity with at least one synonym from the synset is higher than the similarity threshold . Then, we average the set of selected words forming the set of candidates from the sentence. Let us explain algorithm 2 line by line.
Lines 2-5. Given > 0, let us construct the set of words of the sentence filtered by synonyms of the k -th synset
( ) = { ∈ : ∃ ∈ , ( , ) > , ̸ = * , ̸ = * }
Denote by
( ) = | ( )| the power of the set ( ). Line 6. Calculate the average vector of words of the filtered sentence
( ) = 1 ( ) ∑︁ ∈ ( )
If ( ) = 0, then let ( ) be equal to the zero vector.
Lines 7-8. Construct filtered sets of synonyms
( ) = { ∈ : ∃ ∈ , ( , ) > , ̸ = * , ̸ = * }.
Denote by ( ) = | ( )| the power of the k -th filtered synonym set. Line 9. Calculate for ( ) > 0 the average vector of the k -th synset of candidates
( ) = 1 ( ) ∑︁ ∈ ( ) .
If ( ) = 0, then ( ) equals to the zero vector.
Line 10. Calculate the similarity of the average vectors of the filtered sentence and the k -th filtered synset
( ) = ( ( ),( ))
.
Lines 12-13. Suppose =1,..., { ( )} = * ( ), i.e. * ∈ {1, ..., } is the number of the largest ( ).
If * is not unique then take another > 0 and repeat the procedure from line 2.
Result: the target word * in the sentence has the sense corresponding to the * -th synset * . This algorithm was implemented in Python. 3 Algorithm 2: Average algorithm with sentence and synonyms -filtration ( ) Data: * -vector of the target word * with senses (synsets), ∈ , -sentence with the target word * , * ∈ , { } -synsets of the target word, that is ∋ * , = 1, . Result: * ∈ {1, . . . , } is the number of the sense of the word * in the sentence .
1 do 2 take > 0 foreach synset of the target word 3 foreach ∋ * do
construct the set of words of the sentence filtered by synonyms of the k -th synset :
4 ( ) = { ∈ : ∃ ∈ , ( , ) > , ̸ = * , ̸ = * } 5 ( ) = | ( )|, number of candidates of the sentence;
the average vector of sentence candidates:
6 ( ) = ⎧ ⎨ ⎩ 1 ( ) ∑︀ ∈ ( ) , if ( ) > 0 − → 0 , if ( ) = 0
-filtration of the synset by the sentence :
7 ( ) = { ∈ : ∃ ∈ , ( , ) > , ̸ = * , ̸ = * } 8 ( ) = | ( )|,9 ( ) = ⎧ ⎨ ⎩ 1 ( ) ∑︀ ∈ ( ) , if ( ) > 0 − → 0 , if ( ) = 0
the similarity of the average vectors of the sentence and the k -th filtered synset:
10 compute the similarity of the two average vectors; 11 end; 12 take the k* ∈ {1, . . . , N} with the largest similarity; 13 while k* is not unique
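A sketch of the doubly-filtered variant listed above: the sentence is filtered against the synonyms of each synset and the synset against the words of the sentence before the two average vectors are compared. The helper names and the threshold symbol eps are reconstructions used for illustration only.

import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def average(vecs):
    return np.mean(vecs, axis=0) if vecs else None

def choose_synset_double_filter(sentence, target, synsets, vectors, eps):
    """Pick the synset maximising the similarity of the two filtered averages."""
    context = [w for w in sentence if w != target and w in vectors]
    scores = []
    for synset in synsets:
        syns = [s for s in synset if s != target and s in vectors]
        # Keep sentence words close to at least one synonym of this synset.
        sent_cand = [w for w in context
                     if any(cos(vectors[w], vectors[s]) > eps for s in syns)]
        # Keep synonyms close to at least one word of the sentence.
        syn_cand = [s for s in syns
                    if any(cos(vectors[s], vectors[w]) > eps for w in context)]
        a = average([vectors[w] for w in sent_cand])
        b = average([vectors[s] for s in syn_cand])
        scores.append(cos(a, b) if a is not None and b is not None else -1.0)
    return int(np.argmax(scores)) if scores else None

If the highest score is shared by several synsets, the procedure above suggests repeating the computation with a different threshold.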
ε-algorithm based on ε-dilatation
Algorithm 3 (the ε-algorithm) is based on the function K̃ introduced in the section "New ε-proximity between finite sets" above, applied with the first argument equal to the k-th synset Syn_k and the second equal to the sentence S. The algorithm includes the following steps.
Lines 2-4. Given ε > 0, let us construct the set C_k(ε) of "near" words of the k-th synset and the sentence. Line 5. Denote by D_k(ε) the set of "distant" words,

D_k(ε) = (Syn_k ∪ S) ∖ C_k(ε).

Line 6. Calculate K̃_k(ε) as the ratio of the "near" and "distant" elements of the sets:

K̃_k(ε) = |C_k(ε)| / (1 + |D_k(ε)|).
Lines 8-9. Suppose =1,...,˜( ) =˜*( ). If * is not unique, then take another > 0 and repeat the procedure from line 2. Result: the target word * has the sense corresponding to the * -th synset * . An example of constructing C and D sets is presented in Fig. 7 and Table. It uses the same source data as for the 0 -algorithm, see Fig. 5.
Remark. This algorithm is applicable to the -function described in the previous section 3 as well. This algorithm was implemented in Python. 4 More details for this example (Fig. 7) are presented in Table, which shows and sets with different and values of the˜-function.
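A sketch of the proximity computation used by this algorithm is given below. It treats a word as "near" if its cosine similarity to some word of the other set exceeds ε, which is an illustrative reading of the ε-dilatation; the exact definition of the Õ-function is the one given in the section referenced above, and the released implementation is the function selectSynsetForSentenceByAlienDegree().

import numpy as np

def cos_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def proximity(synset, sentence, vectors, eps):
    # Ratio of "near" to "distant" words between a synset and a sentence.
    syn = [s for s in synset if s in vectors]
    sent = [w for w in sentence if w in vectors]
    near = {s for s in syn if any(cos_sim(vectors[s], vectors[w]) > eps for w in sent)}
    near |= {w for w in sent if any(cos_sim(vectors[w], vectors[s]) > eps for s in syn)}
    distant = (set(syn) | set(sent)) - near
    return len(near) / (1 + len(distant))

def select_synset_dilatation(sentence, synsets, vectors, eps):
    # Select the synset with the largest proximity to the sentence.
    scores = [proximity(syn, sentence, vectors, eps) for syn in synsets]
    return int(np.argmax(scores))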
Bold type of the word-vertices in the Table indicates new vertices. These new vertices are captured by the set of "near" vertices and excluded from the set of "distant" vertices with each subsequent dilatation extension, that is, with each subsequent ε. For example, in the transition from ε₁ to ε₂ the set of "distant" vertices D₂ loses the vertex w₃, while the set of "near" vertices C₂(ε₂) gains the same vertex w₃ in comparison with C₂(ε₁).

In Fig. 8, the function Õ₁(ε) shows the proximity of the sentence S and the synset syn₁, and the function Õ₂(ε) shows the proximity of S and the synset syn₂. It can be seen in Figure 8 that with decreasing ε the value of Õ₂(ε) grows faster than Õ₁(ε). Therefore, the sentence S is closer to the second synset syn₂. The same result can be seen in the previous Fig. 7.

Table. An example of the ε-dilatation algorithm treating the word w₂, which has two synsets syn₁ and syn₂, and the sentence S, where w₂ ∈ S, see Fig. 4. The number of the algorithm iteration corresponds to the index of ε. The series of ε is ordered so that 1 = ε₀ > ε₁ > ε₂ > … > ε₇ = −1. The total number of words in each synset together with the sentence is constant and equal to 6.
Õ_k(ε) = |C_k(ε)| / (1 + |D_k(ε)|)

i   C₂(ε_i)                        D₂(ε_i)                        |C₂|  |D₂|  Õ₂(ε_i)
0   ∅                              w₁, w₃, w₄, w₅, s₂¹, s₂²        0     6     0
1   w₁, s₂²                        w₃, w₄, w₅, s₂¹                 2     4     2/5
2   w₁, s₂², w₃                    w₄, w₅, s₂¹                     3     3     3/4
3   w₁, s₂², w₃, s₂¹               w₄, w₅                          4     2     4/3
5   w₁, s₂², w₃, s₂¹, w₄, w₅       ∅                               6     0     6

i   C₁(ε_i)                        D₁(ε_i)                        |C₁|  |D₁|  Õ₁(ε_i)
4   s₁², w₄                        s₁¹, w₁, w₃, w₅                 2     4     2/5
6   s₁², w₄, s₁¹                   w₁, w₃, w₅                      3     3     3/4
Experiments
Web of tools and resources
This section describes the resources used in our research, namely: Wikisource, Wiktionary, WCorpus and RusVectores.
The developed WCorpus 5 system includes texts extracted from Wikisource and provides the user with a text corpus analysis tool. The system is based on the Laravel framework (PHP programming language) and uses a MySQL database. 6 Wikisource. The texts of Wikipedia have been used as a basis for several contemporary corpora [5], but there is no mention of using texts from Wikisource in text processing. Wikisource is an open online digital library with texts in many languages. Wikisource sites contain 10 million texts 7 in more than 38 languages. 8 Russian Wikisource (the database dump as of February 2017) was used in our research.
Text parsing. The texts of Wikisource were parsed, analysed and stored in the WCorpus database. Let us describe this process in detail. The database dump containing all texts of Russian Wikisource was taken from the "Wikimedia Downloads" site. 9 These Wikisource database files were imported into the local MySQL database titled "Wikisource Database" in Fig. 9, where "WCorpus Parser" is the set of WCorpus PHP scripts which analyse and parse the texts in the following steps.
1. First, the title and the text of an article from the Wikisource database are extracted (560 thousand texts). One text corresponds to one page on the Wikisource site. It may be small (for example, several lines of a poem), medium (a chapter or a short story), or huge (e.g. the page with the novella "The Eternal Husband" written by Fyodor Dostoyevsky is 500 KB in size).
2. Second, the texts are preprocessed, which includes the following steps:
• Texts written in English and texts in Russian orthography before 1918 were excluded (about 12 thousand texts).
• Service information (wiki markup, references, categories and so on) was removed from the text.
• Very short texts were excluded. As a result, 377 thousand texts were extracted.
• Splitting the texts into sentences produced 6 million sentences.
• Sentences were split into words (1.5 million unique words).
3. Third, word forms were lemmatized using the phpMorphy 10 program (0.9 million lemmas).
4. Lastly, lemmas, wordforms, sentences and relations between words and sentences were stored in the WCorpus database (Fig. 9).
In our previous work "Calculated attributes of synonym sets" [6] we also used neural network models of the great project RusVectores 11 , which is a kind of a word2vec tool based on Russian texts [9].
Context similarity algorithms evaluation
In order to evaluate the proposed WSD algorithms, several words were selected from a dictionary, then sentences with these words were extracted from the corpus and tagged by experts.
Nine words
Only polysemous words which have at least two meanings with different sets of synonyms are suitable for our evaluation of WSD algorithms.
The following criteria for the selection of synonyms and sets of synonyms from Russian Wiktionary were used:
1. Only single-word synonyms are extracted from Wiktionary. This is due to the fact that the RusVectores neural network model "ruscorpora_2017_1_600_2" used in our research does not support multiword expressions.
2. If a word has meanings with equal sets of synonyms, then these sets were skipped because it is not possible to discern different meanings of the word using only these synonyms without additional information.
10 https://packagist.org/packages/componavt/phpmorphy
11 http://rusvectores.org/en/
12 http://whinger.krc.karelia.ru/soft/wikokit/index.html
13 https://github.com/componavt/piwidict
14 See information about the subcorpus in the section "Sentences of three Russian writers" on page 158.
A list of polysemous words was extracted from the parsed Russian Wiktionary 12 using PHP API piwidict 13 (Fig. 9).
Thus, 9 polysemous Russian words (presented in the subcorpus 14 ) were selected by experts from this Wiktionary list, namely: "бездна" (bezdna), "бросать" (brosat'), "видный" (vidnyy), "донести" (donesti), "доносить" (donosit'), "занятие" (zanyatiye), "лихой" (likhoy), "отсюда" (otsyuda), "удачно" (udachno). The tenth word, "служить" (sluzhit'), was left out of consideration, because 1259 of the 1308 sentences with this frequent word remain to be tagged by experts in the future (Fig. 10).
Sentences of three Russian writers
The sentences which contain the 9 previously defined words were to be selected from the corpus and tagged by experts. However, the whole Wikisource corpus was too large for this purpose, so a subcorpus of Wikisource texts was used in our research: the texts written by Fyodor Dostoevsky, Leo Tolstoy and Anton Chekhov.
Analysis of the created WCorpus database with texts of three writers shows that the subcorpus contains: 15
• 2635 texts;
• 333 thousand sentences;
• 215 thousand wordforms;
• 76 thousand lemmas.
The texts of this subcorpus contain 1285 sentences with these 9 words, and the 9 words have in total 42 synsets (senses). A graphical user interface (web form) of the WCorpus system was developed (Fig. 10), in which experts selected one of the senses of the target word for each of the 1285 sentences.
This subcorpus database with tagged sentences and linked synsets is available online [7].
Text processing and calculations
These 1285 sentences were extracted from the corpus. Sentences were split into tokens, and wordforms were extracted. All the wordforms were lowercased and lemmatized. Therefore, a sentence is treated as a bag of words. Sentences with only one word were skipped.

The phpMorphy lemmatizer takes a wordform and yields possible lemmas with the corresponding part of speech (POS). Information on the POS of a word is needed to work with the RusVectores prediction neural network model "ruscorpora_2017_1_600_2", because to get a vector it is necessary to ask for a word together with its POS, for example "serve_VERB". Only nouns, verbs, adjectives and adverbs remain in a sentence bag of words; other words were skipped.
The computer program (Python scripts) which works with the WCorpus database and RusVectores was written and presented in the form of the project wcorpus.py at GitHub. 16 The source code in the file synset_selector.py 17 implements three algorithms described in the article, namely:
• the K₀-algorithm, implemented in the function selectSynsetForSentenceByAverageSimilarity();
• the ε-dilatation algorithm - the function selectSynsetForSentenceByAlienDegree();
• the ε-filtration algorithm - the function selectSynsetForSentenceByAverageSimilarityModified().
These three algorithms calculated and selected one of the possible synsets for each of 1285 sentences.
Two of the algorithms (the ε-filtration and ε-dilatation algorithms) have an input parameter ε; therefore, a loop over ε from 0 to 1 with a step of 0.01 was added, which resulted in 100 iterations for each sentence.
Then, answers generated by the algorithms were compared with the synsets selected by experts.
The number of sentences in which the sense was correctly tagged for each of the nine Russian words is presented in Fig. 11.

The legend of this figure lists the target words with two numbers in brackets, where the first is the number of sentences with the word and the second is the number of its senses.

The curves for the words "ЗАНЯТИЕ" ("ZANYATIYE", cyan solid line with star points) and "ОТСЮДА" ("OTSYUDA", green solid line with triangle points) are quite high for some ε, because (1) there are many sentences with these words (352 and 308) in our subcorpus, and (2) these words have few meanings (3 and 2). If a word has more meanings, then the algorithm yields even poorer results. This is visible in the normalised data (Fig. 12), where examples with good results are "ОТСЮДА" (OTSYUDA) and "ЛИХОЙ" (LIKHOY, pink dash-dot line with diamond points) with 2 meanings; the example "БРОСАТЬ" (BROSAT', red bold dotted line) with 9 meanings has the worst result (the lowest dotted curve).
Comparison of three algorithms
Let us compare the three algorithms by summing the results over all nine words. Fig. 13 contains the following curves:

• the K₀-algorithm - long-dash blue line;

• the ε-filtration algorithm - solid red line;

• the ε-dilatation algorithm - dashed yellow line.

The K₀-algorithm does not depend on ε. It showed mediocre results.

The ε-dilatation algorithm yields better results than the ε-filtration algorithm when ε > 0.15, and it showed the best results on the interval [0.15; 0.35]. Namely, more than 700 sentences (out of 1285 human-tagged sentences) were properly tagged with the ε-dilatation algorithm on this interval (Fig. 13).
Comparison of four algorithms as applied to nine words
Let us compare the results of running the four algorithms for each word separately (Fig. 14):

• the K₀-algorithm - long-dash blue line with triangle points;

• the ε-filtration algorithm - solid red line with square points;

• the ε-dilatation algorithm - dashed yellow line with circle points;

• "most frequent meaning" - green dashed line with X marks.

The simple "most frequent meaning" algorithm was added to compare the results. This algorithm does not depend on the variable ε; it selects the meaning (synset) that is the most frequent in our corpus of texts. In Fig. 14 this algorithm corresponds to a green dashed line with X marks.

The results of the "most frequent meaning" algorithm and the K₀-algorithm are similar (Fig. 14).

The ε-dilatation algorithm is the absolute champion in this competition, that is, for each word there exists an ε such that the ε-dilatation algorithm outperforms the other algorithms (Fig. 14).
Let us explain the calculation of the curves in Fig. 14.
For the K₀-algorithm and the "most frequent meaning" algorithm, the meaning (synset) is calculated for each of the nine words on the set of 1285 sentences. Thus, 1285 · 2 calculations were performed.

Again, the ε-filtration and ε-dilatation algorithms depend on the variable ε. But how can the results be shown without the ε axis? If at least one value of ε gives a positive result, then we suppose that the WSD problem was correctly solved for this sentence by the algorithm.

The value on the Y axis for the selected word (for the ε-filtration and ε-dilatation algorithms) is equal to the number of such correctly determined sentences (over the different values of ε) in Fig. 14. Perhaps it would be more correct to fix the ε corresponding to the maximum number of correctly determined sentences; then the result would not be so optimistic.

To show the complexity of comparing and evaluating ε-algorithms (that is, algorithms that depend on ε), let us try to analyse the per-word results shown in Fig. 15. The percentage (proportion) of correctly determined sentences among the 1285 sentences for the 9 words, where the variable ε changes from 0 to 1 in increments of 0.01, is presented in Fig. 15. Thus, 1285 · 100 calculations were performed. These proportions are distributed over a set of possible calculated results from 0% (no sentence is guessed) to 100% (all sentences are guessed) for each of the nine words.

Figure 15 does not show which ε-values produce better or poorer results, although this can be seen in Figures 11-13. But the Figure does show the range and the quality of the results obtained with the help of the algorithm. For example, the word "лихой" (likhoy), with 22 sentences and 100 different ε values, has only 8 different outcomes, seven of which lie in the region above 50%, that is, more than eleven sentences are guessed at any ε. The word "бросать" (brosat') has the largest number of meanings in our data set: it has 9 synonym sets in our dictionary and 11 meanings in Russian Wiktionary. 18 All possible results for this word are distributed in the range of 10-30%. The maximum share of guessed sentences is 30.61%. Note that this value is achieved when ε = 0.39, and this is clearly shown in Figure 12, see the thick dotted line.
All calculations, charts drawn from experimental data and results of the experiments are available online in Google Sheets [8].
Conclusions
The development of the corpus analysis system WCorpus 19 was started. 377 thousand texts were extracted from Russian Wikisource, processed and uploaded to this corpus.
Context-predictive models of the RusVectores project are used to calculate the distance between lemmas. Scripts in Python were developed to process RusVectores data, see the wcorpus.py project on the GitHub website.
The WSD algorithm based on a new method of vector-word contexts proximity calculation is proposed and implemented. Experiments have shown that in a number of cases the new algorithm shows better results.
The future work is matching Russian lexical resources (Wiktionary, WCorpus) to Wikidata objects [11].
Fig. 1. The set A △ B is the shaded part of the circles.

Recall the notion of the Hausdorff metric. Consider a metric space (X, ρ), where X is a set and ρ is a metric on X. Define the ε-dilatation A+ε of a set A ⊂ X as A+ε = A ∪ { B_ε(x) : x ∈ A }, where B_ε(x) is a closed ball centered at x with radius ε. The Hausdorff distance d(A, B) between compact nonempty sets A and B is d(A, B) = min{ ε > 0 : (A ⊂ B+ε) ∧ (B ⊂ A+ε) }, where A+ε and B+ε are the ε-dilatations of A and B. Consider the following sets (Fig. 2): C_A(ε) = A ∩ (B+ε), C_B(ε) = B ∩ (A+ε).

Fig. 2. Two sets A+ε and B+ε are the ε-dilatations of the segments A and B; the two new proposed set-valued maps C_A(ε) and C_B(ε) were inspired by the Hausdorff distance.

Fig. 3. An example of similar average vectors and totally different sets of vectors: {v₁, v₂, v₃} and {u₁}.

Algorithm 1: Average algorithm with synonyms ε-filtration. Data: v* - vector of the target word w* with K senses (synsets); S - sentence with the target word w*, w* ∈ S; {syn_k} - synsets of the target word, syn_k ∋ w*, k = 1, …, K. Result: k* ∈ {1, …, K}, the number of the sense of the word w* in the sentence S. Given ε > 0, construct for each synset the filtered set of synonyms syn_k(ε) = { s ∈ syn_k : s ≠ w*, sim(s, w*) > ε } and denote its power by m_k(ε) = |syn_k(ε)|. For m_k(ε) > 0 calculate the average vector of the synset candidates (the zero vector if m_k(ε) = 0) and its similarity sim_k(ε) with the average vector of the sentence. Take k* as the number of the largest sim_k(ε); if k* is not unique, take another ε > 0 and repeat the procedure.

Fig. 4. Digest of the Wiktionary entry "служить".

Fig. 5. Sample source data: the word-vertices of the sentence and the vertices of the two synsets.

Algorithm 3: Algorithm based on ε-dilatation. Data: v* - vector of the target word w* with K senses (synsets); S - sentence, w* ∈ S; {syn_k} - synsets of w*, k = 1, …, K. Result: k* ∈ {1, …, K}, the number of the sense of the word w* in the sentence S. For each synset construct the sets of "near" and "distant" words C_k(ε) and D_k(ε), calculate the ratio Õ_k(ε) = |C_k(ε)| / (1 + |D_k(ε)|), and take k* as the number of the largest Õ_k(ε); repeat with another ε > 0 while k* is not unique.

Fig. 7. An example of a series of sets C_k(ε_i) (words of the k-th synset which are near to the sentence S) in the algorithm based on ε-dilatation. The growing dilatation of the vertices of the second synset {s₂¹, s₂²} captures the vertices of the sentence S = {w₁, w₃, w₄, w₅} faster than the dilatation of the vertices of the first synset; in other symbols, (syn₂ + ε) ∩ S ⊃ (syn₁ + ε) ∩ S. That is, according to this algorithm, the second sense of the word w₂, represented by the synset syn₂, will be selected for the sentence S.

Fig. 8. Left-continuous step functions Õ₁(ε) and Õ₂(ε).

Fig. 9. The architecture of the WCorpus system and the use of other resources.

Fig. 10. The Russian verb "служить" (sluzhit') has seven meanings and seven synsets in the developed WCorpus system; 49 sentences are already linked to the relevant senses of this verb, and 1259 sentences remain to be tagged by experts.

Fig. 11. Number of sentences with the correctly tagged sense for nine Russian words.

Fig. 12. Normalised data: the fraction of sentences with the correctly tagged sense for nine Russian words.

Fig. 13. Comparison of the K₀-algorithm, the ε-filtration algorithm and the ε-dilatation algorithm.

Fig. 14. Comparison of the K₀-algorithm, the ε-filtration algorithm, the ε-dilatation algorithm and the most frequent meaning baseline.

Fig. 15. Proportions of correctly guessed sentences distributed over the set of possible calculated results.
Consider a sentence S = (w₁ … w* … w_n) containing a target word w* and a vector representation (v₁ … v* … v_n) of S, where w_i is a word and v_i is a vector representation of w_i. Denote the vector of w* by v*. Suppose the target word w* has K senses. Denote by syn_k a synset corresponding to the k-th sense, k = 1, …, K, syn_k = {s_k1, …, s_kn_k}, where s_kj are synonyms. Let V_k = {v_k1, …, v_kn_k} be the set of vector representations of the synonyms s_kj, j = 1, …, n_k.
1 See the function selectSynsetForSentenceByAverageSimilarity in the file https://github.com/componavt/wcorpus.py/blob/master/src/test_synset_for_sentence/lib_sfors/synset_selector.py
2 See section "Web of tools and resources" on page 156.
3 See the function selectSynsetForSentenceByAverageSimilarityModified in the file https://github.com/componavt/wcorpus.py/blob/master/src/test_synset_for_sentence/lib_sfors/synset_selector.py
4 See the function selectSynsetForSentenceByAlienDegree in the file https://github.com/componavt/wcorpus.py/blob/master/src/test_synset_for_sentence/lib_sfors/synset_selector.py
5 https://github.com/componavt/wcorpus
6 See the WCorpus database scheme: https://github.com/componavt/wcorpus/blob/master/doc/workbench/db_scheme.png
7 https://stats.wikimedia.org/wikisource/EN/TablesWikipediaZZ.htm
8 https://stats.wikimedia.org/wikisource/EN/Sitemap.htm
9 https://dumps.wikimedia.org/backup-index.html
15 See SQL-queries applied to the subcorpus: https://github.com/componavt/wcorpus/wiki/SQL
16 https://github.com/componavt/wcorpus.py
17 https://github.com/componavt/wcorpus.py/blob/master/src/test_synset_for_sentence/lib_sfors/synset_selector.py
The study was supported by the Russian Foundation for Basic Research, grant No. 18-012-00117.
[1] Arora S., Liang Y., Ma T. A simple but tough-to-beat baseline for sentence embeddings. In Proceedings of the ICLR, 2017. P. 1-16.
[2] Chen X., Liu Z., Sun M. A unified model for word sense representation and disambiguation. In Proceedings of the EMNLP, 2014. P. 1025-1035. doi: 10.3115/v1/d14-1110.
[3] Choi S. S., Cha S. H., Tappert C. C. A survey of binary similarity and distance measures. Journal of Systemics, Cybernetics and Informatics. 2010.
[4] Haussler D. Convolution kernels on discrete structures. Technical report, Department of Computer Science, University of California at Santa Cruz. 1999.
[5] Jurczyk T., Deshmane A., Choi J. Analysis of Wikipedia-based corpora for question answering. arXiv preprint arXiv:1801.02073. 2018.
[6] Krizhanovsky A., Kirillov A. Calculated attributes of synonym sets. arXiv preprint arXiv:1803.01580. 2018.
[7] Krizhanovsky A., Kirillov A., Krizhanovskaya N. WCorpus mysql database with texts of 3 writers. figshare. 2018. URL: https://doi.org/10.6084/m9.figshare.5938150.v1
[8] Krizhanovsky A., Kirillov A., Krizhanovskaya N. Assign senses to sentences of 3 writers. Google Sheets. 2018. URL: http://bit.ly/2I14QIT
[9] Kutuzov A., Kuzmenko E. Texts in, meaning out: neural language models in semantic similarity task for Russian. arXiv preprint arXiv:1504.08183. 2015.
[10] Lesot M.-J., Rifqi M., Benhadda H. Similarity measures for binary and numerical data: a survey. International Journal of Knowledge Engineering and Soft Data Paradigms. 2009. Vol. 1, no. 1. P. 63-84. doi: 10.1504/ijkesdp.2009.021985.
[11] Nielsen F. Linking ImageNet WordNet Synsets with Wikidata. In WWW '18 Companion: The 2018 Web Conference Companion. 2018. URL: https://arxiv.org/pdf/1803.04349.pdf
| [
"https://github.com/componavt/piwidict",
"https://github.com/componavt/",
"https://github.com/componavt/wcorpus.py/",
"https://github.com/componavt/wcorpus",
"https://github.com/componavt/wcorpus/blob/master/doc/workbench/db_",
"https://github.com/componavt/wcorpus/wiki/SQL",
"https://github.com/componavt/wcorpus.py",
"https://github.com/componavt/wcorpus.py/blob/master/src/test_synset_for_sentence/lib_sfors/"
] |
[
"A Gamma-Poisson Mixture Topic Model for Short Text",
"A Gamma-Poisson Mixture Topic Model for Short Text"
] | [
"Jocelyn Mazarura \nDepartment of Statistics\nUniversity of Pretoria\nPretoriaSouth Africa\n",
"Alta De Waal ",
"Pieter De Villiers \nDepartment of Electrical\nElectronic and Computer Engineering\nUniversity of Pretoria\nPretoriaSouth Africa\n",
"\nCenter for Artificial Intelligence Research (CAIR)\nPretoriaSouth Africa\n"
] | [
"Department of Statistics\nUniversity of Pretoria\nPretoriaSouth Africa",
"Department of Electrical\nElectronic and Computer Engineering\nUniversity of Pretoria\nPretoriaSouth Africa",
"Center for Artificial Intelligence Research (CAIR)\nPretoriaSouth Africa"
] | [] | Most topic models are constructed under the assumption that documents follow a multinomial distribution. The Poisson distribution is an alternative distribution to describe the probability of count data. For topic modelling, the Poisson distribution describes the number of occurrences of a word in documents of fixed length. The Poisson distribution has been successfully applied in text classification, but its application to topic modelling is not well documented, specifically in the context of a generative probabilistic model. Furthermore, the few Poisson topic models in literature are admixture models, making the assumption that a document is generated from a mixture of topics. In this study, we focus on short text. Many studies have shown that the simpler assumption of a mixture model fits short text better. With mixture models, as opposed to admixture models, the generative assumption is that a document is generated from a single topic. One topic model, which makes this one-topic-per-document assumption, is the Dirichlet-multinomial mixture model. The main contributions of this work are a new Gamma-Poisson mixture model, as well as a collapsed Gibbs sampler for the model. The benefit of the collapsed Gibbs sampler derivation is that the model is able to automatically select the number of topics contained in the corpus. The results show that the Gamma-Poisson mixture model performs better than the Dirichlet-multinomial mixture model at selecting the number of topics in labelled corpora. Furthermore, the Gamma-Poisson mixture produces better topic coherence scores than the Dirichlet-multinomial mixture model, thus making it a viable option for the challenging task of topic modelling of short text. | 10.1155/2020/4728095 | [
"https://arxiv.org/pdf/2004.11464v1.pdf"
] | 216,144,729 | 2004.11464 | 15fac6e1c2c6d5da27c8703840aebf67a90b18e5 |
A Gamma-Poisson Mixture Topic Model for Short Text
Jocelyn Mazarura
Department of Statistics
University of Pretoria
PretoriaSouth Africa
Alta De Waal
Pieter De Villiers
Department of Electrical
Electronic and Computer Engineering
University of Pretoria
PretoriaSouth Africa
Center for Artificial Intelligence Research (CAIR)
PretoriaSouth Africa
A Gamma-Poisson Mixture Topic Model for Short Text
Hindawi Template version: Apr19 1 Correspondence should be addressed to Jocelyn Mazarura; Jocelyn.Mazarura@up.ac.za
Most topic models are constructed under the assumption that documents follow a multinomial distribution. The Poisson distribution is an alternative distribution to describe the probability of count data. For topic modelling, the Poisson distribution describes the number of occurrences of a word in documents of fixed length. The Poisson distribution has been successfully applied in text classification, but its application to topic modelling is not well documented, specifically in the context of a generative probabilistic model. Furthermore, the few Poisson topic models in literature are admixture models, making the assumption that a document is generated from a mixture of topics. In this study, we focus on short text. Many studies have shown that the simpler assumption of a mixture model fits short text better. With mixture models, as opposed to admixture models, the generative assumption is that a document is generated from a single topic. One topic model, which makes this one-topic-per-document assumption, is the Dirichlet-multinomial mixture model. The main contributions of this work are a new Gamma-Poisson mixture model, as well as a collapsed Gibbs sampler for the model. The benefit of the collapsed Gibbs sampler derivation is that the model is able to automatically select the number of topics contained in the corpus. The results show that the Gamma-Poisson mixture model performs better than the Dirichlet-multinomial mixture model at selecting the number of topics in labelled corpora. Furthermore, the Gamma-Poisson mixture produces better topic coherence scores than the Dirichlet-multinomial mixture model, thus making it a viable option for the challenging task of topic modelling of short text.
Introduction
Topic modelling is a text mining technique used to uncover latent topics in large collections of documents. The Latent Dirichlet allocation (LDA) [1] model is the state-of-the art topic model. It has a proven history of success on long documents, such as news articles and e-books. Owing to the increasing popularity of micro-blogging websites, social media platforms and online shopping (which involves product reviews), text that is significantly shorter has become increasingly relevant. Such sources of text potentially hold valuable information that can be useful in many applications, such as event tracking [2], interest profiling [3] and product recommendation [4].
Traditional topic models infer topics based on word co-occurrence relationships between words [5]. In order to extract meaningful topics, a topic model must successfully infer these relationships from a corpus. Per definition, short text contains few words and consequently tends to contain less co-occurrence information than long text. For instance, some platforms, such as Twitter, impose a character limit on each post, which severely constrains the amount of information one can share in a single post. This has created a need to reconsider topic model assumptions in order to overcome this challenge. One topic model which has shown better performance on short text than LDA is the Dirichlet-multinomial mixture model (DMM) [6], [7]. LDA is sometimes referred to as an admixture model [8] as it assumes each document contains several topics. In contrast, DMM is inherently a mixture model, thus it assumes that each document contains only a single topic, which is a seemingly more sensible assumption for short text. This simple assumption is likely the reason for the better performance which has been observed on some short text datasets [9]- [11].
The conjugate Dirichlet prior allows for convenient hierarchical Bayesian modelling of count data using a multinomial distribution, and, over the years, most topic models have been built under the assumption that documents are sampled from a multinomial distribution. Another natural choice of distribution for count data is the Poisson distribution. However, it has received significantly less attention, as some researchers have found that it does not fit natural text [12]. Nevertheless, other researchers have found that the family of Poisson distributions produces good results on text (and other count data) categorisation, which is the motivation for our investigation into the Poisson distribution as a viable option for topic modelling of short text [13], [14].
The contributions of this work are as follows:

1. We propose a new topic model for short text, the Gamma-Poisson mixture (GPM) topic model, that has not been applied in the literature before. This model is based on the Poisson distribution and we show that it is able to produce topics with improved coherence scores when compared to GSDMM (the collapsed Gibbs sampler version of DMM) [6].
2. We derive a collapsed Gibbs sampler for the estimation of the model parameters. This estimation procedure enables the model to estimate the number of topics automatically.
3. We perform extensive experiments in Python on three short text corpora and report on the characteristics of the new model.
4. We have also made available the development version of the GPM model in a Python package at https://github.com/jrmazarura/GPM.
Related work
Conventional topic models take advantage of word co-occurrence information in documents to infer the latent topics. However, due to its length, this kind of information is limited in short text, which poses a challenge when applying traditional topic models. It is for this reason that short texts are often described as being sparse. In order to overcome the challenges associated with topic modelling of short text, some researchers have proposed pooling or aggregating short texts to create longer documents prior to applying traditional topic models [3], [15]- [18].
Others have successfully proposed modifications to conventional topic models, such as LDA or DMM. These modifications include, incorporating auxiliary information from external corpora [19], [20] and inducing sparsity into the models [21]- [23]. Lastly, another popular approach, is the derivation of completely new models [5], [24]. In light of the success of DMM on short text, the new model that we propose is a modification of DMM. 1 In the context of topic modelling, the multinomial distribution is most commonly used to model the words in a document. In contrast, significantly fewer topic models are constructed based on the Poisson distribution. Yet, in other text mining fields, such as in text classification [13], [14] and information retrieval [27], some researchers were able to obtain improved results with the Poisson distribution in comparison to the multinomial distribution in their applications. This serves as further motivation for considering the Poisson distribution as a basis for our topic model.
The Gamma-Poisson (GaP) model [28] and the Poisson decomposition model [29] are both examples of topic models that assume word counts follow a Poisson distribution. Other Poisson-based topic models are presented in [30], [31]. None of these models was specifically designed for short text and the authors only test their models on long documents. Our model is distinctly different from these in that it assumes each document contains a single topic, whereas these models assume each document contains multiple topics. The Poisson-based Dirichlet multinomial mixture model (PDMM) [11] is another DMM-based topic model formed by incorporating a Poisson distribution in the generative process so as to allow each document to contain either 1, 2 or 3 topics. In order to allow PDMM to also take advantage of semantic relations between words, Li et al. [11] extended PDMM by incorporating word embeddings through a generalized Poĺya urn. Despite PDMM being termed "Poisson-based", it still models word counts with a multinomial distribution.
Lastly, unlike the multinomial distribution, the Poisson distribution does not assume that occurrences of the same word are independent of each other [32]. Furthermore, as the Poisson distribution only has a rate parameter, the need to estimate the total number of trials, which is a non-trivial task, is evaded [14]. In light of these properties, we believe that a Poisson-based topic model could yield favourable results. In the next section, we investigate the characteristics of word frequencies in our datasets to further motivate the case for the Poisson distribution.
Empirical analysis of word occurrences in short text
In contrast to the amount of literature available on multinomial-based topic models, there is significantly less research on topic models that are based on the Poisson distribution. This is likely due to the work of Gale and Church [12], which demonstrated that the Poisson distribution is not a good fit for observed word frequencies in real-world texts; they proposed a mixture of Poisson distributions as a more suitable alternative. In order to motivate that the Poisson distribution fails to model word frequency, Gale and Church [12] selected a word from their corpus, "said", and plotted the graph shown in Figure 1. Figure 1 shows the number of documents in which the word "said" was used 0 times, 1 time, 2 times, …, or 32 times. The curve shows the predicted number of documents from a Poisson distribution calculated using the maximum likelihood estimate of the parameter. It is clear that the Poisson does not provide a good fit, thus they proposed a mixture of Poisson distributions or a negative binomial distribution as a better alternative. However, it must be noted that the documents under consideration were long and different results may be observed when the same graph is plotted for a word in a short text corpus. To demonstrate this, we selected the word "jet" from topic 1 in the Pascal Flickr corpus (discussed in the Dataset section) and obtained the results in Figure 2. The length of the documents considered in Figure 1 was approximately 2 000 words per document, whereas the average length of a document in the Pascal Flickr corpus was merely 5 words, with a minimum and maximum length of 1 and 19 words, respectively. Considering this, it is highly unlikely that large frequencies would be observed. From Figure 2, the maximum frequency of the word "jet" is 1 and, as we no longer have the heavy tail, the predicted values from the Poisson distribution (solid line) are close to the observed values. Similar results were observed with many of the other words in the corpus. Thus, we did not deem it necessary to model each word count with a mixture of Poissons as proposed by Church and Gale [12]. Another possible discrete distribution that may be considered is the negative binomial distribution. It is able to relax the assumption made by the Poisson distribution that the mean and variance of the data are equal. The negative binomial is the preferred choice when the observed data displays overdispersion, that is, when the variance exceeds the mean. Further investigation of the means and variances of the words in topic 1 of the Pascal Flickr dataset, as well as several other short text corpora, did not yield significant evidence to warrant the use of the negative binomial distribution.
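As a rough illustration of this kind of check, the sketch below compares the observed per-document frequencies of a chosen word with the counts predicted by a Poisson distribution fitted by maximum likelihood. The toy documents and the word "jet" are placeholders, not the actual Pascal Flickr data behind Figure 2.

from collections import Counter
import numpy as np
from scipy.stats import poisson

def poisson_fit_check(docs, word):
    # docs: list of tokenised documents (lists of lowercase tokens).
    # Returns {frequency: (observed number of documents, Poisson-predicted number)}.
    counts = np.array([doc.count(word) for doc in docs])
    lam = counts.mean()  # maximum likelihood estimate of the Poisson rate
    observed = Counter(counts.tolist())
    return {k: (observed.get(k, 0), len(docs) * poisson.pmf(k, lam))
            for k in range(int(counts.max()) + 1)}

# Toy usage with placeholder documents.
docs = [["a", "jet", "flying"], ["two", "dogs", "playing"], ["a", "jet", "takes", "off"], ["green", "field"]]
for freq, (obs, pred) in poisson_fit_check(docs, "jet").items():
    print(f"frequency {freq}: observed {obs}, predicted {pred:.2f}")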
Gale and Church [12] also identified a phenomenon referred to as burstiness. A word is said to be bursty or contagious if, after its first mention, it is likely to be observed again in the same document. In order to address word burstiness in the context of document classification, some authors have proposed the use of the Dirichlet compound multinomial [33] whereas others suggest using contagious distributions, such as the negative binomial distribution, to model word frequencies [14]. However, Figure 2 does not appear to display evidence of burstiness, neither did we observe evidence of significant burstiness in the short text corpora we studied.
In conclusion, we can see that a simple Poisson is a viable option to model word frequency in short text. In addition, it is also an attractive choice as it has a conjugate prior, whereas the Poisson mixture and negative binomial suggested by Church and Gale [12] do not. It is for these reasons that we consider topic modelling using the Poisson distribution. We now introduce a new topic model, the Gamma-Poisson mixture (GPM) topic model. Table 1 shows a summary of the notation that will be used throughout this paper; in particular, if x is a quantity that describes a characteristic of the corpus, then x^(d) denotes the same characteristic of the corpus with the d-th document excluded.
The Gamma-Poisson Mixture Topic Model for Short Text
The Gamma-Poisson mixture topic model is a hierarchical Bayesian model for topic modelling of short text. For simplicity, it assumes that the frequencies of words in a document are independent of each other and that the corpus is a mixture of documents, which belong to different topics. Mixture models are amongst the simplest of latent variable models.
Considering the success of GSDMM 2 on short text [9]-[11], our GPM topic model makes similar assumptions: (1) documents are generated from a mixture model, and (2) each document belongs to exactly one topic (cluster). This embodies the following probabilistic generative process for a document d:

1. A topic z_d = k is randomly selected according to the mixing weights p(z_d = k).

2. A document d is then randomly generated from p(d | z_d = k).

Consequently, the likelihood of a document is given by

p(d) = Σ_{k=1..K} p(d | z_d = k) p(z_d = k),

where K denotes the total number of topics in the corpus. Like GSDMM, GPM makes the Naïve Bayes assumption: given the topic, the frequencies of the words in the document are independent of each other. Thus, under GPM the conditional probability of a document given a topic is given by

p(d | z_d = k) = Π_{v=1..V} p(n_dv | λ_kv),

where n_dv denotes the frequency of word v in document d, and λ_kv denotes the expected frequency of word v in topic k. The key difference between GPM and GSDMM arises at this point. The GPM assumes the frequencies, n_dv, are modelled according to independent Poisson distributions, as opposed to modelling the joint distribution of the counts with a multinomial distribution as in the DMM. In addition, owing to its conjugacy, a gamma prior with shape parameter α_v and scale parameter β_v is imposed on λ_kv. Similarly, owing to the Dirichlet distribution's conjugacy to the multinomial, GSDMM assumes a Dirichlet prior.
Under the GPM, the mixing weights π_k represent the proportions of the topics in the corpus. The topic assignment of each document is modelled by a multinomial distribution. Thus, p(z_d = k) = π_k, where 0 ≤ π_k ≤ 1 and Σ_{k=1..K} π_k = 1. Furthermore, a Dirichlet prior with parameter γ is imposed on π = [π_1, π_2, …, π_K]. As GPM is inherently a mixture model, this part of the model is the same as in GSDMM.

The generative process of GPM can be summarised in a graphical model as in Figure 3. Shaded circles represent observed variables, and unshaded circles represent latent variables, such as the topic distribution π. Rectangles represent repeated structures, whereas arrows indicate conditioning, such as the conditioning of documents on both the topic distribution and the topic assignment. Figure 3 describes the statistical conditioning of variables on their parent variables. The only random variable that is observed is the corpus, whereas all others are latent variables. In the following section, we will discuss the estimation procedure for the GPM.
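As a concrete illustration of this generative process, the following sketch simulates a small corpus from the GPM. The corpus size, vocabulary size and hyperparameter values are toy assumptions chosen so that the simulated documents are non-empty; they are not the settings used in our experiments.

import numpy as np

rng = np.random.default_rng(0)

K, V, D = 3, 50, 20                   # number of topics, vocabulary size, number of documents (toy values)
gamma, alpha, beta = 1.0, 2.0, 1.0    # toy hyperparameters for the Dirichlet and gamma priors

pi = rng.dirichlet(np.full(K, gamma))                  # mixing weights, one per topic
lam = rng.gamma(shape=alpha, scale=beta, size=(K, V))  # lambda_kv ~ Gamma(alpha, beta)

topics, docs = [], []
for _ in range(D):
    k = rng.choice(K, p=pi)   # each document gets exactly one topic
    n = rng.poisson(lam[k])   # word counts n_dv ~ Poisson(lambda_kv), one count per vocabulary word
    topics.append(k)
    docs.append(n)

X = np.array(docs)            # D x V matrix of word counts
print(X.shape, topics[:5])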
The Collapsed Gibbs sampler
A typical Gibbs sampler [34] requires that each parameter be sampled in turn conditioned on all the other parameters. As the topics are only dependent on the topic assignment of each document, it is only necessary to sample the topic assignments. The conjugacy of the chosen priors introduces analytic tractability that makes it is possible to easily integrate out the other parameters that would otherwise need to be sampled. Thus, owing to this simplification, this sampling scheme is referred to as a collapsed Gibbs sampler.
Other estimation techniques, such as the Expectation-Maximisation (EM) algorithm, could have also been used. However, it is the collapsed Gibbs sampler that gives our model the favourable property of being able to automatically select the number of latent topics. In practice, one popular way of selecting the number of topics is achieved via the use of nonparametric topic models [35]. Thus, although parametric in nature, our model displays this "non-parametric" behaviour to some extent.
In order to estimate the model parameters, the collapsed Gibbs sampler assigns each document to a single topic. This is achieved by sampling from the conditional probability of document d belonging to a class, p(z_d = k | z^(d), C, α, β, γ), where z denotes the vector of topic assignments and C denotes the corpus. From the rules of conditional probability, it follows that

p(z_d = k | z^(d), C, α, β, γ) = p(C, z | α, β, γ) / p(C, z^(d) | α, β, γ) ∝ p(C, z | α, β, γ) / p(C^(d), z^(d) | α, β, γ),   (1)

where the superscript (d) is used to denote that document d is excluded. α = [α_1, α_2, …, α_V] and β = [β_1, β_2, …, β_V] are the hyperparameters of the gamma prior, whereas γ = [γ_1, γ_2, …, γ_K] denotes the hyperparameter of the Dirichlet prior.

In order to sample a topic assignment for each document according to Equation 1, only the joint distribution p(C, z | α, β, γ) is required. It is shown in the Appendix that it is given by

p(C, z | α, β, γ) = ( Δ(m + γ) / Δ(γ) ) Π_{k=1..K} Π_{v=1..V} [ Γ(n_kv + α_v) / ( Γ(α_v) Π_{d: z_d = k} n_dv! ) ] β_v^{n_kv} / (m_k β_v + 1)^{n_kv + α_v},   (2)

where m = [m_1, …, m_K], m_k is the number of documents assigned to topic k, n_kv = Σ_{d: z_d = k} n_dv, and Δ(·) denotes the multivariate beta function.
By substituting Equation 2 into Equation 1, under the assumption that α_v = α, β_v = β and γ_k = γ for all v and k, it follows that Equation 1 can be expressed as

p(z_d = k | z^(d), C) ∝ ( (m_k^(d) + γ) / (D − 1 + Kγ) ) × ( 1 / Π_{v=1..V} n_dv! ) × (m_k^(d) β + 1)^{n_k^(d) + Vα} / (m_k^(d) β + β + 1)^{n_k^(d) + n_d + Vα} × Π_{v=1..V} Π_{j=1..n_dv} (n_kv^(d) + α + j − 1),   (3)

where D is the total number of documents, n_k^(d) = Σ_v n_kv^(d) is the total number of words assigned to topic k and n_d = Σ_v n_dv is the length of document d. Thus, for each document, a topic is sampled repeatedly until convergence is achieved. The topics are then found by the following estimates:

λ̂_kv = (n_kv + α) / (m_k + 1/β),

where the top words that describe topic k are the words with the highest estimated expected frequencies, λ̂_kv. Full details of the derivation are given in the Appendix.
The collapsed Gibbs sampler for GPM is summarised in Algorithm 1. Topic models are very powerful tools as they possess characteristics of both clustering and dimensionality reduction techniques: (1) a corpus is represented in a lower-dimensional form by a set of topics and, (2) similar to clustering, each document is associated with a single topic or with multiple topics, depending on the model. Our GPM topic model possesses both these qualities. The first property is captured by the λ parameters. The second is satisfied in Equation 3. The advantage of topic models over traditional clustering algorithms is that "labels" are also produced, in the form of topics. In order for topic models to be useful, they are designed to not only provide data compression, but to also produce interpretable topics.
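A minimal sketch of the sampling step implied by Equation 3 is given below, working in log space for numerical stability. It assumes the corpus is stored as a D x V matrix of word counts; the variable names and overall structure are illustrative and do not reproduce the exact implementation in the released GPM package.

import numpy as np
from scipy.special import gammaln

def gibbs_sweep(X, z, K, alpha, beta, gamma, rng):
    # One collapsed Gibbs sweep for the GPM.
    # X: (D, V) matrix of word counts, z: numpy integer array with the current topic of each document.
    D, V = X.shape
    m = np.bincount(z, minlength=K).astype(float)   # m_k: documents per topic
    n = np.zeros((K, V))                            # n_kv: word counts per topic
    for d in range(D):
        n[z[d]] += X[d]
    for d in range(D):
        m[z[d]] -= 1.0                              # remove document d from its topic
        n[z[d]] -= X[d]
        nd = X[d].sum()
        # log of Equation 3, dropping factors that are constant in k
        log_p = np.log(m + gamma)
        log_p += (n.sum(axis=1) + V * alpha) * np.log(m * beta + 1.0)
        log_p -= (n.sum(axis=1) + nd + V * alpha) * np.log(m * beta + beta + 1.0)
        log_p += (gammaln(n + X[d] + alpha) - gammaln(n + alpha)).sum(axis=1)
        p = np.exp(log_p - log_p.max())
        k = rng.choice(K, p=p / p.sum())
        z[d] = k                                    # reassign document d
        m[k] += 1.0
        n[k] += X[d]
    return z

# After convergence the topics are described by lambda_hat[k, v] = (n_kv + alpha) / (m_k + 1 / beta),
# and only topics with at least one assigned document are reported, which is how the number of
# topics is selected automatically.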
In order to demonstrate the utility of our new model, we perform extensive experimentation on different datasets. Details are provided in the next section.
Experiments Datasets
In order to test our model, we ran experiments on different datasets and compared the performance of GPM against that of GSDMM. The datasets have been summarised in Table 2.
All statistics were collected from the datasets after basic pre-processing (removal of stop words, punctuation, special symbols and numbers). Note that we will often refer to the original number of classes/categories for each dataset as the true number of topics/clusters, or the true K. All datasets can be obtained from https://github.com/qiang2100/STTM [25].
Experimental design
All experiments were executed in Python 3.6 in Windows 10 on a computer with a 3.50 GHz quad core processor and 16 GB RAM. We used our own implementations of each model and have made our implementation of the GPM topic model publicly available as a Python package at https://github.com/jrmazarura/GPM. For the GSDMM, the parameter values were set to α = β = 0.1 and the algorithm was run for 15 iterations, as in the original paper. For the GPM, the parameter γ plays the same role as the α parameter in GSDMM, thus it was also set to 0.1. For the Poisson distribution we opted for a gamma prior with shape and scale parameters, α and β, both set to 0.001. This choice is motivated within the upcoming sections.
Document Length Normalisation
Since the Poisson distribution gives the probability of observing a given number of events in a fixed interval, it is necessary to normalise the lengths of the documents. This is achieved by replacing the word frequencies, n_dv, with

n_dv^new = L · n_dv / Σ_{v'=1..V} n_dv',

where L denotes a predefined length [13]. In all our experiments, we set L = 20 and rounded each n_dv^new to the nearest integer. L was chosen to be 20 as it provided a good balance between performance and runtime for our datasets.
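A small sketch of this normalisation, under the assumption that the corpus is stored as a document-term count matrix:

import numpy as np

def normalise_lengths(X, L=20):
    # Rescale each document's word counts so that they sum to roughly L, then round to integers.
    totals = X.sum(axis=1, keepdims=True).astype(float)
    totals[totals == 0] = 1.0                     # avoid division by zero for empty documents
    return np.rint(X * L / totals).astype(int)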
Model Evaluation
In order to evaluate the performance of our model we used the average of the UMASS topic coherence [38] score for each topic. The coherence score for each topic, t, is given by

coherence(t) = Σ_{(v_i, v_j)} log( (D(v_i, v_j) + ϵ) / D(v_j) ),

where the sum runs over pairs of top words (v_i, v_j) of topic t, D(v_i, v_j) denotes the number of documents in which words v_i and v_j co-occur, and D(v_j) denotes the number of documents in which word v_j occurs. ϵ is a smoothing parameter to prevent taking the logarithm of zero and it is set to 1 as proposed in the original paper. As with most topic models, the GPM is an unsupervised technique. Model evaluation is generally not a trivial task in the context of unsupervised learning as datasets lack labels upon which evaluations can be based. The UMASS coherence score is a well-known measure of the degree of interpretability of a topic and it has been shown to align well with human evaluations of coherence [38]. Naturally, topics that are coherent are most desirable; therefore, a higher average coherence score is preferable. Similar to GSDMM, our model has the special characteristic of being able to automatically select the number of topics, thus the coherence score is only calculated on the topics found by the model.
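The following sketch computes this score from the top words of each topic, with document frequencies taken from the same corpus. Representing each document as a set of its words, and the number of top words per topic, are assumptions made for illustration.

import numpy as np
from itertools import combinations

def umass_coherence(top_words, docs, eps=1.0):
    # top_words: list of top words of one topic; docs: list of sets of words per document.
    score = 0.0
    for w_i, w_j in combinations(top_words, 2):
        d_j = sum(1 for doc in docs if w_j in doc)
        d_ij = sum(1 for doc in docs if w_i in doc and w_j in doc)
        if d_j > 0:
            score += np.log((d_ij + eps) / d_j)
    return score

def average_coherence(topics_top_words, docs):
    # Average the per-topic scores over the topics found by the model.
    return float(np.mean([umass_coherence(t, docs) for t in topics_top_words]))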
Results and Discussion
Influence of the starting number of topics
Topic modelling is typically an unsupervised technique. Similar to k-means clustering, the number of topics (clusters), K, is a challenge to select as the value is not usually known in advance. The GPM is able to infer the number of topics automatically, provided that the starting value of K is large enough. This is due to the dependence of the topic assignment probability, Equation 3, on m_k, the number of documents in topic k. This implies that a document is more likely to be assigned to a topic which has documents assigned to it than to a topic that does not have documents assigned to it.

As will be shown in the next section, the collapsed Gibbs sampler is quick to converge, thus the Gibbs sampler was run for 15 iterations. As our model also provides stable and relatively consistent results (as will be shown in the next section), experiments were repeated 3 times assuming K = 5, 10, 20, 30, 40, 50, 100, 200, 300, …, 800. We set α_v = β_v = 0.001 for all v and γ_k = 0.1 for all k. Table 3 shows the average number of topics found by the model for some of the different starting values of K, whereas Table 4 shows the corresponding average coherence scores. In some cases the number of topics found differed from the number of classes identified by the human annotators. One possible reason for this difference could be that the human labelling may have been too rigid and documents were classified into too few topics, yet there may have been subtopics present. Consequently, it is possible that such a discrepancy could also arise if different human reviewers were tasked with labelling each document independently. In the context of topic modelling, this difference is not usually a problem, especially if the topics are interpretable, as the model may have simply identified subtopics present in the corpus. Since the model does not differentiate between "main" topics and subtopics, they would all be included together in the final topic count. Nonetheless, it is still striking that in both cases the model was able to automatically discard the extra 80-90% of topics that were unnecessary. This greatly alleviates the challenge of selecting the appropriate value of K.

In topic modelling, one of the most important aspects is the interpretability of the uncovered topics. Even if the final number of clusters found is not necessarily the same as what human annotators would find, it is important that the words in the topics are coherent. Figures 4(b), 5(b) and 6(b) show that the coherence improves as the initial K increases. In fact, a point is reached where there is almost no further improvement in average coherence when the initial number of topics is increased. In most cases, there appears to be an insignificant improvement to the coherence score when K is selected to be greater than 200.
Influence of the number of iterations
One of the challenges faced when using sampling methods to estimate parameters is determining the appropriate number of sampling iterations to perform. In order to investigate the performance of the models with respect to the number of iterations, we recorded the average coherence and the number of clusters at each of 30 iterations. This was repeated three times for each dataset. From the previous results, we found that the number of clusters was close to the human-annotated number and the coherence scores reached their maximum when the model started with 400 topics, thus we use this value in all the experiments. In addition, we also set α_v = β_v = 0.001 for all v and γ_k = 0.1 for all k. The results are shown in Figures 7 to 9. The (a) graphs all show the number of clusters that the model found at each iteration, whereas the (b) graphs show the topic coherence at each iteration. In general, similar patterns are observed. It is evident that convergence is reached quickly. In all cases, convergence is reached by the 15th iteration and the variation in the results is typically relatively small.
Influence of alpha and beta
The hyperparameters α and β represent the shape and scale parameters of the gamma distribution, respectively, and γ represents the hyperparameter of the Dirichlet prior. We assume that α_v = α, β_v = β and γ_k = γ for all v and k. The parameter γ is analogous to the α parameter of the GSDMM. The authors of GSDMM conducted experiments to investigate the impact of different selections of this parameter on the number of clusters found and they observed that it had a very small impact. As expected, we also observed similar results with GPM, thus we only focus on the impact of α and β, assuming γ = 0.1. The GPM was run on the Pascal Flickr dataset for K = 40, α = 0.01, 0.05, 0.25, 0.5, 0.75, 1, 2 and β = 5, 2, 1, 0.5, 0.2. Then the final number of clusters found was recorded. The results on the Pascal Flickr dataset are shown in Figure 10.

Owing to the computationally heavy nature of performing a grid search, each experiment was run only once per pair of α and β values, with the starting number of topics set to be at least 20 more than the true value. Figure 10 shows a clear downward trend for all values of β, the scale parameter. However, the final number of topics found is clearly influenced by the shape parameter, α. On the Pascal Flickr dataset, the model was only able to get close to the true number of topics (20) when α was chosen to be near 0.5. Similar downward trends were also observed on the other two datasets and β was also found to have minimal impact on the number of topics found. However, for the Tweet dataset, α was required to be near 0.05 for the model to find close to 89 topics, whereas the Search Snippets dataset required an α value close to 1.5 to find close to 8 topics. Figure 11 shows the probability density functions of the gamma distributions with these different values of α and a fixed value of β = 0.5. It is evident that these choices of alpha tend to produce skewed distributions, which place most of their probability near 0. A similar comparison to that of Figure 10 was also conducted to investigate the impact of α and β on the coherence scores and the results from the Pascal Flickr dataset are shown in Figure 12.
It is evident from Figure 12 that increasing decreases the number of topics the model tends to find. Conversely, as decreases, the number of topics found increases. Interestingly, this behaviour of in GPM is similar to the behaviour that was observed with the parameter of GSDMM [6]. According to Equation 3, for small values of , the probability of a document belonging to a topic is more sensitive to , the number of times word is observed in topic . This means that, when a topic has more words in common with a document it is more likely to be assigned to that topic. On the other hand, when is large, the probability of being assigned to a topic is less sensitive to . Instead, the probability is influenced more by the first term in Equation 3, which is dependent on , the number of documents in topic . As a result, a topic with more documents is likely to get larger since Equation 3 will assign more probability to topics that contain more documents. This explains the tendency of the model to assign all the documents to one topic when is large.
In practice, the number of clusters is not usually known in advance, so it is not possible to use the true number of clusters to choose a suitable value for α. Furthermore, the coherence is also not always highest at the true number of clusters. In order to overcome this challenge, we then considered setting the hyperparameters of the gamma prior to α = β = 0.001. 3 The top horizontal line in Figure 12 shows the coherence score found by the GPM under this gamma(0.001, 0.001) prior. 4 The coherence is not only higher than that of the other selections of α and β, but the GPM also outperforms the GSDMM model (indicated by the lower horizontal line). In addition, the average number of clusters found by the GPM was also close to the true value. It is for this reason that we recommend the use of α = β = 0.001 and use these values in all our experiments.
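A quick SciPy check (not taken from the paper) of why this choice behaves as described in footnote 3: a gamma distribution with shape and scale both equal to 0.001 concentrates almost all of its probability next to zero, matching the many zero word counts in short documents.

from scipy.stats import gamma

prior = gamma(a=0.001, scale=0.001)  # shape alpha = 0.001, scale beta = 0.001
print(prior.mean())                  # expected value alpha * beta = 1e-06
print(prior.cdf(1e-6))               # ~0.99 of the probability lies below 1e-06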
In conclusion, it is clear that this selection of α and β greatly simplifies the topic modelling process for the GPM. In addition, we have also seen that the model possesses the flexibility of allowing the user to easily adjust the number of topics found by simply changing the value of α.
Comparison with Dirichlet-Multinomial mixture model
The GSDMM model was originally presented as a clustering algorithm, as opposed to a topic model, and was consequently assessed on its ability to cluster documents [6]. As the GPM is designed for topic modelling, we assess its ability to extract meaningful topics by investigating the topic coherence. Despite there being other topic models for short text, the GPM is related to the GSDMM in that it also makes the one-topic-per-document assumption and is able to automatically select the number of topics. In order to compare the performance of the GPM topic model against that of the GSDMM, both models were fitted to the datasets and the results are summarised in the figures and tables that follow. All models were run for 15 iterations, starting with 400 initial topics. This was repeated 10 times for each model, with α = β = 0.001 for all words and γ = 0.1 for all topics.

3 In Bayesian literature, the gamma distribution with shape and rate parameters both equal to 0.001 is a commonly used non-informative prior [39]. In this paper, the gamma distribution is parameterised by shape and scale parameters. Despite using the scale-parameter instead of the rate-parameter formulation, we show empirically that choosing 0.001 yields better performance than other choices. This is likely because the data contains many zeros and this gamma places most of its probability around 0.

4 This result is for a fixed alpha and beta, but it is shown in the graph as a horizontal line across all alpha values to emphasize that this choice of parameters outperforms the GPM with other choices of alpha and beta. For ease of comparison, the GSDMM is also indicated by a horizontal line although its hyperparameters are also fixed.

Figure 13 shows boxplots of the topic coherence scores. It is evident that the GPM generally outperforms the GSDMM in all three datasets, as the topic coherence of the topics obtained by the GPM is mostly larger than that of the GSDMM. For completeness, we also consider the number of clusters found by each model in Table 5.
For the Tweet corpus, the true number of clusters, as determined by human annotators, is 89. On average, the GSDMM was more inclined to find more clusters than the GPM. It is also worthwhile to note that the results obtained for the GSDMM on the Tweet dataset are close to those obtained in the original paper [6]. On the Pascal Flickr and Search Snippets datasets, both models tended to find more clusters than those determined by the human annotators. However, the GPM was able to get closer to the true value than the GSDMM. Interestingly, on the Search Snippets corpus, the GSDMM found significantly more topics than were found by the GPM. It is likely the case that the GSDMM found finer-grained topics, thus increasing the number of topics found, whereas the GPM model discovered fewer, but more general, topics.

We now consider the actual topics found by the models on one of the datasets. We specifically focus on the Search Snippets results in order to observe what other topics were found by the GSDMM model that were not found by the GPM. Table 6 lists some of the top words for each of the topics found by the GPM (column 2), as well as possible labels for each topic (column 1). The labels have been assigned based on the original 8 topics of the dataset, and then a possible subtopic label was added in parentheses. This labelling and selection of subtopics was performed subjectively, so another annotator's assessment may produce different results. In assigning the topics to the predefined labels, one challenge faced was that some topics had potential overlaps. For instance, a topic in the Engineering category could also have fallen in the Education-Science category.

By analysing the first column, we also observe that 7 out of the 8 original predefined topics appear to be represented in these results. According to our labelling, the missing topic is the Engineering topic. This is most likely due to the fact that only 369 of the 12 295 documents belonged to this topic, which is merely 3% of the entire corpus. The proportions of each topic in the Search Snippets corpus are shown in Figure 14. As was observed in Table 5, the GSDMM found more than 250 extra topics. Table 7 shows two additional subtopics for each of the 8 predefined categories that were found by the GSDMM, but not the GPM. Since GSDMM found significantly more topics, it was able to uncover finer-grained topics.
Thus, in such cases where a brief overview is desired, the model producing the smaller number of topics might be preferable. Where more detail is desired, one can opt for a model that produces more topics.
Conclusions and Future Work
Despite the lack of attention given to the Poisson distribution in topic modelling, we have shown its utility in modelling short text. We proposed a new topic model for short text, the Gamma-Poisson mixture (GPM) topic model, and performed extensive experimentation in order to investigate its properties empirically. In addition, we also derived a collapsed Gibbs sampler for the model.
As is well known in the field of topic modelling, the selection of an appropriate number of topics is a challenge. Our new topic model is able to address this through its ability to automatically select the number of topics. This is achieved via the use of the collapsed Gibbs sampler. We also showed that our model was able to find estimates that were close to the true number of topics on labelled corpora. A further benefit of the collapsed Gibbs sampler is that it converges very quickly, thus avoiding the need for the long burn-in periods that are typical in the application of traditional Gibbs samplers.
In addition, GPM possesses the flexibility of allowing the user to adjust the number of topics found as required. It also tends to produce consistent results with little variation. Furthermore, the GPM outperformed the GSDMM in our comparisons: the number of topics found by GPM was closer to the true value, and GPM was able to find topics with higher average coherence scores, making it a good option for topic modelling on short text.
There are many avenues for future work related to the GPM. We plan to assess the GPM on other performance measures, such as classification accuracy in end-to-end classification. We will also perform further experimentation to compare it against other short text topic models.
It was shown in [6] that the second term on the right-hand side of Equation A2 simplifies to

p(z | γ) = Δ(m + γ) / Δ(γ), (A3)

where m = [m_1, m_2, ..., m_K] and m_k denotes the number of documents assigned to the kth topic. Using the same Δ notation as in [6], it follows that Δ(γ) = ∏_{k=1}^{K} Γ(γ_k) / Γ(∑_{k=1}^{K} γ_k).
Under GPM, documents and words are assumed to be independent. In addition, the word counts are assumed to follow a Poisson distribution. Thus, given the topics, the corpus can be modelled as

p(D | z, λ) = ∏_{d=1}^{D} ∏_{v=1}^{V} p(x_{d,v} | λ_{z_d,v}) = ∏_{d=1}^{D} ∏_{v=1}^{V} λ_{z_d,v}^{x_{d,v}} e^{-λ_{z_d,v}} / x_{d,v}!. (A5)

In order to simplify further derivation of the collapsed Gibbs sampler, Equation A5 can be re-expressed by introducing m_k, the number of documents assigned to the kth topic, and n_{k,v}, the number of times word v is observed in topic k. It then follows that λ_{k,v} ∼ Gamma(n_{k,v} + α, β/(m_k β + 1)) and that the topic distribution estimates are given by the posterior means λ̂_{k,v} = (n_{k,v} + α) β / (m_k β + 1).
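The gamma posterior above is the standard Gamma-Poisson conjugate update under the shape/scale parameterisation stated in footnote 3. A small numerical sanity check (the counts for m_k and n_{k,v} below are invented for illustration):

import numpy as np

alpha, beta = 0.001, 0.001   # gamma prior hyperparameters (shape, scale)
m_k, n_kv = 50, 7            # documents in topic k, total count of word v in topic k

# Unnormalised posterior density: lambda^(n_kv + alpha - 1) * exp(-lambda * (m_k + 1/beta))
lam = np.linspace(1e-8, 0.05, 500_000)
log_post = (n_kv + alpha - 1) * np.log(lam) - lam * (m_k + 1.0 / beta)
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, lam)

print(np.trapz(lam * post, lam))                   # numerical posterior mean
print((n_kv + alpha) * beta / (m_k * beta + 1.0))  # closed-form posterior mean, ~0.00667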
Figure 1: The circles show the number of documents that contain the word "said" for different frequencies. The curve denotes predicted frequencies from a Poisson distribution fitted to the data. Adapted from "Poisson mixtures" by Church, K. W., & Gale, W. A. (1995). Natural Language Engineering, 1(2), 163-190. Copyright by Cambridge University Press 1995. Reproduced with permission.
Figure 2: The circles show the number of documents that contain the word "jet" for different frequencies in the Pascal Flickr dataset. The curve denotes predicted frequencies from a Poisson distribution fitted to the data.
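The kind of fit shown in Figures 1 and 2 can be reproduced in spirit with a few lines of SciPy: estimate the Poisson rate from the average per-document count of the word and compare predicted with observed document frequencies. The counts below are invented for illustration and are not the Pascal Flickr data.

import numpy as np
from scipy.stats import poisson

# Hypothetical per-document counts of one word across a 4 821-document corpus.
counts = np.array([0] * 4700 + [1] * 90 + [2] * 25 + [3] * 6)
lam = counts.mean()   # maximum-likelihood Poisson rate

for j in range(4):
    observed = int((counts == j).sum())
    predicted = len(counts) * poisson.pmf(j, lam)
    print(f"frequency {j}: observed {observed}, predicted {predicted:.1f}")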
Figure 3: Graphical model of GPM. Shaded squares indicate fixed parameters. Shaded circles denote observed variables, such as a document.
Figures 4 to 6 provide a visual summary of these results. According to Figures 4(a), 5(a) and 6(a), in all cases, the model approaches the true number of topics as the starting number of topics increases. In most cases, the most accurate number of topics was found by setting the starting number of topics to 400. For the Tweet dataset, the model converges to between 70 and 80 topics, which is close to the true value of 89. For the other datasets, the model slightly over-estimates the number of topics. On the Pascal Flickr dataset, at a starting value of 400, the final number of clusters is over-estimated by about 10 topics (true value 20), whereas on the Search Snippets dataset, the final number of clusters is over-estimated by about 20 topics (true value 8).
Figure 4: Tweet dataset: (a) Average final number of topics found by the model (b) Average topic coherence scores
Figure 5: Pascal Flickr dataset: (a) Average final number of topics found by the model (b) Average topic coherence scores
Figure 6: Search Snippets dataset: (a) Average final number of topics found by the model (b) Average topic coherence scores
Figure 7: Tweet dataset: (a) Number of topics found by the model per iteration (b) Average topic coherence score per iteration
Figure 8: Pascal Flickr dataset: (a) Number of topics found by the model per iteration (b) Average topic coherence score per iteration
Figure 9: Search Snippets dataset: (a) Number of topics found by the model per iteration (b) Average topic coherence score per iteration
Figure 10: Final number of topics found for different values of alpha and beta on the Pascal Flickr dataset.

Based on the chosen values of α and β, the expected values of the gamma priors for the Tweet, Pascal Flickr and Search Snippets datasets are 0.025, 0.25 and 0.75, respectively. Considering the short length of the documents and the massive sizes of the vocabularies, it is not surprising that most words will have very low observed frequencies. In fact, since many zeros are observed for each word, the estimates of the Poisson parameters are also very small, which results in most of the probability being loaded on zero. For example, P(X = 0) = 0.975 where X ∼ Poisson(0.025).
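The zero-probability figure quoted above is easy to verify, since for a Poisson variable the probability of a zero count is exp(-mean); the same check for the other two expected prior values is included for completeness.

from scipy.stats import poisson

for mean in (0.025, 0.25, 0.75):              # expected gamma-prior values for the three datasets
    print(mean, round(poisson.pmf(0, mean), 3))  # 0.975, 0.779, 0.472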
Figure 11: Probability density functions of the gamma distributions for α = 0.05, 0.5, 1.5 and a fixed value of β = 0.5.
Figure 12: Average topic coherence of topics found for different values of alpha and beta on the Pascal Flickr dataset. The labels at each point indicate the number of topics found by the model.
Figure 13: Coherence scores of the different models.
Figure 14: Relative frequency of documents belonging to each topic in the Search Snippets corpus. The number above each bar is the frequency of documents belonging to each topic. The corpus contains a total of 12 295 documents.
The first term on the right-hand side of Equation A2 can be expressed as

p(D | z, α, β) = ∫ p(D | z, λ) p(λ | α, β) dλ.

The derivation of the conditional distribution in Equation A1 can now be concluded by substituting Equation A8, making use of a standard property of the Γ function. If common values α, β and γ are assumed for all words and topics, then Equation A9 simplifies further (Equation A10). After sampling from Equation A9 or A10 until convergence, the λ parameters, which give the topic distributions, are estimated by their posterior means, namely the means of the gamma posterior given earlier.
Table 1: Notation (symbol descriptions).
- number of documents in the corpus
- size of the vocabulary
- number of topics
- length of the dth document (d = 1, 2, ...)
- collection of documents
- frequency vector of the dth document
- number of times word v occurs in the dth document (v = 1, 2, ...)
- vector of topic assignments of each document
- topic assignment of document d
- number of documents in topic k (k = 1, 2, ...)
- number of times word v is observed in topic k
- number of words in topic k
Algorithm 1: Collapsed Gibbs sampler for GPM.
Input: corpus of documents; initial number of topics K
Output: topic label z_d for each document d
Begin
  Initialise m_k (documents in topic k), n_k (words in topic k) and n_{k,v} (count of word v in topic k) to 0 for each topic k
  for each document d = 1, 2, ..., D
    randomly sample a topic for d: z_d ∼ Multinomial(1/K)
    m_{z_d} ← m_{z_d} + 1 and n_{z_d} ← n_{z_d} + N_d
    for each word frequency x_{d,v} in d
      n_{z_d,v} ← n_{z_d,v} + x_{d,v}
  for i = 1, 2, ..., I iterations
    for each document d = 1, 2, ..., D
      record the current topic of document d: k := z_d
      m_k ← m_k − 1 and n_k ← n_k − N_d
      for each word frequency x_{d,v} in d
        n_{k,v} ← n_{k,v} − x_{d,v}
      sample a new topic for d: z_d ∼ p(z_d = k | z_(¬d), D)  (Equation 3)
      m_{z_d} ← m_{z_d} + 1 and n_{z_d} ← n_{z_d} + N_d
      for each word frequency x_{d,v} in d
        n_{z_d,v} ← n_{z_d,v} + x_{d,v}
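A compact Python rendering of the bookkeeping in Algorithm 1 follows. The conditional distribution of Equation 3 is not reproduced in this excerpt, so assignment_probs below is a placeholder to be filled in with the paper's expression; the variable names are ours, not the paper's.

import numpy as np

def gibbs_gpm(X, K, iterations, assignment_probs, seed=0):
    """X: (D, V) array of word counts; K: initial number of topics."""
    rng = np.random.default_rng(seed)
    D, V = X.shape
    z = rng.integers(K, size=D)        # random initial topic per document
    m = np.zeros(K)                    # documents per topic
    n = np.zeros(K)                    # total words per topic
    n_kv = np.zeros((K, V))            # word counts per topic
    for d in range(D):
        m[z[d]] += 1; n[z[d]] += X[d].sum(); n_kv[z[d]] += X[d]
    for _ in range(iterations):
        for d in range(D):
            k = z[d]                   # remove document d from its current topic
            m[k] -= 1; n[k] -= X[d].sum(); n_kv[k] -= X[d]
            p = assignment_probs(X[d], m, n, n_kv)   # Equation 3, one value per topic
            z[d] = rng.choice(K, p=p / p.sum())
            m[z[d]] += 1; n[z[d]] += X[d].sum(); n_kv[z[d]] += X[d]
    return z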
The Tweet dataset [6] is a collection of tweets from the 2011 and 2012 Text REtrieval Conference. The most relevant tweets in 89 different categories were selected to create this collection. Each tweet is regarded as an individual document.

The Pascal Flickr dataset contains captions of images from Flickr and the Pattern Analysis, Statistical Modelling, and Computational Learning (PASCAL) Visual Object Classes Challenge [36]. The captions are divided into 20 different classes and altogether the corpus contains 4 821 captions, which are each treated as individual documents.

The Search Snippets dataset [37] was created by first selecting 8 different domains: Business, Computers, Culture-Arts-Entertainment, Education-Science, Engineering, Health, Politics-Society and Sports. For each domain, 11 to 118 related phrases were typed into the Google search engine, and then the snippets from the top 20 to 30 results were collected to create a corpus of 12 295 snippets.
Table 2: Document statistics.

Dataset           Number of documents   Size of vocabulary   Number of topics   Average (SD) document length   Minimum (Maximum) document length
Tweet             2 472                 5098                 89                 8.5 (3.2)                      2 (20)
Pascal Flickr     4 821                 3188                 20                 4.9 (1.8)                      1 (19)
Search Snippets   12 295                4705                 8                  14.4 (4.4)                     1 (37)
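The quantities reported in Table 2 are straightforward to compute for any tokenised corpus; a small helper is sketched below, with a toy corpus standing in for the real datasets.

import numpy as np

def corpus_statistics(docs):
    """docs: list of token lists. Returns the quantities reported in Table 2."""
    lengths = np.array([len(d) for d in docs])
    vocab = {w for d in docs for w in d}
    return {
        "documents": len(docs),
        "vocabulary": len(vocab),
        "avg_length": lengths.mean(),
        "std_length": lengths.std(),
        "min_length": int(lengths.min()),
        "max_length": int(lengths.max()),
    }

print(corpus_statistics([["a", "b", "c"], ["a", "d"], ["e"]]))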
Table 3: Average final number of topics found by the model (and standard deviation), by starting value of the number of topics.
Table 4: Average topic coherence scores (and standard deviation), by starting value of the number of topics.
Table 5: Summary of the number of topics found by each model (GSDMM and GPM).
Table 6: Topics found by GPM. Each line gives the topic (subtopic) label followed by its top words.
Business (software): trillian instant pro studios creators messenger accounting
Business (trade): import trade export leads business international global
Business (consumer): consumption consumer motives goals ratneshwar glen mick
CAE (Chris Pirillio): pirillo chris live internet broadcast podcast itunes streaming
CAE (music): lyrics song com archive searchable songs database search
CAE (painting): surreal leonardo del vinci picasso surrealism artlex artchive
CAE (videos): videos metacafe ping pong movies internet tags amazing clips
CAE (movies): imdb movies celebs title name diesel movie mtv aesthetic weapon
CAE (posters): posters allposters com prints custom professional framing
CAE (transformers): transformers movie world bay war alien directed races
Computers (networking): approach computer networking featuring ross kurose
Computers (root): root roottalk expression formula cern draw retrieve rene value
Computers (programming): computer programming software web memory wikipedia intel
Computers (code): formula expression kspread value user symbol log api input
Computers (connections): speed test com accurate flash cable speedtest dsl connections
ES (news): information com news wikipedia research edu home science
ES (history): eawc edu classic ancient exploration greece evansville anthony
ES (dictionary): dictionary online definition word christ merriam webster
Health (diet): calorie calories energy drink enviga counter nutrition picnics
Health (disease): treatment arthritis cause symptoms diagnosis lupus disease
PS (society): bombs smoke homepage police press blogspot accounting bank
PS (politics): party bob led revolutionary worker communist revolution
Sports (cars): wheels rims car custom chrome tires truck inch tire
Sports (tennis): match hits russia anna chakvetadze sania financial india
Sports (quad biking): quad china atv automatic reverse quads gear product showroom
Sports/Business: goalkeepers cricket nasdaq information stock market security
CAE/Computer: span painting election contractors staining servicemagic
* Key: CAE = Culture-Arts-Entertainment, ES = Education-Science, PS = Politics-Society
Table 7: Selected topics found by GSDMM. Each line gives the topic (subtopic) label followed by its top words.
Business (economics): gdp economy product domestic gross economic value market
Business (jobs): jobs job com search careerbuilder accounting marketing sales sites
CAE (fashion): fashion designers design designer clothing accessories milan clothes
CAE (famous places): ballet hollywood california angeles los universal florida studios
Computers (systems): systems theory analysis design information programming amazon
Computers (security): security computer network spam virus spyware viruses networking
ES (genetics): research national gov laboratory genetic home institute genome
ES (earth): earth structure interior edu crust tectonics model kids gov core
Engineering (physics): physics quantum theory theoretical solid edu research technology
Engineering (Einstein): einstein albert physics nobel eric literature weisstein world time
Health (aids): hiv aids prevention epidemic cdc information gov health infection
Health (medical care): hospital patient doctor medical care news information health
PS (elections): party democratic political communist socialist republican labor news
PS (army): force navy naval air mil commander news fleet web reserve
Sports (swimming): swimming swim swimmers help information coaching technique
Sports (football): football fans game nba playoff story players assault adidas university
* Key: CAE = Culture-Arts-Entertainment, ES = Education-Science, PS = Politics-Society
The avid reader is referred to the following recent review papers for further reading on short text topic modelling: [25] and [26].
GSDMM is the collapsed Gibbs sampler version of DMM [6]. The authors in [6] used the abbreviation GSDMM (Gibbs Sampler DMM). We use this version in this paper, thus from here onwards we will also refer to the GSDMM as opposed to the DMM.
The top words that describe topic k are the words with the highest expected frequencies, λ̂_{k,v}.

Data Availability: All datasets can be obtained from https://github.com/qiang2100/STTM.

Conflict of Interest: The authors declare that there are no conflicts of interest regarding the publication of this paper.

Funding Statement: This work was performed as part of the employment of the authors by the University of Pretoria and was supported by the Centre for Artificial Intelligence Research (CAIR).

Appendix: The derivation of the collapsed Gibbs sampler for the Gamma-Poisson mixture model

A summary of the notation that will be used is given in Table 1. Since the topic estimates are only dependent on the topic assignments, it is only necessary to sample the topic assignment for each document. This is achieved by sampling from the conditional probability of a document belonging to a class (Equation A1). Owing to conditional independence, it follows that

p(D, z | α, β, γ) = p(D | z, α, β) p(z | γ). (A2)
Latent Dirichlet allocation. D M Blei, A Y Ng, M I Jordan, Journal of machine Learning research. 3D. M. Blei, A. Y. Ng, and M. I. Jordan, "Latent Dirichlet allocation," Journal of machine Learning research, vol. 3, no. Jan, pp. 993-1022, 2003.
PET: a statistical model for popular events tracking in social communities. C X Lin, B Zhao, Q Mei, J Han, Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining. the 16th ACM SIGKDD international conference on Knowledge discovery and data miningC. X. Lin, B. Zhao, Q. Mei, and J. Han, "PET: a statistical model for popular events tracking in social communities," in Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, 2010, pp. 929-938.
Twitterrank: finding topic-sensitive influential twitterers. J Weng, E.-P Lim, J Jiang, Q He, Proceedings of the third ACM international conference on Web search and data mining. the third ACM international conference on Web search and data miningJ. Weng, E.-P. Lim, J. Jiang, and Q. He, "Twitterrank: finding topic-sensitive influential twitterers," in Proceedings of the third ACM international conference on Web search and data mining, 2010, pp. 261-270.
Product recommendation with latent review topics. J Zhang, S Piramuthu, Information Systems Frontiers. 203J. Zhang and S. Piramuthu, "Product recommendation with latent review topics," Information Systems Frontiers, vol. 20, no. 3, pp. 617-625, 2018.
A biterm topic model for short texts. X Yan, J Guo, Y Lan, X Cheng, Proceedings of the 22nd international conference on World Wide Web. the 22nd international conference on World Wide WebX. Yan, J. Guo, Y. Lan, and X. Cheng, "A biterm topic model for short texts," in Proceedings of the 22nd international conference on World Wide Web, 2013, pp. 1445- 1456.
A Dirichlet multinomial mixture model-based approach for short text clustering. J Yin, J Wang, Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. the 20th ACM SIGKDD international conference on Knowledge discovery and data miningJ. Yin and J. Wang, "A Dirichlet multinomial mixture model-based approach for short text clustering," in Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, 2014, pp. 233-242.
Text classification from labeled and unlabeled documents using EM. K Nigam, A K Mccallum, S Thrun, T Mitchell, Machine learning. 392-3K. Nigam, A. K. McCallum, S. Thrun, and T. Mitchell, "Text classification from labeled and unlabeled documents using EM," Machine learning, vol. 39, no. 2-3, pp. 103-134, 2000.
Mixed-membership models of scientific publications. E Erosheva, S Fienberg, J Lafferty, Proceedings of the National Academy of Sciences. 1011E. Erosheva, S. Fienberg, and J. Lafferty, "Mixed-membership models of scientific publications," Proceedings of the National Academy of Sciences, vol. 101, no. suppl 1, pp. 5220-5227, 2004.
A comparison of the performance of latent Dirichlet allocation and the Dirichlet multinomial mixture model on short text. J Mazarura, A De Waal, 2016 Pattern Recognition Association of South Africa and Robotics and Mechatronics International Conference (PRASA-RobMech). J. Mazarura and A. De Waal, "A comparison of the performance of latent Dirichlet allocation and the Dirichlet multinomial mixture model on short text," in 2016 Pattern Recognition Association of South Africa and Robotics and Mechatronics International Conference (PRASA-RobMech), 2016, pp. 1-6.
Comparing Twitter and traditional media using topic models. W X Zhao, European conference on information retrieval. W. X. Zhao et al., "Comparing Twitter and traditional media using topic models," in European conference on information retrieval, 2011, pp. 338-349.
Enhancing topic modeling for short texts with auxiliary word embeddings. C Li, Y Duan, H Wang, Z Zhang, A Sun, Z Ma, ACM Transactions on Information Systems (TOIS). 36211C. Li, Y. Duan, H. Wang, Z. Zhang, A. Sun, and Z. Ma, "Enhancing topic modeling for short texts with auxiliary word embeddings," ACM Transactions on Information Systems (TOIS), vol. 36, no. 2, p. 11, 2017.
Poisson mixtures. K W Church, W A Gale, Natural Language Engineering. 12K. W. Church and W. A. Gale, "Poisson mixtures," Natural Language Engineering, vol. 1, no. 2, pp. 163-190, 1995.
Gamma-Poisson distribution model for text categorization. H Ogura, H Amano, M Kondo, ISRN Artificial Intelligence. 2013H. Ogura, H. Amano, and M. Kondo, "Gamma-Poisson distribution model for text categorization," ISRN Artificial Intelligence, vol. 2013, 2013.
Bayesian models for frequent terms in text. E M Airoldi, W W Cohen, S E Fienberg, Proceedings of the Classification Society of North America and INTERFACE Annual Meetings. the Classification Society of North America and INTERFACE Annual Meetings990991E. M. Airoldi, W. W. Cohen, and S. E. Fienberg, "Bayesian models for frequent terms in text," in Proceedings of the Classification Society of North America and INTERFACE Annual Meetings, 2005, vol. 990, p. 991.
Empirical study of topic modeling in Twitter. L Hong, B D Davison, Proceedings of the first workshop on social media analytics. the first workshop on social media analyticsL. Hong and B. D. Davison, "Empirical study of topic modeling in Twitter," in Proceedings of the first workshop on social media analytics, 2010, pp. 80-88.
Improving LDA topic models for microblogs via tweet pooling and automatic labeling. R Mehrotra, S Sanner, W Buntine, L Xie, Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval. the 36th international ACM SIGIR conference on Research and development in information retrievalR. Mehrotra, S. Sanner, W. Buntine, and L. Xie, "Improving LDA topic models for microblogs via tweet pooling and automatic labeling," in Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval, 2013, pp. 889-892.
Short and sparse text topic modeling via selfaggregation. X Quan, C Kit, Y Ge, S J Pan, Twenty-Fourth International Joint Conference on Artificial Intelligence. X. Quan, C. Kit, Y. Ge, and S. J. Pan, "Short and sparse text topic modeling via self- aggregation," in Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.
Topic modeling of short texts: A pseudo-document view. Y Zuo, Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. the 22nd ACM SIGKDD international conference on knowledge discovery and data miningY. Zuo et al., "Topic modeling of short texts: A pseudo-document view," in Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, 2016, pp. 2105-2114.
Topic modeling for short texts with auxiliary word embeddings. C Li, H Wang, Z Zhang, A Sun, Z Ma, Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval. the 39th International ACM SIGIR conference on Research and Development in Information RetrievalC. Li, H. Wang, Z. Zhang, A. Sun, and Z. Ma, "Topic modeling for short texts with auxiliary word embeddings," in Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, 2016, pp. 165-174.
Improving topic models with latent feature word representations. D Q Nguyen, R Billingsley, L Du, M Johnson, Transactions of the Association for Computational Linguistics. 3D. Q. Nguyen, R. Billingsley, L. Du, and M. Johnson, "Improving topic models with latent feature word representations," Transactions of the Association for Computational Linguistics, vol. 3, pp. 299-313, 2015.
The dual-sparse topic model: mining focused topics and focused terms in short text. T Lin, W Tian, Q Mei, H Cheng, Proceedings of the 23rd international conference on World wide web. the 23rd international conference on World wide webT. Lin, W. Tian, Q. Mei, and H. Cheng, "The dual-sparse topic model: mining focused topics and focused terms in short text," in Proceedings of the 23rd international conference on World wide web, 2014, pp. 539-550.
Discrete component analysis. W Buntine, A Jakulin, International Statistical and Optimization Perspectives Workshop" Subspace, Latent Structure and Feature Selection. W. Buntine and A. Jakulin, "Discrete component analysis," in International Statistical and Optimization Perspectives Workshop" Subspace, Latent Structure and Feature Selection", 2005, pp. 1-33.
ULW-DMM: An effective topic modeling method for microblog short text. J Yu, L Qiu, IEEE Access. 7J. Yu and L. Qiu, "ULW-DMM: An effective topic modeling method for microblog short text," IEEE Access, vol. 7, pp. 884-893, 2018.
Word network topic model: a simple but general solution for short and imbalanced texts. Y Zuo, J Zhao, K Xu, Knowledge and Information Systems. 482Y. Zuo, J. Zhao, and K. Xu, "Word network topic model: a simple but general solution for short and imbalanced texts," Knowledge and Information Systems, vol. 48, no. 2, pp. 379-398, 2016.
Q Jipeng, Q Zhenyu, L Yun, Y Yunhao, W Xindong, arXiv:1904.07695Short Text Topic Modeling Techniques, Applications, and Performance: A Survey. arXiv preprintQ. Jipeng, Q. Zhenyu, L. Yun, Y. Yunhao, and W. Xindong, "Short Text Topic Modeling Techniques, Applications, and Performance: A Survey," arXiv preprint arXiv:1904.07695, 2019.
A Detailed Survey on Topic Modeling for Document and Short Text Data. S Likhitha, B S Harish, H M Kumar, 10.5120/ijca2019919265International Journal of Computer Applications. 17839S. Likhitha, B. S. Harish, and H. M. Keerthi Kumar, "A Detailed Survey on Topic Modeling for Document and Short Text Data," International Journal of Computer Applications, vol. 178, no. 39, pp. 1-9, Aug. 2019, doi: 10.5120/ijca2019919265.
A study of Poisson query generation model for information retrieval. Q Mei, H Fang, C Zhai, Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval. the 30th annual international ACM SIGIR conference on Research and development in information retrievalQ. Mei, H. Fang, and C. Zhai, "A study of Poisson query generation model for information retrieval," in Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval, 2007, pp. 319-326.
GaP: a factor model for discrete data. J Canny, Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval. the 27th annual international ACM SIGIR conference on Research and development in information retrievalJ. Canny, "GaP: a factor model for discrete data," in Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, 2004, pp. 122-129.
A topic model based on Poisson decomposition. H Jiang, R Zhou, L Zhang, H Wang, Y Zhang, Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. the 2017 ACM on Conference on Information and Knowledge ManagementH. Jiang, R. Zhou, L. Zhang, H. Wang, and Y. Zhang, "A topic model based on Poisson decomposition," in Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 2017, pp. 1489-1498.
Admixture of Poisson MRFs: A topic model with word dependencies. D Inouye, P Ravikumar, I Dhillon, International Conference on Machine Learning. D. Inouye, P. Ravikumar, and I. Dhillon, "Admixture of Poisson MRFs: A topic model with word dependencies," in International Conference on Machine Learning, 2014, pp. 683-691.
Scalable deep Poisson factor analysis for topic modeling. Z Gan, C Chen, R Henao, D Carlson, L Carin, International Conference on Machine Learning. Z. Gan, C. Chen, R. Henao, D. Carlson, and L. Carin, "Scalable deep Poisson factor analysis for topic modeling," in International Conference on Machine Learning, 2015, pp. 1823-1832.
Bayesian methods for frequent terms in text: Models of contagion and the∆ 2 statistic. E M Airoldi, W W Cohen, S E Fienberg, CSNA & INTERFACE Annual Meetings. St. Louis, MI3E. M. Airoldi, W. W. Cohen, and S. E. Fienberg, "Bayesian methods for frequent terms in text: Models of contagion and the∆ 2 statistic," in CSNA & INTERFACE Annual Meetings, St. Louis, MI, 2005, vol. 3.
Modeling word burstiness using the Dirichlet distribution. R E Madsen, D Kauchak, C Elkan, Proceedings of the 22nd international conference on Machine learning. the 22nd international conference on Machine learningR. E. Madsen, D. Kauchak, and C. Elkan, "Modeling word burstiness using the Dirichlet distribution," in Proceedings of the 22nd international conference on Machine learning, 2005, pp. 545-552.
Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. S Geman, D Geman, IEEE Transactions. 6S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images," IEEE Transactions on pattern analysis and machine intelligence, no. 6, pp. 721-741, 1984.
Hierarchical Dirichlet Processes. Y W Teh, M I Jordan, M J Beal, D M Blei, 10.1198/016214506000000302Journal of the American Statistical Association. 101476Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei, "Hierarchical Dirichlet Processes," Journal of the American Statistical Association, vol. 101, no. 476, pp. 1566- 1581, Dec. 2006, doi: 10.1198/016214506000000302.
The Pascal visual object classes (voc) challenge. M Everingham, L Van Gool, C K Williams, J Winn, A Zisserman, International journal of computer vision. 882M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, "The Pascal visual object classes (voc) challenge," International journal of computer vision, vol. 88, no. 2, pp. 303-338, 2010.
Learning to classify short and sparse text & web with hidden topics from large-scale data collections. X.-H Phan, L.-M Nguyen, S Horiguchi, Proceedings of the 17th international conference on World Wide Web. the 17th international conference on World Wide WebX.-H. Phan, L.-M. Nguyen, and S. Horiguchi, "Learning to classify short and sparse text & web with hidden topics from large-scale data collections," in Proceedings of the 17th international conference on World Wide Web, 2008, pp. 91-100.
Optimizing semantic coherence in topic models. D Mimno, H M Wallach, E Talley, M Leenders, A Mccallum, Proceedings of the conference on empirical methods in natural language processing. the conference on empirical methods in natural language processingD. Mimno, H. M. Wallach, E. Talley, M. Leenders, and A. McCallum, "Optimizing semantic coherence in topic models," in Proceedings of the conference on empirical methods in natural language processing, 2011, pp. 262-272.
Bayesian cognitive modeling: A practical course. M Lee, E.-J Wagenmakers, Cambridge University PressM. Lee and E.-J. Wagenmakers, Bayesian cognitive modeling: A practical course. Cambridge University Press, 2014.
| [
"https://github.com/jrmazarura/GPM.",
"https://github.com/qiang2100/STTM",
"https://github.com/jrmazarura/GPM.",
"https://github.com/qiang2100/STTM.Conflict"
] |
[
"Tensor2Tensor for Neural Machine Translation",
"Tensor2Tensor for Neural Machine Translation"
] | [
"Ashish Vaswani ",
"Samy Bengio ",
"Eugene Brevdo ",
"Francois Chollet ",
"Aidan N Gomez ",
"Stephan Gouws ",
"Llion Jones ",
"Łukasz Kaiser ",
"Nal Kalchbrenner ",
"Niki Par-Mar ",
"Ryan Sepassi ",
"Noam Shazeer ",
"Jakob Uszkoreit "
] | [] | [
"Proceedings of AMTA"
] | Tensor2Tensor is a library for deep learning models that is very well-suited for neural machine translation and includes the reference implementation of the state-of-the-art Transformer model.Neural Machine Translation BackgroundMachine translation using deep neural networks achieved great success with sequence-tosequence models Sutskever et al. (2014); Bahdanau et al. (2014); Cho et al. (2014) that used recurrent neural networks (RNNs) with LSTM cells Hochreiter and Schmidhuber (1997). The basic sequence-to-sequence architecture is composed of an RNN encoder which reads the source sentence one token at a time and transforms it into a fixed-sized state vector. This is followed by an RNN decoder, which generates the target sentence, one token at a time, from the state vector.While a pure sequence-to-sequence recurrent neural network can already obtain good translation results Sutskever et al. (2014); Cho et al. (2014), it suffers from the fact that the whole input sentence needs to be encoded into a single fixed-size vector. This clearly manifests itself in the degradation of translation quality on longer sentences and was partially overcome in Bahdanau et al. (2014) by using a neural model of attention. Convolutional architectures have been used to obtain good results in word-level neural machine translation starting from Kalchbrenner and Blunsom (2013) and later in Meng et al. (2015). These early models used a standard RNN on top of the convolution to generate the output, which creates a bottleneck and hurts performance. Fully convolutional neural machine translation without this bottleneck was first achieved in Kaiser and Bengio (2016) and Kalchbrenner et al. (2016). The model in Kaiser and Bengio (2016) (Extended Neural GPU) used a recurrent stack of gated convolutional layers, while the model in Kalchbrenner et al. (2016) (ByteNet) did away with recursion and used left-padded convolutions in the decoder. This idea, introduced in WaveNet van den Oord et al. (2016), significantly improves efficiency of the model. The same technique was improved in a number of neural translation models recently, including Gehring et al. (2017) and Kaiser et al. (2017). | null | [
"https://www.aclweb.org/anthology/W18-1819.pdf"
] | 3,988,816 | 1803.07416 | 642c1b4a9da95ea4239708afc5929a5007a1870d |
Tensor2Tensor for Neural Machine Translation
2018. March 17 -21. 2018
Ashish Vaswani
Samy Bengio
Eugene Brevdo
Francois Chollet
Aidan N Gomez
Stephan Gouws
Llion Jones
Łukasz Kaiser
Nal Kalchbrenner
Niki Parmar
Ryan Sepassi
Noam Shazeer
Jakob Uszkoreit
Tensor2Tensor for Neural Machine Translation
Proceedings of AMTA
2018, vol. 1, March 17-21, 2018, Page 193
Tensor2Tensor is a library for deep learning models that is very well-suited for neural machine translation and includes the reference implementation of the state-of-the-art Transformer model.

Neural Machine Translation Background

Machine translation using deep neural networks achieved great success with sequence-to-sequence models Sutskever et al. (2014); Bahdanau et al. (2014); Cho et al. (2014) that used recurrent neural networks (RNNs) with LSTM cells Hochreiter and Schmidhuber (1997). The basic sequence-to-sequence architecture is composed of an RNN encoder which reads the source sentence one token at a time and transforms it into a fixed-sized state vector. This is followed by an RNN decoder, which generates the target sentence, one token at a time, from the state vector.

While a pure sequence-to-sequence recurrent neural network can already obtain good translation results Sutskever et al. (2014); Cho et al. (2014), it suffers from the fact that the whole input sentence needs to be encoded into a single fixed-size vector. This clearly manifests itself in the degradation of translation quality on longer sentences and was partially overcome in Bahdanau et al. (2014) by using a neural model of attention.

Convolutional architectures have been used to obtain good results in word-level neural machine translation starting from Kalchbrenner and Blunsom (2013) and later in Meng et al. (2015). These early models used a standard RNN on top of the convolution to generate the output, which creates a bottleneck and hurts performance. Fully convolutional neural machine translation without this bottleneck was first achieved in Kaiser and Bengio (2016) and Kalchbrenner et al. (2016). The model in Kaiser and Bengio (2016) (Extended Neural GPU) used a recurrent stack of gated convolutional layers, while the model in Kalchbrenner et al. (2016) (ByteNet) did away with recursion and used left-padded convolutions in the decoder. This idea, introduced in WaveNet van den Oord et al. (2016), significantly improves efficiency of the model. The same technique was improved in a number of neural translation models recently, including Gehring et al. (2017) and Kaiser et al. (2017).
Self-Attention
Instead of convolutions, one can use stacked self-attention layers. This was introduced in the Transformer model Vaswani et al. (2017) and has significantly improved state-of-the-art in machine translation and language modeling while also improving the speed of training. Research continues in applying the model in more domains and exploring the space of self-attention mechanisms. It is clear that self-attention is a powerful tool in general-purpose sequence modeling. While RNNs represent sequence history in their hidden state, the Transformer has no such fixed-size bottleneck. Instead, each timestep has full direct access to the history through the dot-product attention mechanism. This has the effect of both enabling the model to learn more distant temporal relationships, as well as speeding up training because there is no need to wait for a hidden state to propagate across time. This comes at the cost of memory usage, as the attention mechanism scales with t^2, where t is the length of the sequence. Future work may reduce this scaling factor.
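A toy NumPy sketch of the (single-head) dot-product self-attention described above; the t x t scores matrix makes the quadratic memory cost explicit. This is an illustration only, not the library's reference implementation, and the projection matrices are random placeholders.

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (t, d_model) sequence; Wq, Wk, Wv: projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (t, t): every step attends to all steps
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ V

t, d = 5, 8
rng = np.random.default_rng(0)
X = rng.normal(size=(t, d))
out = self_attention(X, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)   # (5, 8)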
The Transformer model is illustrated in Figure 1. It uses stacked self-attention and pointwise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1 respectively.
Encoder: The encoder is composed of a stack of identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, positionwise fully connected feed-forward network.
Decoder: The decoder is also composed of a stack of identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack.
More details about multi-head attention and overall architecture can be found in Vaswani et al. (2017).
Tensor2Tensor (T2T) is a library of deep learning models and datasets designed to make deep learning research faster and more accessible. T2T uses TensorFlow, Abadi et al. (2016), throughout and there is a strong focus on performance as well as usability. Through its use of TensorFlow and various T2T-specific abstractions, researchers can train models on CPU, GPU (single or multiple), and TPU, locally and in the cloud, usually with no or minimal devicespecific code or configuration.
Development began focused on neural machine translation and so Tensor2Tensor includes many of the most successful NMT models and standard datasets. It has since added support for other task types as well across multiple media (text, images, video, audio). Both the number of models and datasets has grown significantly.
Usage is standardized across models and problems which makes it easy to try a new model on multiple problems or try multiple models on a single problem. See Example Usage (appendix B) to see some of the usability benefits of standardization of commands and unification of datasets, models, and training, evaluation, decoding procedures.
Development is done in the open on GitHub (http://github.com/tensorflow/tensor2tensor) with many contributors inside and outside Google.
System Overview
There are five key components that specify a training run in Tensor2Tensor:
1. Datasets: The Problem class encapsulates everything about a particular dataset. A Problem can generate the dataset from scratch, usually downloading data from a public source, building a vocabulary, and writing encoded samples to disk. Problems also produce input pipelines for training and evaluation as well as any necessary additional information per feature (for example, its type, vocabulary size, and an encoder able to convert samples to and from human and machine-readable representations).
2. Device configuration: the type, number, and location of devices. TensorFlow and Tensor2Tensor currently support CPU, GPU, and TPU in single and multi-device configurations. Tensor2Tensor also supports both synchronous and asynchronous data-parallel training.
3. Hyperparameters: parameters that control the instantiation of the model and training procedure (for example, the number of hidden layers or the optimizer's learning rate). These are specified in code and named so they can be easily shared and reproduced.
4. Model: the model ties together the preceding components to instantiate the parameterized transformation from inputs to targets, compute the loss and evaluation metrics, and construct the optimization procedure.
5. Estimator and Experiment: These classes, which are part of TensorFlow, handle instantiating the runtime, running the training loop, and executing basic support services like model checkpointing, logging, and alternation between training and evaluation.
These abstractions enable users to focus their attention only on the component they're interested in experimenting with. Users that wish to try models on a new problem usually only have to define a new problem. Users that wish to create or modify models only have to create a model or edit hyperparameters. The other components remain untouched, out of the way, and available for use, all of which reduces mental load and allows users to more quickly iterate on their ideas at scale.
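To make the "define a new problem" workflow concrete, the sketch below registers a hypothetical text-to-text Problem. The class and method names follow the library's data_generators API as we recall it, so they may need small adjustments for a given Tensor2Tensor version, and the corpus path is a placeholder.

from tensor2tensor.data_generators import text_problems
from tensor2tensor.utils import registry


@registry.register_problem
class TranslateMyCorpus(text_problems.Text2TextProblem):
    """A hypothetical translation problem over a user-supplied parallel corpus."""

    @property
    def approx_vocab_size(self):
        return 2 ** 15  # ~32k subword vocabulary

    def generate_samples(self, data_dir, tmp_dir, dataset_split):
        # my_parallel_corpus.tsv is a placeholder: one "source\ttarget" pair per line.
        with open("my_parallel_corpus.tsv") as f:
            for line in f:
                source, target = line.rstrip("\n").split("\t")
                yield {"inputs": source, "targets": target}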
Appendix A contains an outline of the code and appendix B contains example usage.
Tensor2Tensor provides a vehicle for research ideas to be quickly tried out and shared. Components that prove to be very useful can be committed to more widely-used libraries like Tensor-Flow, which contains many standard layers, optimizers, and other higher-level components. Tensor2Tensor supports library usage as well as script usage so that users can reuse specific components in their own model or system. For example, multiple researchers are continuing work on extensions and variations of the attention-based Transformer model and the availability of the attention building blocks enables that work.
Some examples:
• The Image Transformer Parmar et al. (2018) extends the Transformer model to images. It relies heavily on many of the attention building blocks in Tensor2Tensor and adds many of its own.
• tf.contrib.layers.rev_block, implementing a memory-efficient block of reversible layers as presented in Gomez et al. (2017), was first implemented and exercised in Tensor2Tensor.
• The Adafactor optimizer (pending publication), which significantly reduces memory requirements for second-moment estimates, was developed within Tensor2Tensor and tried on various models and problems.
• tf.contrib.data.bucket_by_sequence_length enables efficient processing of sequence inputs on GPUs in the new tf.data.Dataset input pipeline API. It was first implemented and exercised in Tensor2Tensor.
Reproducibility and Continuing Development
Continuing development on a machine learning codebase while maintaining the quality of models is a difficult task because of the expense and randomness of model training. Freezing a codebase to maintain a certain configuration, or moving to an append-only process has enormous usability and development costs. We attempt to mitigate the impact of ongoing development on historical reproducibility through 3 mechanisms:
1. Named and versioned hyperparameter sets in code
2. End-to-end regression tests that run on a regular basis for important model-problem pairs and verify that certain quality metrics are achieved.
3. Setting random seeds on multiple levels (Python, numpy, and TensorFlow) to mitigate the effects of randomness (though this is effectively impossible to achieve in full in a multithreaded, distributed, floating-point system).
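One possible realization of mechanism 3, using the TensorFlow 1.x seeding call that is contemporary with this paper (the seed value is arbitrary):

import random
import numpy as np
import tensorflow as tf

SEED = 1234
random.seed(SEED)         # Python-level randomness
np.random.seed(SEED)      # numpy-level randomness
tf.set_random_seed(SEED)  # TensorFlow graph-level seed (tf.random.set_seed in TF 2.x)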
If necessary, because the code is under version control on GitHub (http://github.com/tensorflow/tensor2tensor), we can always recover the exact code that produced certain experiment results.
Figure 1: The Transformer model architecture.
• Create Experiment, including training and evaluation hooks which control support services like logging and checkpointing
• Create Estimator encapsulating the model function
  - T2TModel.estimator_model_fn
    * model(features)
      · model.bottom: This uses feature type information from the Problem to transform the input features into a form consumable by the model body (for example, embedding integer token ids into a dense float space).
      · model.body: The core of the model.
      · model.top: Transforming the output of the model body into the target space using information from the Problem
      · model.loss
    * When training: model.optimize
    * When evaluating: create evaluation metrics
• Create input functions
  - Problem.input_fn: produce an input pipeline for a given mode. Uses TensorFlow's tf.data.Dataset API.
    * Problem.dataset, which creates a stream of individual examples
    * Pad and batch the examples into a form ready for efficient processing
• Run the Experiment
  - estimator.train
    * train_op = model_fn(input_fn(mode=TRAIN))
    * Run the train_op for the number of training steps specified
  - estimator.evaluate
    * metrics = model_fn(input_fn(mode=EVAL))
    * Accumulate the metrics across the number of evaluation steps specified

B Example Usage

Tensor2Tensor usage is standardized across problems and models. Below you'll find a set of commands that generates a dataset, trains and evaluates a model, and produces decodes from that trained model. Experiments can typically be reproduced with the (problem, model, hyperparameter set) triple. The following commands train the attention-based Transformer model on WMT data translating from English to German:
pip install tensor2tensor
PROBLEM=translate_ende_wmt32k
MODEL=transformer
HPARAMS=transformer_base
# Generate data
t2t-datagen \
--problem=$PROBLEM \
--data_dir=$DATA_DIR \
--tmp_dir=$TMP_DIR
# Train and evaluate
t2t-trainer \
--problems=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--data_dir=$DATA_DIR \
--output_dir=$OUTPUT_DIR \
--train_steps=250000
# Translate lines from a file
t2t-decoder \
--data_dir=$DATA_DIR \
--problems=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$OUTPUT_DIR \
--decode_from_file=$DECODE_FILE \
--decode_to_file=translation.en
A Tensor2Tensor Code Outline
• Create HParams
• Create RunConfig specifying devices
  - Create and include the Parallelism object in the RunConfig which enables data-parallel duplication of the model on multiple devices (for example, for multi-GPU synchronous training).
Tensorflow: A system for large-scale machine learning. M Abadi, P Barham, J Chen, Z Chen, A Davis, J Dean, M Devin, S Ghemawat, G Irving, M Isard, M Kudlur, J Levenberg, R Monga, S Moore, D G Murray, B Steiner, P Tucker, V Vasudevan, P Warden, M Wicke, Y Yu, X Zheng, 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16). Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, D. G., Steiner, B., Tucker, P., Vasudevan, V., Warden, P., Wicke, M., Yu, Y., and Zheng, X. (2016). Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265-283.
Neural machine translation by jointly learning to align and translate. D Bahdanau, K Cho, Y Bengio, abs/1409.0473CoRRBahdanau, D., Cho, K., and Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.
Learning phrase representations using RNN encoder-decoder for statistical machine translation. K Cho, B Van Merrienboer, C Gulcehre, F Bougares, H Schwenk, Y Bengio, abs/1406.1078CoRRCho, K., van Merrienboer, B., Gulcehre, C., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learn- ing phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078.
Convolutional sequence to sequence learning. J Gehring, M Auli, D Grangier, D Yarats, Y N Dauphin, abs/1705.03122CoRRGehring, J., Auli, M., Grangier, D., Yarats, D., and Dauphin, Y. N. (2017). Convolutional sequence to sequence learning. CoRR, abs/1705.03122.
The reversible residual network: Backpropagation without storing activations. A N Gomez, M Ren, R Urtasun, R B Grosse, abs/1707.04585CoRRGomez, A. N., Ren, M., Urtasun, R., and Grosse, R. B. (2017). The reversible residual network: Back- propagation without storing activations. CoRR, abs/1707.04585.
Long short-term memory. S Hochreiter, J Schmidhuber, Neural computation. 98Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8):1735- 1780.
Can active memory replace attention?. Ł Kaiser, S Bengio, Advances in Neural Information Processing Systems. Kaiser, Ł. and Bengio, S. (2016). Can active memory replace attention? In Advances in Neural Information Processing Systems, pages 3781-3789.
Depthwise separable convolutions for neural machine translation. L Kaiser, A N Gomez, F Chollet, abs/1706.03059CoRRKaiser, L., Gomez, A. N., and Chollet, F. (2017). Depthwise separable convolutions for neural machine translation. CoRR, abs/1706.03059.
Recurrent continuous translation models. N Kalchbrenner, P Blunsom, Proceedings EMNLP 2013. EMNLP 2013Kalchbrenner, N. and Blunsom, P. (2013). Recurrent continuous translation models. In Proceedings EMNLP 2013, pages 1700-1709.
Neural machine translation in linear time. N Kalchbrenner, L Espeholt, K Simonyan, A Van Den Oord, A Graves, K Kavukcuoglu, abs/1610.10099CoRRKalchbrenner, N., Espeholt, L., Simonyan, K., van den Oord, A., Graves, A., and Kavukcuoglu, K. (2016). Neural machine translation in linear time. CoRR, abs/1610.10099.
Encoding source language with convolutional neural network for machine translation. F Meng, Z Lu, M Wang, H Li, W Jiang, Q Liu, ACL. Meng, F., Lu, Z., Wang, M., Li, H., Jiang, W., and Liu, Q. (2015). Encoding source language with convolutional neural network for machine translation. In ACL, pages 20-30.
. N Parmar, A Vaswani, J Uszkoreit, Ł Kaiser, N Shazeer, A Ku, Image Transformer. ArXiv e-printsParmar, N., Vaswani, A., Uszkoreit, J., Kaiser, Ł., Shazeer, N., and Ku, A. (2018). Image Transformer. ArXiv e-prints.
Sequence to sequence learning with neural networks. I Sutskever, O Vinyals, Q V Le, Advances in Neural Information Processing Systems. Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.
A Van Den Oord, S Dieleman, H Zen, K Simonyan, O Vinyals, A Graves, N Kalchbrenner, A Senior, K Kavukcuoglu, abs/1609.03499WaveNet: A generative model for raw. van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., and Kavukcuoglu, K. (2016). WaveNet: A generative model for raw audio. CoRR, abs/1609.03499.
Attention is all you need. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, L Kaiser, I Polosukhin, CoRRVaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention is all you need. CoRR.
| [
"http://github.com/tensorflow/tensor2tensor)",
"http://github.com/tensorflow/tensor2tensor),"
] |
[
"Semi-supervised Classification for Natural Language Processing",
"Semi-supervised Classification for Natural Language Processing"
] | [
"Rushdi Shams rshams@csd.uwo.ca. \nDepartment of Computer Science\nUniversity of Western Ontario\nN6A 5B7LondonONCanada\n"
] | [
"Department of Computer Science\nUniversity of Western Ontario\nN6A 5B7LondonONCanada"
] | [] | Semi-supervised classification is an interesting idea where classification models are learned from both labeled and unlabeled data. It has several advantages over supervised classification in natural language processing domain. For instance, supervised classification exploits only labeled data that are expensive, often difficult to get, inadequate in quantity, and require human experts for annotation. On the other hand, unlabeled data are inexpensive and abundant. Despite the fact that many factors limit the wide-spread use of semi-supervised classification, it has become popular since its level of performance is empirically as good as supervised classification. This study explores the possibilities and achievements as well as complexity and limitations of semi-supervised classification for several natural langue processing tasks like parsing, biomedical information processing, text classification, and summarization. | null | [
"https://arxiv.org/pdf/1409.7612v1.pdf"
] | 2,181,647 | 1409.7612 | f3952ed854071b46b897f63fc1c3953a4ac33563 |
Semi-supervised Classification for Natural Language Processing
25 Sep 2014
Rushdi Shams rshams@csd.uwo.ca.
Department of Computer Science
University of Western Ontario
N6A 5B7, London, ON, Canada
Semi-supervised Classification for Natural Language Processing
25 Sep 2014. Keywords: semi-supervised learning, classification, natural language processing, data mining
Semi-supervised classification is an interesting idea where classification models are learned from both labeled and unlabeled data. It has several advantages over supervised classification in the natural language processing domain. For instance, supervised classification exploits only labeled data that are expensive, often difficult to get, inadequate in quantity, and require human experts for annotation. On the other hand, unlabeled data are inexpensive and abundant. Despite the fact that many factors limit the wide-spread use of semi-supervised classification, it has become popular since its level of performance is empirically as good as supervised classification. This study explores the possibilities and achievements as well as complexity and limitations of semi-supervised classification for several natural language processing tasks like parsing, biomedical information processing, text classification, and summarization.
I. INTRODUCTION
Classical supervised methods use labeled data to train their classifier models. These methods are widespread and used in many different domains, including natural language processing. The key material used in natural language processing tasks is text. The amount of text, however, is increasing every day due to the pervasive use of computing. There is far more unlabeled than labeled text, since data labeling is expensive because it requires human annotators, and the annotation process is also time-consuming. These difficulties have serious effects on supervised learning, since a good fit of a classifier model requires as much labeled data as possible for its training [1].
Semi-supervised learning can be a good means to overcome the aforementioned problems. The basic principle of semi-supervised learning is simple: use both unlabeled and labeled data to generate classifier models. This makes semi-supervised learning substantially useful for a domain like natural language processing because, notably, unlabeled text is inexpensive, abundant, and more available than labeled text. In addition, semi-supervised models have good empirical performance records. On many tasks they are as good as supervised models, and in most cases they are better than cluster-based, unsupervised models [2]. However, many of these results are not conclusive, and proper care should therefore be taken due to some serious concerns related to semi-supervised learning.
This study explores the use of semi-supervised learning for natural language processing. Interestingly, like traditional machine learning methods, semi-supervised learning can be used to solve classification, regression, and clustering problems. The particular focus of this study, however, is on semi-supervised classification. In this study, popular research papers and classic books are explored to outline the possibilities and achievements of semi-supervised classification for natural language processing. In addition, thorough investigations are carried out, on both theoretical and empirical grounds, to explain the complexity and limitations of this method. Natural language processing is one of the largest research areas of artificial intelligence. The scope of this study is, however, limited to the most popular tasks such as parsing, biomedical information processing, text classification, and summarization.
The organization of the paper is as follows. Section II presents an overview of semi-supervised learning that includes learning problems and different types of semi-supervised algorithms and learning techniques. Following that, Section III outlines the use of semi-supervised classification for different natural language processing tasks. In Section IV, several considerations and conclusions are drawn.
II. OVERVIEW OF SEMI-SUPERVISED LEARNING
Unlike supervised and unsupervised learning, semi-supervised learning exploits both labeled and unlabeled data. To start with, semi-supervised methods train models with very little labeled data. Surprisingly, test results show that marginal labeled data are sufficient to train models with a good fit for semi-supervised learning [3]. The generated models are then applied on unlabeled data in an attempt to label them. The confidence of the models in labeling them is measured against a confidence threshold set a priori by users. Note that learning algorithms often have their own confidence measures, which generally depend on their working principles. For instance, class probability values for each data instance are considered as confidence measures for Naïve Bayes models [4]. For an unlabeled data point, if the models reach the pre-set confidence threshold, then the newly labeled data are added to the pool of originally labeled data. This process continues unless (i) the models' confidences for the labels stop reaching the threshold, or (ii) the models confidently label all the unlabeled data and there are no unlabeled data remaining in the dataset. The interesting cycle of labeling and re-labeling of semi-supervised learning is illustrated in Figure 1.
Fig. 2: Supervised and semi-supervised decision boundaries drawn by a random classifier for two labeled and 100 unlabeled data points [2]. (b) shows both the supervised and semi-supervised decision boundaries for the labeled and unlabeled data. In 2a, the supervised decision boundary lies in the middle, obtained by averaging the values of the data points. In 2b, the supervised decision boundary produces more classification errors due to the distribution of the data.
A. Learning Problems
Semi-supervised learning problems can be broadly categorized into two groups: (i) transductive learning and (ii) inductive learning. Transductive learning is like a take-home exam: this group of semi-supervised learning evaluates the goodness of a model assumption on the unlabeled data after training a classifier with the available labeled data. Inductive learning, on the contrary, is often seen as an in-class exam: it evaluates the goodness of a model assumption on unseen, unlabeled test data after training a classifier with both labeled and unlabeled data. Figure 1 shows the boundaries between these two types of semi-supervised learning. While the entire cycle in the figure illustrates inductive learning, steps 1-3 describe transductive learning.
B. Working Principle
Figure 2 illustrates how semi-supervised learning works with very few labeled but abundant unlabeled data. Figure 2a shows that, based on the position of a positive (x = 1) and a negative (x = −1) labeled data point, a supervised decision boundary is drawn right at x = 0, the average of the two data points. However, given only these two labeled data points and 100 unlabeled data points (represented by green dots in Figure 2b), this supervised decision boundary still remains at x = 0. In contrast, had a semi-supervised classifier been used, the boundary would have shifted further to the right (say, to some point around x = 0.4). This shift is due to the distribution of the unlabeled data points relative to the positions of the positive and negative examples. In this particular case, the semi-supervised classifier assumes that the green dots near the red cross form one data distribution while the green dots near the blue circle form a different one. Interestingly, semi-supervised learning fails in many intriguing cases where the distributions of labeled and unlabeled data are not as distinguishable as in Figure 2.
C. Types of Algorithms
There are several semi-supervised algorithms and most of them can be categorized into two groups based on their properties: (i) generative algorithms and (ii) discriminative algorithms. The models generated by these two types of algorithms are therefore called generative and discriminative models, respectively. The following example explains the key difference between the two types of models. Say we are given a set of speeches given by human presenters, as well as a set of languages. The task is simply to classify every speech into one of the languages. This learning problem can be solved in either of the following two ways: first, the learner learns each language and then attempts to classify the speeches according to its learning; second, the learner learns the differences among the speeches according to various attributes or features present in them and then attempts to classify the speeches accordingly. Note that in the second case, the learner does not need to learn all the languages. The former is called a generative learner and the latter is known as a discriminative learner. Let us look at these two types of algorithms mathematically. Say we are given a set of instances x and their classes y in the form of (x, y): (1, 0), (1, 0), (2, 0), (2, 1). Generative algorithms attempt to find the joint probability p(x, y) from these data (see Figure 3a), while discriminative algorithms calculate the conditional probability p(y|x) (Figure 3b).
Fig. 3: Probability distribution of the data as seen by generative and discriminative algorithms. (a) Joint probability calculated by generative algorithms. (b) Conditional probability calculated by discriminative algorithms. Generative algorithms calculate the joint probability distribution of the data while discriminative algorithms deal with their conditional probability.
Now, for supervised algorithms, a discriminative model predicts the label y from a training example x as follows:
f(x) = \arg\max_y p(y|x).    (1)
However, from Bayes' theorem, we know that
p(y|x) = \frac{p(x|y) p(y)}{p(x)}.    (2)
However, in Equation 1, p(x) can be ignored, since p(x) does not depend on y and therefore does not affect which y maximizes the expression. Ignoring p(x) in Equation 2 thus gives us
f(x) = \arg\max_y p(x|y) p(y).    (3)
Interestingly, Equation 3 is what supervised, generative algorithms use to induce their models. In other words, for supervised algorithms, Equation 1 is used to find class boundaries based on the given training instances x, and Equation 3 is used to generate x for any given y. The latter, however, is not obtained as easily for semi-supervised algorithms as for supervised algorithms. The first and foremost reason is that in semi-supervised problems the algorithms cannot completely ignore p(x), because most of what they have is the distribution of the training examples (i.e., p(x)). Moreover, for semi-supervised algorithms only a few class labels are provided for the training examples, and therefore the conditional probabilities p(x|y) are difficult to estimate from the few given y's. This is a key difference between supervised and semi-supervised algorithms. For semi-supervised algorithms, Equation 1 can be substituted by
p(y|x) = \frac{p(x|y) p(y)}{\sum_{y'} p(x|y') p(y')},    (4)
where y' ranges over the classes of the few given training examples x. Equation 4 has a probability density function p(x|y) in its numerator. If the distribution of x comes from a Gaussian, so that the density is a function of the mean vector and covariance matrix of the Gaussian, then using a Maximum Likelihood Estimate the mean vector and covariance matrix can be tuned to maximize the density function. Thereafter, this tuning can be optimized using an Expectation-Maximization (EM) algorithm. Note that, depending on the distribution of x, different algorithms use different techniques for tuning and optimizing the density function p(x|y) in Equation 4. Among the semi-supervised algorithms, the Transductive Support Vector Machine (TSVM) and graph-based methods are generative algorithms, while EM and self-learning are discriminative algorithms.
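As a concrete illustration (added here for exposition, not part of the original study), the short Python sketch below estimates the joint probability p(x, y) and, via Bayes' rule as in Equation 4, the conditional probability p(y|x) from the toy data (1, 0), (1, 0), (2, 0), (2, 1) introduced above; all variable names are arbitrary.

```python
from collections import Counter

# Toy labeled data (x, y) from the running example: (1,0), (1,0), (2,0), (2,1)
data = [(1, 0), (1, 0), (2, 0), (2, 1)]
n = len(data)

# Generative view: estimate the joint distribution p(x, y) by counting.
joint = {pair: count / n for pair, count in Counter(data).items()}
print("p(x, y):", joint)            # e.g., p(x=1, y=0) = 0.5, p(x=2, y=1) = 0.25

# Class prior p(y) and class-conditional p(x | y), the pieces used in Equation 3.
p_y = {y: count / n for y, count in Counter(y for _, y in data).items()}
p_x_given_y = {(x, y): joint[(x, y)] / p_y[y] for (x, y) in joint}

# Discriminative quantity p(y | x) via Bayes' rule (Equation 4):
# p(y | x) = p(x | y) p(y) / sum_y' p(x | y') p(y').
def p_y_given_x(x, y):
    numer = p_x_given_y.get((x, y), 0.0) * p_y[y]
    denom = sum(p_x_given_y.get((x, y2), 0.0) * p_y[y2] for y2 in p_y)
    return numer / denom

print("p(y=0 | x=2):", p_y_given_x(2, 0))   # 0.5
print("p(y=1 | x=2):", p_y_given_x(2, 1))   # 0.5
```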
D. Types of Learning
Semi-supervised learning can be broadly categorized into three types: (i) self-training, (ii) co-training, and (iii) active learning. 1) Self-training: In self-training, a classifier C_1 is generated from a set of initially labeled data L. This classifier is then applied to a set of initially unlabeled data U. According to a pre-set confidence threshold, the classifications of the unlabeled data are observed. If the classifier's confidence reaches the threshold, the newly classified instances are concatenated with L to produce a set L_new and removed from U to produce U_new. A second classifier C_2 is generated from L_new and thereafter applied to U_new. This cycle continues until the classifier converges, which means that either (a) all the unlabeled data are confidently labeled by the classifier or (b) the classifier's confidence stops reaching the threshold for several cycles. Self-training is very simple and particularly useful if the supervised algorithm is difficult to modify. Nonetheless, self-training performs poorly for datasets that contain a large number of outliers.
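A minimal sketch of this loop is given below, assuming a scikit-learn-style classifier with fit and predict_proba and a fixed confidence threshold; the function name, threshold, and round limit are illustrative assumptions, not taken from any cited system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, threshold=0.9, max_rounds=20):
    """Grow the labeled pool L with confidently pseudo-labeled examples from U."""
    X_l, y_l, X_u = X_labeled.copy(), y_labeled.copy(), X_unlabeled.copy()
    clf = LogisticRegression().fit(X_l, y_l)
    for _ in range(max_rounds):
        if len(X_u) == 0:                          # (a) all unlabeled data confidently labeled
            break
        proba = clf.predict_proba(X_u)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():                    # (b) confidence stops reaching the threshold
            break
        pseudo = clf.classes_[proba.argmax(axis=1)]
        X_l = np.vstack([X_l, X_u[confident]])     # move confident examples from U to L
        y_l = np.concatenate([y_l, pseudo[confident]])
        X_u = X_u[~confident]
        clf = LogisticRegression().fit(X_l, y_l)   # the next classifier C_2, C_3, ...
    return clf
```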
2) Co-training: In contrast to self-training, for co-training two partitions L_1 and L_2 are created from the initially labeled data L. The partitions are based on two different sets of attributes or features, V_1 and V_2 (in the semi-supervised literature, these are often referred to as views). Two classifiers then independently generate respective models F_1 and F_2 from L_1 and L_2 using V_1 and V_2. Following that, from the unlabeled data pool U, the k most confident predictions of F_1 are added to L_2 and the k most confident predictions of F_2 are added to L_1. These added examples are removed from U. F_1 is re-trained with L_1 and F_2 is re-trained with L_2. This cycle continues until the classifiers converge. Finally, test data are classified using a voting or averaging method. Note that co-training can be seen as self-training with two or more classifiers. Co-training is very useful if the attributes or features naturally split into two distinguishable sets. However, two important conditions should be met for co-training to work: given enough labeled data, 1) each view alone should be sufficient to make good classifications, and 2) the co-trained algorithms should individually perform well.
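The exchange of confident predictions between the two views can be sketched as below; the view matrices, classifier choice (Naïve Bayes), and value of k are placeholder assumptions used only for illustration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(L1, L2, y, U1, U2, k=5, rounds=10):
    """L1/L2: labeled data under views V1/V2; U1/U2: the same unlabeled pool under V1/V2."""
    X1, y1, X2, y2 = L1.copy(), y.copy(), L2.copy(), y.copy()
    for _ in range(rounds):
        f1 = GaussianNB().fit(X1, y1)
        f2 = GaussianNB().fit(X2, y2)
        if len(U1) == 0:
            break
        p1, p2 = f1.predict_proba(U1), f2.predict_proba(U2)
        top1 = np.argsort(p1.max(axis=1))[-k:]     # k most confident predictions of F1
        top2 = np.argsort(p2.max(axis=1))[-k:]     # k most confident predictions of F2
        X2 = np.vstack([X2, U2[top1]])             # F1's confident examples augment L2
        y2 = np.concatenate([y2, f1.classes_[p1.argmax(axis=1)][top1]])
        X1 = np.vstack([X1, U1[top2]])             # F2's confident examples augment L1
        y1 = np.concatenate([y1, f2.classes_[p2.argmax(axis=1)][top2]])
        keep = np.setdiff1d(np.arange(len(U1)), np.union1d(top1, top2))
        U1, U2 = U1[keep], U2[keep]                # remove the exchanged examples from U
    return f1, f2                                  # at test time, vote or average over f1 and f2
```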
3) Active Learning: Finally, in active learning a model is generated from the labeled data and attempts to classify unlabeled instances. The classifications it makes are then provided to a human expert, called the oracle, for judgment. The instances labeled correctly according to the oracle are then added to the pool of labeled data, while the instances with incorrect labels remain in the unlabeled data pool. This process continues until the unlabeled data pool becomes empty. Active learning is very useful when the available data (both labeled and unlabeled) are limited. Because of the presence of an oracle, this form of semi-supervised learning is slow and almost always expensive.
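The oracle-in-the-loop procedure described above can be sketched as follows, with the oracle modeled as a callable returning the true label of an instance; note that in practice active learners usually query the instances the model is least confident about, which differs slightly from the verification-style loop described here. All names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def oracle_loop(X_l, y_l, X_u, oracle, max_rounds=50):
    """oracle: a callable standing in for the human expert, returning the true label of x."""
    clf = LogisticRegression().fit(X_l, y_l)
    for _ in range(max_rounds):
        if len(X_u) == 0:
            break
        preds = clf.predict(X_u)
        truth = np.array([oracle(x) for x in X_u])
        correct = preds == truth
        if correct.any():
            X_l = np.vstack([X_l, X_u[correct]])   # verified instances join the labeled pool
            y_l = np.concatenate([y_l, truth[correct]])
            X_u = X_u[~correct]                    # incorrectly predicted instances stay unlabeled
            clf = LogisticRegression().fit(X_l, y_l)
    return clf
```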
III. SEMI-SUPERVISED CLASSIFICATION FOR NATURAL LANGUAGE PROCESSING
In this section, different natural language processing applications of semi-supervised classification are discussed. The discussion is mainly based on the findings of several classic and state-of-the-art studies in the domains of parsing, text classification, text summarization, and biomedical information mining.
A. Parsing
Steedman et al. [5] found that self-training has very small effects on parser improvement. Similar results are reported by Clark et al. [6], who applied self-training to part-of-speech (POS) tagging. Very few works have reported successful applications of self-training to improve parsers [7] [8]. This paper concentrates on the work of McClosky et al. because they do not adapt the parser in use, since adaptation has some drastic effects on self-training. Rather than using an adaptive parser, the Charniak parser used in their research utilized both labeled and unlabeled data that come from the same source domain. The use of a re-ranker besides the parser is also what makes their work different from much contemporary work. The parser uses a third-order Markov grammar and five probability distributions that are conditioned on more than five linguistic attributes. First, the parser produces the 50-best parses for the sentences in the datasets. Second, a maximum entropy re-ranker with over a million attributes re-ranks these parses. The experiment is extensive: the datasets used are Penn Treebank sections 2-21 for training (approximately 40,000 Wall Street Journal sentences), section 23 for testing, and section 24 for held-out data. 24 million LA Times sentences collected from the North American News Text Corpus (NANC) were used as unlabelled data. The authors experimented with and without the re-ranker as they added unlabelled sentences to their labeled data pool. They found that the parser performs better with the re-ranker system. The reported improvement is about 1.1% F-score, of which the self-trained parser contributes 0.8% and the re-ranker contributes 0.3%. The authors also did some experiments with sentences in sections 1, 22, and 24 to see how the self-trained parser performs at the sentence level. Each sentence in these sections was labelled as better, no change, or worse compared to the baseline F-score for the sentences. Interestingly, the outcomes showed that the parser improved neither for unknown words nor for prepositional phrases. However, there was an explicit improvement for intermediate-length sentences but no improvement for the extremes (a Goldilocks effect). The parser performs poorly for conjunctions.
Zhu [9], however, asserted that in semi-supervised classification, unlabeled sentences for which the parser accuracy is unusually better than normal should be restricted from being included in the pool of labeled data. McClosky et al. [7], however, stated that they did not particularly follow this approach. The speed of the semi-supervised Charniak parser is similar to its supervised version, but it requires more memory to execute the cycles involved in self-training. Also, the labeled and unlabeled data were collected from two different datasets (although both are newspaper sources), which usually limits the success of self-training. Nevertheless, the experiment is a success, and this question is left unanswered in their paper.
B. Text Classification
Semi-supervised classification has been used widely in natural language processing tasks such as spam classification, which is a form of text classification. The results of the 2006 ECML/PKDD spam discovery challenge [10] indicated that spam filters based on semi-supervised classification outperformed supervised filters. Extensive experiments showed that semi-supervised filters work better when the source of the available labeled examples differs from that of the examples to be classified. Interestingly, Mojdeh and Cormack [11] found completely different results when they re-designed the challenge with different collections of email datasets.
The 2006 ECML/PKDD discovery challenge had two interesting tasks. The first task is called Delayed Feedback, where the filters are trained with emails T_1 and then classify some test emails t_1. In the second cycle of training, they are trained with T_1 and t_1 combined, and the training continues for the entire challenge dataset. The best (1−AUC) reported in the challenge is a remarkable 0.01%. The second task is called Cross-user Train, where the classifiers are trained on one particular set of emails and then tested on a completely different set of emails. The best (1−AUC) reported for this task is higher than for the first task: 0.1%. The best performing filters in the challenge were all semi-supervised filters, based on support vector machines (SVM and TSVM) [12], Dynamic Markov Compression (DMC) [13], and logistic regression with self-training (LR) [14]. On the other hand, in the 2007 TREC Spam Track Challenge [15], the participating spam filters were trained with publicly available emails and their model accuracy was tested on emails collected from user inboxes (i.e., personalized emails). In an attempt to see whether semi-supervised filters perform as well as reported in [10], Mojdeh and Cormack [11] reproduced the work by replacing the datasets of the ECML/PKDD challenge with the TREC challenge datasets. The delayed feedback task was reproduced as follows: first, 10,000 messages were used for training and the next 60,000 messages were divided into six batches, each containing 10,000 messages. Second, the remaining 5,000 messages were kept for testing the models. On the other hand, to reproduce the Cross-user Train task, 30,338 messages from particular user inboxes were used for training while 45,081 messages from other users were used for model evaluation.
The experimental outcomes showed that for both tasks, the semi-supervised versions of DMC, LR, and TSVM underperformed on the TREC dataset. Their respective (1−AUC) scores for the delayed feedback task were 0.090, 0.046, and 0.230. On the other hand, the (1−AUC) scores of their supervised versions were 0.016, 0.049, and 0.030 for the task. For the cross-user task, the (1−AUC) scores of the semi-supervised DMC, LR, and TSVM filters were 9.97, 10.72, and 24.3, respectively. For the same task, their supervised versions performed much better. The authors also reported a cross-corpus experiment to reproduce the results of the ECML/PKDD challenge. Here, the first 10,000 messages from the TREC 2005 dataset were considered. In addition, the TREC 2007 dataset was split into 10,000-message segments. The outcomes again showed that self-training is harmful for the filters. Except for the TSVM filter, the other two semi-supervised filters failed to perform as well as their supervised versions.
Keeping the aforementioned results in mind, we can say that semi-supervised classification is applicable to text classification but the performance depends on the labeled and unlabeled training data, and the source from which the data are derived.
C. Extractive Text Summarization
Wong et al. [16] conducted a comparative study where they produced extractive summaries using both supervised and semi-supervised classifiers. The authors used four traditional groups of attributes to train their classifiers: (1) surface, (2) relevance, (3) event, and (4) content attributes. They tried different combinations of the attributes and found that the classifiers produced better summaries when the surface, relevance, and content attributes were combined. The novelty of their work is that they used a supervised SVM as well as its semi-supervised version, called probabilistic SVM or PSVM, to generate classifiers and compared their performances. As the performance measure they considered ROUGE scores and found that the ROUGE-1 score of their SVM classifier is 0.396, while the human ROUGE-1 was 0.42 when compared to the gold standard summaries. On the other hand, co-training with the PSVM and Naïve Bayes classifiers produced summaries with a ROUGE-1 of 0.366. Although this performance is not better than what they found with the supervised SVM or human summaries, it was better than the supervised PSVM and Naïve Bayes classifiers. As their dataset, the authors used the DUC 2001 dataset 1 . The dataset contains 30 clusters of relevant documents. Each cluster comes with model summaries created by the dataset annotators; 50, 100, 200, and 400-word summaries are provided for each cluster. Among the clusters, 25 are used as training data while the remaining 5 clusters are used for testing. The authors also concluded that the ROUGE-1 scores of their classifier are better if they produce 400-word summaries for the test clusters.
Nevertheless, the reported methodology of the paper has some serious drawbacks. Many of the methods used in this research are not in line with what has been found by classic empirical studies. For instance, the co-training is done on the same attribute space, which violates the primary hypothesis of co-training: the two classifiers used in co-training should use separate views (see Section II-D2). Secondly, the authors selected the set of attributes (surface, relevance, and content attributes) by only considering the performance of PSVM with them, ignoring the performance of the supervised Naïve Bayes with them.
D. Biomedical Information Mining
Nowadays, there is much impetus for information mining from biomedical research papers. Researchers put significant effort into mining secondary information, such as protein interaction relations, from biomedical research papers to help identify primary information like DNA replication, genotype-phenotype relations, and signaling pathways. The first and foremost task for protein interaction relation miners is to classify sentences in research papers that describe one or more protein interactions. These sentences are called candidate sentences. A number of supervised tools have been developed to classify candidate sentences from biomedical articles (see, for example, [17], [18], and [19]). However, the first semi-supervised approach for the task was reported by Erkan et al. [20]. Their approach identified candidate sentences using similarities of the paths between two protein names found in the dependency parses of the sentences. What follows is a brief description of their method. The authors produced dependency trees for each sentence from two given datasets. The paths between two protein names in the parse trees were then analyzed. According to these paths, the sentences were labeled and treated as the gold standard for the tool's evaluation. Given the paths, two distance-based measures, cosine similarity and edit distance, were used by their tool to find interactions between the proteins. These measures were provided to both supervised and semi-supervised algorithms to generate models to classify the sentences in the datasets. The labels predicted by the supervised and semi-supervised classifiers were then evaluated against the gold standard. According to the outcomes, the semi-supervised classifiers performed better than their supervised versions by a wide margin. Four algorithms were used to generate the classifiers, among which two are supervised (SVM and K-Nearest Neighbor (KNN)) and the rest were their respective semi-supervised versions (TSVM and Harmonic Functions). The distance-based measures were used to generate attributes for the classifiers and were extracted from two datasets named AIMED and Christine-Brun (CB). The AIMED dataset contains 4,026 sentences of which 951 describe protein interactions, while the CB dataset is composed of 4,056 sentences of which 2,202 describe protein interactions. Each of the four algorithms then generated a classifier from the two sets of attributes derived from the two distance measures. Experimental outcomes show that for the AIMED dataset, TSVM with edit distance attributes performed the best, with a 59.96% F-score. This F-score was significantly better than the F-scores found using the supervised classifiers. Comparisons showed that the F-score with TSVM was significantly better than those reported by two contemporary works [18] [21]. On the other hand, the tool performed even better on the CB dataset, where its TSVM classifier with edit distance based attributes produced an F-score of 85.20%. Similar to the result found with the AIMED dataset, the performances of the supervised classifiers were not satisfactory. The authors also examined the effect of the size of the labeled training data on the classifiers. In the case of AIMED, the authors found that with small labeled training data, semi-supervised algorithms were better. In addition, SVM performed poorly with less training data, but as more data became available for its training, it started to perform well.
On the other hand, for the CB dataset, KNN performed poorly even with much labeled data. Interestingly, SVM performed competitively with the semi-supervised classifiers when more labeled data were available.
Note that TSVM is susceptible to the distribution of the labeled data. However, the work did not report any test on the data distribution. The AIMED dataset, in addition, has a class imbalance problem that seriously affects the performance of TSVM classifiers. This can be seen as a limitation of the work, since it did not explain why, in their case, the TSVM classifier performed better than the rest.
IV. CONCLUSIONS
The findings of empirical research on parsing, text classification, text summarization, and biomedical information mining are investigated in this study. According to most of them, semi-supervised classification has substantial advantages over supervised classification when labeled data are difficult to manage and unlabeled data are abundant. This paper also outlines the theories behind the success of semi-supervised classification. According to the theories, there is no free lunch for semi-supervised classification; rather, its success depends on the underlying data distribution, data complexity, model assumption, choice of a proper algorithm, the problem in hand, and, most of all, experience. Surprisingly, the investigation has found that the classic studies often do not consider the do's and don'ts suggested by the theories. Despite the success reported in the empirical studies, it is therefore inconclusive whether semi-supervised classification can really be as useful as supervised classification.
Fig. 4: The use of labeled and unlabeled data in semi-supervised classification. A dot represents a paper that uses semi-supervised classification. Light gray dots mean older papers while dark gray dots mean newer papers [9].
The complexity associated with semi-supervised classification limits its use. This can be seen from the illustration in Figure 4. It shows the use of labeled and unlabeled data in semi-supervised classification. Each dot in the illustration represents a paper that uses semi-supervised classification. While the light gray dots represent older papers, the dark gray dots represent recent papers. We can come to two conclusions from these data: 1) there is not much reported work that implements semi-supervised classification, and the bulk of the reported work is old; and 2) although the main motivation for using semi-supervised classification is the abundance of unlabeled data, the amount of unlabeled data used in research is at most 10^6 so far; in layman's terms, just above the number of people in a stadium.
Nevertheless, semi-supervised classification is, until now, the only option for dealing with natural language processing problems where there are more unlabeled than labeled data. This study, however, points out the following suggestions for dealing with semi-supervised classification more effectively:
1) The model assumption for semi-supervised algorithms must match the problem in hand. For instance, if the classes produce well-clustered data, then expectation-maximization is a good algorithm to choose; if the attribute space can be naturally split into two sets, then co-training is preferred; if two points with similar attribute values tend to be in the same class, then a graph-based method (not discussed in this paper) can be a reasonable choice; if SVM performs well on labeled data, then TSVM is a natural extension; and if the supervised algorithm is complicated and difficult to modify, self-training is useful.
2) The distributions of both labeled and unlabeled data need to be investigated. TSVM, for instance, performs poorly on unlabeled data whose positive and negative distributions overlap heavily, since it assumes that its decision boundary should pass through a low-density region, which does not exist between heavily overlapping classes. Therefore, in this case a TSVM classifier usually produces a lot of false positives and false negatives.
3) The proportion of labeled and unlabeled data is important to consider before choosing an algorithm. However, there is no conclusive remark on how this proportion affects the overall classification performance. 4) It has been found empirically that dependency among attributes has an effect on semi-supervised classification. To be more specific, with fewer labeled examples, the number of dependent attributes should be kept as low as possible. 5) Data noise should be investigated, as it has an effect on classification performance. It is easier to detect noise in labeled data than in unlabeled data. Note that data noise has less effect on semi-supervised classification than on supervised classification. 6) The labeled and unlabeled data are usually collected from different sources, and this can affect the classification performance. If the labeled and unlabeled data are collected from completely different sources and their properties differ, then rather than semi-supervised classification, transfer learning and self-taught classification are encouraged [22].
Fig. 1: The overview of semi-supervised learning. The figure also outlines the scope of transductive and inductive learning.
Fig. 2(a): Supervised decision boundaries for labeled data.
Download at: http://duc.nist.gov
REFERENCES
[1] L. Shih, J. D. Rennie, Y.-H. Chang, and D. R. Karger, "Text bundling: Statistics based data-reduction," in ICML, T. Fawcett and N. Mishra, Eds. AAAI Press, 2003, pp. 696-703.
[2] X. Zhu, A. B. Goldberg, R. Brachman, and T. Dietterich, Introduction to Semi-Supervised Learning. Morgan and Claypool Publishers, 2009.
[3] O. Chapelle, B. Schlkopf, and A. Zien, Semi-Supervised Learning, 1st ed. The MIT Press, 2010.
[4] G. H. John and P. Langley, "Estimating continuous distributions in bayesian classifiers," in Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence (UAI'95). San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 1995, pp. 338-345.
[5] M. Steedman, M. Osborne, A. Sarkar, S. Clark, R. Hwa, J. Hockenmaier, P. Ruhlen, S. Baker, and J. Crim, "Bootstrapping statistical parsers from small datasets," in Proceedings of the Tenth Conference of the European Chapter of the Association for Computational Linguistics (EACL '03), Volume 1. Association for Computational Linguistics, 2003, pp. 331-338.
[6] S. Clark, J. R. Curran, and M. Osborne, "Bootstrapping POS taggers using unlabelled data," in Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 (CONLL '03), Volume 4. Association for Computational Linguistics, 2003, pp. 49-55.
[7] D. McClosky, E. Charniak, and M. Johnson, "Effective self-training for parsing," in Proc. N. American ACL (NAACL), 2006, pp. 152-159.
[8] M. Bacchiani, M. Riley, B. Roark, and R. Sproat, "MAP adaptation of stochastic grammars," Comput. Speech Lang., vol. 20, no. 1, pp. 41-68, Jan. 2006.
[9] X. Zhu, "Semi-supervised learning literature survey," Computer Sciences, University of Wisconsin-Madison, Tech. Rep. 1530, 2005.
[10] S. Bickel, "ECML-PKDD discovery challenge 2006 overview," in Proceedings of the ECML-PKDD Discovery Challenge Workshop, 2006.
[11] M. Mojdeh and G. V. Cormack, "Semi-supervised spam filtering: does it work?" in SIGIR, S.-H. Myaeng, D. W. Oard, F. Sebastiani, T.-S. Chua, and M.-K. Leong, Eds. ACM, 2008, pp. 745-746.
[12] T. Joachims, "Making large-scale support vector machine learning practical," in Advances in Kernel Methods, B. Schölkopf, C. J. C. Burges, and A. J. Smola, Eds. Cambridge, MA, USA: MIT Press, 1999, pp. 169-184.
[13] A. Bratko, G. V. Cormack, B. Filipic, P. Chan, and T. R. Lynam, "Spam filtering using statistical data compression models," Journal of Machine Learning Research, vol. 7, pp. 2673-2698, 2006.
[14] G. V. Cormack, "Harnessing unlabeled examples through iterative application of dynamic markov modeling," in Proceedings of the ECML-PKDD Discovery Challenge Workshop, 2006.
[15] G. Cormack, "TREC 2006 spam track overview," in Proceedings of TREC 2006, 2006.
[16] K.-F. Wong, M. Wu, and W. Li, "Extractive summarization using supervised and semi-supervised learning," in Proceedings of the 22nd International Conference on Computational Linguistics (COLING '08), Volume 1. Association for Computational Linguistics, 2008, pp. 985-992.
[17] I. M. Donaldson, J. D. Martin, B. de Bruijn, C. Wolting, V. Lay, B. Tuekam, S. Zhang, B. Baskin, G. D. Bader, K. Michalickova, T. Pawson, and C. W. V. Hogue, "PreBIND and Textomy: mining the biomedical literature for protein-protein interactions using a support vector machine," BMC Bioinformatics, vol. 4, p. 11, 2003.
[18] T. Mitsumori, M. Murata, Y. Fukuda, K. Doi, and H. Doi, "Extracting protein-protein interaction information from biomedical text with SVM," IEICE Transactions, vol. 89-D, no. 8, pp. 2464-2466, 2006.
[19] K. Sugiyama, K. Hatano, M. Yoshikawa, and S. Uemura, "Extracting information on protein-protein interactions from biological literature based on machine learning approaches," Genome Informatics Series, pp. 699-700, 2003.
[20] G. Erkan, A. Özgür, and D. Radev, "Semi-supervised classification for extracting protein interaction sentences using dependency parsing," in Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), 2007, pp. 228-237.
[21] A. Yakushiji, Y. Miyao, Y. Tateisi, and J. Tsujii, "Biomedical information extraction with predicate-argument structure patterns," in Proceedings of the 11th Annual Meeting of the Association for Natural Language Processing, 2005, pp. 60-69.
[22] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng, "Self-taught learning: Transfer learning from unlabeled data," in Proceedings of the 24th International Conference on Machine Learning (ICML '07). New York, NY, USA: ACM, 2007, pp. 759-766.
| [] |
[
"DECENTRALIZED KNOWLEDGE GRAPH REPRESENTATION LEARNING",
"DECENTRALIZED KNOWLEDGE GRAPH REPRESENTATION LEARNING"
] | [
"Lingbing Guo ",
"Weiqing Wang ",
"Zequn Sun ",
"Chenghao Liu ",
"Wei Hu ",
"\nDepartment of Data Science &\nSchool of Information Systems\nState Key Laboratory for Novel Software Technology Nanjing University\nState Key Laboratory for Novel Software Technology Nanjing University\nAI Monash University\nSingapore\n",
"\nState Key Laboratory for Novel Software Technology Nanjing University\nManagement University\n\n"
] | [
"Department of Data Science &\nSchool of Information Systems\nState Key Laboratory for Novel Software Technology Nanjing University\nState Key Laboratory for Novel Software Technology Nanjing University\nAI Monash University\nSingapore",
"State Key Laboratory for Novel Software Technology Nanjing University\nManagement University\n"
] | [] | Knowledge graph (KG) representation learning methods have achieved competitive performance in many KG-oriented tasks, among which the best ones are usually based on graph neural networks (GNNs), a powerful family of networks that learns the representation of an entity by aggregating the features of its neighbors and itself. However, many KG representation learning scenarios only provide the structure information that describes the relationships among entities, causing that entities have no input features. In this case, existing aggregation mechanisms are incapable of inducing embeddings of unseen entities as these entities have no pre-defined features for aggregation. In this paper, we present a decentralized KG representation learning approach, decentRL, which encodes each entity from and only from the embeddings of its neighbors. For optimization, we design an algorithm to distill knowledge from the model itself such that the output embeddings can continuously gain knowledge from the corresponding original embeddings. Extensive experiments show that the proposed approach performed better than many cutting-edge models on the entity alignment task, and achieved competitive performance on the entity prediction task. Furthermore, under the inductive setting, it significantly outperformed all baselines on both tasks. | null | [
"https://arxiv.org/pdf/2010.08114v1.pdf"
] | 223,953,343 | 2010.08114 | 5cae66c471e1744719c3cb10c2a0ba4cbee2d13d |
DECENTRALIZED KNOWLEDGE GRAPH REPRESENTATION LEARNING
Lingbing Guo
Weiqing Wang
Zequn Sun
Chenghao Liu
Wei Hu
Department of Data Science &
School of Information Systems
State Key Laboratory for Novel Software Technology Nanjing University
State Key Laboratory for Novel Software Technology Nanjing University
AI Monash University
Singapore
State Key Laboratory for Novel Software Technology Nanjing University
Management University
DECENTRALIZED KNOWLEDGE GRAPH REPRESENTATION LEARNING
Knowledge graph (KG) representation learning methods have achieved competitive performance in many KG-oriented tasks, among which the best ones are usually based on graph neural networks (GNNs), a powerful family of networks that learns the representation of an entity by aggregating the features of its neighbors and itself. However, many KG representation learning scenarios only provide the structure information that describes the relationships among entities, causing that entities have no input features. In this case, existing aggregation mechanisms are incapable of inducing embeddings of unseen entities as these entities have no pre-defined features for aggregation. In this paper, we present a decentralized KG representation learning approach, decentRL, which encodes each entity from and only from the embeddings of its neighbors. For optimization, we design an algorithm to distill knowledge from the model itself such that the output embeddings can continuously gain knowledge from the corresponding original embeddings. Extensive experiments show that the proposed approach performed better than many cutting-edge models on the entity alignment task, and achieved competitive performance on the entity prediction task. Furthermore, under the inductive setting, it significantly outperformed all baselines on both tasks.
INTRODUCTION
Knowledge graphs (KGs) support many data-driven applications (Ji et al., 2020). Recently, learning low-dimensional representations (a.k.a. embeddings) of entities and relations in KGs has received increasing attention (Rossi et al., 2020). We find that existing models for KG representation learning share similar characteristics with those for word representation learning. For example, TransE (Bordes et al., 2013), a well-known translational KG embedding model, interprets a triple (e_1, r, e_2) as e_1 + r ≈ e_2, where e_1, e_2, r denote the subject, object, and their relationship, respectively, and the boldfaces denote the corresponding embeddings. If we view e_1 as a word in sentences, and e_2 as well as many other objects of e_1 as the context words, then TransE and many other KG embedding models (Wang et al., 2014; Dettmers et al., 2018; Nguyen et al., 2018; Kazemi & Poole, 2018) learn representations in a Skip-gram (Mikolov et al., 2013a) manner, where an entity's representation is learned to predict the context entities.
Recently, many graph neural network (GNN) based models for KG representation learning (Wang et al.) have achieved state-of-the-art performance in KG-related tasks such as entity alignment and entity prediction. Those models learn KG representations in a CBOW (continuous bag-of-words) (Mikolov et al., 2013a) manner, in which the context entities are aggregated to predict the target. But they also consider the representation of an entity itself when aggregating the neighborhood information. This nature prevents those models (e.g., GCN (Kipf & Welling, 2017) and GAT (Velickovic et al., 2018)) from being generalized to represent unseen entities. In many cases, the entities in prevalent KG-related tasks do not have self features. This motivates us to learn entity representations from and only from their context neighbors.
We propose a decentralized KG representation learning approach, decentRL. The key idea of decentRL is to decentralize the semantic information of entities over only their neighbors (i.e., the distributed context vector in CBOW (Mikolov et al., 2013b)), which can be easily implemented by representing each entity through averaging its neighbor embeddings. In this paper, we look for a more efficient but still simple way to realize this concept on the most popular graph attention network (GAT) (Velickovic et al., 2018), as well as its many variants (Sun et al., 2020; Vashishth et al., 2020). We illustrate the methodology with the decentralized attention network (DAN), which is based on the vanilla GAT. DAN is able to support KG representation learning for unseen entities with only structure information, which is essentially different from the way of using self features (e.g., attribute information) in existing graph embedding models (Hamilton et al., 2017; Hettige et al., 2020). Furthermore, the neighbors in DAN serve as an integrated whole in giving attention, which means that DAN is more robust and more expressive compared with the conventional graph attention mechanism (Velickovic et al., 2018).
Another key problem in decentralized KG representation learning is how to estimate and optimize the output embeddings. If we distribute the information of an entity over its neighbors, the original embedding of this entity e_i conversely also learns how to effectively participate in the aggregations of its different neighbors. Suppose that we have obtained an output representation g_i from DAN for entity e_i; we can simply estimate and optimize g_i by aligning it with e_i. But directly minimizing the L1/L2 distance between g_i and e_i may be insufficient. Specifically, these two embeddings have completely different roles and functions in the model, and the shared information may not reside in the same dimensions. Therefore, maximizing the mutual information between them is a better choice. Different from existing works like MINE (Belghazi et al., 2018) or InfoNCE (van den Oord et al., 2018), in this paper we design a self knowledge distillation algorithm, called auto-distiller. It alternately optimizes g_i and its potential target e_i, such that g_i can automatically and continuously distill knowledge from the original representation e_i across different batches.
The main contributions of this paper are listed as follows. (1) We propose decentralized KG representation learning, and present DAN as the prototype of graph attention mechanism under the open-world setting. (2) We design an efficient knowledge distillation algorithm to support DAN for generating representations of unseen entities. (3) We implement an end-to-end framework based on DAN and auto-distiller. The experiments show that it achieved superior performance on two prevalent KG representation learning tasks (i.e., entity alignment and entity prediction), and also significantly outperformed those cutting-edge models under the open-world setting.
BACKGROUND
Knowledge Graph. A KG can be viewed as a multi-relational graph, in which nodes represent entities in the real world and edges have specific labels to represent different relationships between entities. Formally, we define a KG as a 3-tuple G = (E, R, T ), with E and R denoting the sets of entities and relationships, respectively. T is the set of relational triples.
KG Representation Learning. Conventional models are mainly based on the idea of Skip-gram. According to the types of their score functions, these models can be divided into three categories: translational models (e.g., TransE (Bordes et al., 2013) and TransR (Lin et al., 2015a)), semantic matching models (e.g., DistMult (Yang et al., 2015) and ComplEx (Trouillon et al., 2016)) and neural models (e.g., ConvE (Dettmers et al., 2018) and RSN (Guo et al., 2019)). We refer interested readers to the surveys (Wang et al., 2017;Ji et al., 2020) for details. Recently, GNN-based models receive great attentions in this field, which are closely related to this paper. Specifically, R- GCN (Schlichtkrull et al., 2018), AVR-GCN (Ye et al., 2019) and CompGCN (Vashishth et al., 2020) introduce different relation-specific composition operations to combine neighbors and the corresponding relations before neighbor aggregation. RDGCN (Wu et al., 2019) refactors KGs as dual relation graphs (Monti et al., 2018) where edge labels are represented as nodes for graph convolution. All the aforementioned GNN-based models choose GCNs and/or GATs to aggregate the neighbors of an entity, in which an identity matrix is added to the adjacency matrix. This operation is helpful when elements have self features, but poses a problem in learning the representations of unseen entities where no self features are attached to them. Differently, decentRL fully relies on the neighbor context to attend to the neighbors of each entity in linear complexity, which is efficient and easy to be deployed.
Entity Alignment. Entity alignment aims to find the potentially aligned entity pairs in two different KGs G 1 = (E 1 , R 1 , T 1 ) and G 2 = (E 2 , R 2 , T 2 ), given a limited number of aligned pairs as training data S ⊂ E 1 × E 2 . Oftentimes, G 1 , G 2 are merged to a joint KG G = (E, R, T ), which enables the models learn representations in a unified space.
Entity Prediction. Entity prediction (a.k.a. KG completion (Bordes et al., 2013)) seeks to find the missing subject e 1 or object e 2 , given an incomplete relation triple (?, r, e 2 ) or (e 1 , r, ?).
It is worth noting that performance on the entity prediction task may be greatly improved by complex deep networks, as it relies on predictive ability rather than embedding quality (Guo et al., 2019). Hence, many cutting-edge models cannot obtain promising results in entity alignment (Guo et al., 2019; Sun et al., 2020). Conversely, entity alignment directly compares the distance of learned entity embeddings, which clearly reflects the quality of the output representations. Few models demonstrate consistently good performance on both tasks, whereas decentRL is capable of achieving competitive, even better, performance compared with the respective state-of-the-art models.
DECENTRALIZED REPRESENTATION LEARNING
In the decentralized setting, the representation of an entity e_i is aggregated from and only from its neighbors N_i = {e_1, e_2, ..., e_{|N_i|}}. As an entity may have many neighbors that are unequally informative (Velickovic et al., 2018), involving an attention mechanism is a good choice.
GRAPH ATTENTION NETWORKS
We start by introducing the graph attention network (GAT) (Velickovic et al., 2018), which leverages linear self-attention to operate on spatially close neighbors. For an entity e_i, GAT aggregates the representations of its neighbors N_i and itself into a single representation c_i as follows:
c_i = \sum_{e_j \in N_i \cup \{e_i\}} a_{ij} W e_j,    (1)
where a_{ij} is the learnable attention score from e_i to e_j, and W is a weight matrix. To obtain a_{ij}, a linear attention mechanism is used:
a_{ij} = softmax(σ(a^T [W_1 e_i ∥ W_2 e_j])),    (2)
where a is a weight vector that converts the concatenation of two embeddings into a scalar attention score, and ∥ denotes the concatenation operation. W_1 and W_2 are two weight matrices. σ is the activation function, usually LeakyReLU (Xu et al., 2015). GAT computes the attention scores of an entity e_i to its neighbors in linear complexity, which is very efficient when applied to large-scale graphs.
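To make Equations 1-2 concrete, the following NumPy sketch (our illustration, not the authors' code) computes c_i for a single entity with randomly initialized parameters; all dimensions and seeds are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                      # embedding dimension (arbitrary)
e_i = rng.normal(size=d)                   # embedding of entity e_i
neighbors = rng.normal(size=(5, d))        # embeddings of its neighbors N_i

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

W = rng.normal(size=(d, d))
W1, W2 = rng.normal(size=(2, d, d))
a = rng.normal(size=2 * d)

# Equation 1 aggregates over the neighbors and the entity itself.
candidates = np.vstack([neighbors, e_i])

# Equation 2: linear attention over the concatenation [W1 e_i || W2 e_j].
scores = np.array([a @ np.concatenate([W1 @ e_i, W2 @ e_j]) for e_j in candidates])
att = softmax(leaky_relu(scores))

# Equation 1: attention-weighted sum of W e_j.
c_i = (att[:, None] * (candidates @ W.T)).sum(axis=0)
print(c_i.shape)                           # (8,)
```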
DECENTRALIZED ATTENTION NETWORKS
Intuitively, if e_i is the embedding of an unseen entity, it is of little use in computing the attention scores (it is just a randomly initialized vector), so relying purely on its neighbors is a better choice. Specifically, to obtain decentralized attention scores, one could simply sum the attention scores from the other neighbors, a_{ij} = \mathrm{softmax}(\sum_{e_k \in N_i \setminus \{e_j\}} a_{kj}). However, this sum only represents the attention of each individual neighbor to e_j: a high attention score from one neighbor e_k to e_j can dominate the value of a_{ij}, even though this does not mean that e_j is more important for e_i. Therefore, all neighbors should act as an integrated whole when giving attention.
To this end, we propose decentralized attention networks (DANs). Formally, to obtain the decentralized attention weight a_{ij}, the attention layer is fed with two types of input: the neighbor context vector n_i (i.e., the query) and the candidate neighbor embedding e_j (i.e., the key and value). Separately controlling the iteration of these two variables in a multi-layer model would be inefficient; instead, we realize this operation with a second-order attention mechanism. For layer k, DAN calculates the decentralized attention score a^k_{ij} as follows:
a^k_{ij} = \mathrm{softmax}\big(\sigma(a_k^\top [W^k_1 d^{k-1}_i \,\|\, W^k_2 d^{k-2}_j])\big) ,    (3)
where d^{k-1}_i and d^{k-2}_j denote the output embeddings of layer k−1 for e_i and of layer k−2 for e_j, respectively. If we regard d^{k-1}_i as the neighbor aggregation of layer k−1 for e_i, then d^{k-2}_j is exactly the embedding of e_j used in computing d^{k-1}_i. In this case, a^k_{ij} represents the attention weight of e_i's neighbor context to e_j. We then obtain the output of layer k by:
d^k_i = \sum_{e_j \in N_i} a^k_{ij} W^k d^{k-2}_j .    (4)
It is worth noting that we perform the convolution on layer k−2, as the score a^k_{ij} attends to the neighbor representations in layer k−2. This keeps the layers consistent and ensures that the output representations are consecutive. It also enhances the correlation between the outputs of different layers and forms the second-order graph attention mechanism.
For the first layer of DAN, we initialize d^0_i and d^{-1}_j as follows:
d^0_i = \frac{1}{|N_i|} \sum_{e_j \in N_i} W^0 e_j , \qquad d^{-1}_j = e_j .    (5)
Here, we simply use a mean aggregator to obtain the decentralized embedding d^0_i of layer 0, but other aggregators such as pooling could be employed as well. This simple mean aggregator can also be regarded as a CBOW model with a dynamic window size. For the architecture and implementation of DAN, please refer to Appendix A.
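The following sketch mirrors Equations (3)-(5) for one DAN layer: the query of entity e_i is its layer k−1 neighbor context, the keys and values are the layer k−2 embeddings of its neighbors, and layer 0 is produced by the mean aggregator. It assumes every entity has at least one neighbor and keeps all hidden sizes equal; it is an illustrative reading of the formulas, not the authors' code.

import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def dan_init(E, W0, neighbors):
    """Layer-0 initialisation (Eq. 5): mean of transformed neighbor embeddings.

    E: (N, d) original entity embeddings; neighbors: dict entity index -> list of neighbor indices.
    Returns d^0 (decentralized) and d^{-1} (the original embeddings).
    """
    d0 = np.zeros((E.shape[0], W0.shape[0]))
    for i, nb in neighbors.items():
        d0[i] = (E[nb] @ W0.T).mean(axis=0)
    return d0, E

def dan_layer(d_prev, d_prev2, neighbors, W, W1, W2, a):
    """One DAN layer (Eqs. 3-4).

    d_prev:  (N, d) outputs of layer k-1 (queries, one neighbor context per entity)
    d_prev2: (N, d) outputs of layer k-2 (keys/values)
    """
    N = d_prev.shape[0]
    out = np.zeros((N, W.shape[0]))
    for i in range(N):
        nb = neighbors[i]  # assumed non-empty
        # attention of e_i's neighbor context (layer k-1) to each neighbor's layer k-2 embedding
        logits = np.array([
            leaky_relu(a @ np.concatenate([W1 @ d_prev[i], W2 @ d_prev2[j]]))
            for j in nb
        ])
        att = np.exp(logits - logits.max())
        att = att / att.sum()
        out[i] = sum(att[t] * (W @ d_prev2[j]) for t, j in enumerate(nb))
    return out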
INSIGHT OF DAN
We compare GAT with DAN in Figure 1. Although the attention layers leveraged by DAN and GAT are identical, the decentralized structure has two significant strengths:
Inductive representation learning. In GAT, the self embedding e_i participates in the calculation of attention scores (Equation 2) and in the aggregation of neighbors (Equation 1). Therefore, when e_i is an open entity, its embedding is completely randomly initialized, and the attention scores computed by GAT are almost meaningless. By contrast, DAN generates the representation of e_i without ever requiring its own embedding. This characteristic enables DAN to induce embeddings for unseen entities.
Robustness. When calculating the attention scores for an entity e_i, GAT only takes e_i as the query, so the importance of the other neighbors is overlooked, which may lead to biased attention computation. Moreover, it is generally known that most entities in KGs have only a small number of neighbors. Due to the lack of training examples, the embeddings of these entities are not as informative as those with more neighbors, so they may not be capable of serving as queries for computing attention scores; as a result, GAT cannot obtain reliable attention scores in some cases. By contrast, the queries in DAN are neighbor context vectors, which have richer semantics and enable DAN to compute the attention scores in an unbiased way.
Furthermore, the computational complexity of DAN is almost identical to that of GAT, except that DAN has an additional mean aggregator. From Figure 1 and Equation 5, we can see that such an aggregator is considerably simpler than the linear attention layer, which means that its computational cost (in both time and space) is negligible. Therefore, DAN is an efficient model.
DECENTRALIZED REPRESENTATION ESTIMATION
The final output representation g_i of DAN for e_i can be optimized by minimizing the L1/L2 distance between g_i and e_i to enable self-supervised learning. However, such distance estimation pursues a precise match at every dimension of the two embeddings, while ignoring the implicit structure information across different dimensions.
MUTUAL INFORMATION MAXIMIZATION
As mentioned in Section 1, the original embedding e_i also serves as one of the neighbor embeddings when aggregating the decentralized embeddings of its neighbors, which implies that e_i itself also preserves the latent information used to support its neighbors. Inspired by MINE (Belghazi et al., 2018), InfoNCE (van den Oord et al., 2018), and DIM (Hjelm et al., 2019), we do not try to optimize g_i by reconstructing the original representation e_i. Instead, we implicitly align them by maximizing the mutual information I(g_i, e_i).
Specifically, we define a learnable function f : R^D ⊗ R^O → R to estimate the mutual information density (van den Oord et al., 2018) between g_i and a copy of e_i (the reason for using the copied vector will be explained shortly):
f(g_i, \hat{e}_i) = \exp(g_i^\top W_f \hat{e}_i + b_f) ,    (6)
where D and O are the dimensions of the output and input representations, respectively, W_f and b_f are the weight matrix and bias, and ê_i denotes the copy of e_i. We expect f(g_i, ê_i) to be significantly larger than f(g_i, ê_j) for j ≠ i. Following InfoNCE, the objective can be written as:
I(g_i, \hat{e}_i) = \mathbb{E}_{X_i} \left[ \log \frac{f(g_i, \hat{e}_i)}{\sum_{e_j \in X_i} f(g_i, \hat{e}_j)} \right] ,    (7)
where X_i = {e_1, ..., e_{|X_i|}} contains |X_i| − 1 sampled negative entities plus the target entity e_i. Maximizing this objective maximizes a lower bound on the mutual information between g_i and ê_i (van den Oord et al., 2018).
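A minimal sketch of the density function of Equation (6) and of one sample of the InfoNCE-style objective of Equation (7) is given below; the parameter shapes are assumptions, and no optimizer is shown.

import numpy as np

def density(g_i, e_hat, W_f, b_f):
    # f(g_i, ê) = exp(g_i^T W_f ê + b_f)   (Eq. 6)
    return np.exp(g_i @ W_f @ e_hat + b_f)

def info_nce_term(g_i, e_target, e_negatives, W_f, b_f):
    """One sample of the objective in Eq. (7), to be maximised.

    g_i:         (D,) output representation
    e_target:    (O,) copy of the original embedding of the target entity
    e_negatives: list of (O,) copies of sampled negative embeddings
    W_f, b_f:    (D, O) weight matrix and scalar bias of the density function
    """
    cands = [e_target] + list(e_negatives)          # the candidate set X_i
    scores = np.array([density(g_i, e, W_f, b_f) for e in cands])
    return np.log(scores[0] / scores.sum())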
AUTO-DISTILLER
Note that in Equations (6) and (7) we actually use a copy of the original representations, which leads to a completely different optimization process compared with existing works. Methods like InfoNCE or DIM jointly optimize the two inputs of the density function, since both variables are outputs of deep neural models that need to be updated by back-propagation. In decentRL, however, e_i is just a randomly initialized vector, and its gradient in Equation (7) may conflict with the gradient it receives when it is taken as an input neighbor for learning the decentralized representations of its neighbors. Moreover, such joint optimization would prevent e_i from learning neighborhood information at the bottom layer and would compel this variable to simply match g_i.
To address this problem, we view e_i as a teacher and g_i as a student that learns from the teacher (Tian et al., 2020). Our aim is to let the teacher continuously gain more knowledge with which to teach the student, a mechanism we call the auto-distiller. Our final objective is therefore:
\operatorname*{argmax}_{g_i, f} \; \mathbb{E}_{X_i} \left[ \log \frac{f(g_i, \hat{e}_i)}{\sum_{e_j \in X_i} f(g_i, \hat{e}_j)} \right] \;+\; \operatorname*{argmax}_{e_i} \sum_{e_j \in N_i} \mathbb{E}_{X_j} \left[ \log \frac{f(g_j, \hat{e}_j)}{\sum_{e_k \in X_j} f(g_j, \hat{e}_k)} \right] ,    (8)
which has two important characteristics:
Lemma 1 (automatic distillation). Optimizing the first term of Equation 8 for entity e_i naturally contributes to optimizing the second term for the neighbors of e_i, which means that a conventional mini-batch training procedure can be applied.
Lemma 2 (lower bound). The mutual information between g_i and ê_i is still lower-bounded under the auto-distiller.
Proof. See Appendix B.
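The practical consequence of Lemma 1 is that a standard mini-batch loss can realize both terms of Equation (8), as long as the original embedding enters the density function only as a constant copy. Below is a hedged PyTorch sketch of this stop-gradient detail; all names are illustrative, and the decentralized encoder that produces g is omitted.

import torch

def auto_distill_loss(g, E, idx, negatives, W_f, b_f):
    """Negative of the first term of Eq. (8) for one entity (to be minimised).

    g:         (D,) decentralized output representation for entity idx
    E:         (num_entities, O) table of original embeddings
    idx:       index of the target entity
    negatives: LongTensor of sampled negative entity indices
    W_f, b_f:  (D, O) weight matrix and scalar bias of the density function
    """
    cand_idx = torch.cat([torch.tensor([idx]), negatives])
    # ê: constant copies of the original embeddings; no gradient flows back to E here.
    # E is updated only through its use as neighbor input elsewhere (second term of Eq. 8).
    e_hat = E[cand_idx].detach()
    logits = e_hat @ (W_f.t() @ g) + b_f            # g^T W_f ê + b_f for each candidate
    # -log softmax at position 0 (the true entity) equals the negated first term of Eq. (8)
    return -torch.log_softmax(logits, dim=0)[0]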
EXPERIMENTS
We evaluated decentRL on two prevalent KG representation learning tasks, namely entity alignment and entity prediction. As few existing models show state-of-the-art performance on both tasks, we picked the state-of-the-art methods for each task and compared decentRL with them. To probe the effectiveness of decentRL, we also conducted an ablation study and additional experiments. Due to space limitations, please see Appendix C for more analytic results.
DATASETS
Entity Alignment Datasets. We consider the JAPE dataset DBP15K (Sun et al., 2017), which is widely used by existing studies. It includes three entity alignment settings, each of which contains two KGs of different languages. For example, ZH-EN indicates Chinese-English alignment on DBpedia.
Entity Prediction Datasets. We consider four datasets: FB15K, WN18, FB15K-237, and WN18RR (Bordes et al., 2013;Dettmers et al., 2018;Toutanova & Chen, 2015). The former two have been used as benchmarks for many years, while the latter two are the corrected versions, as FB15K and WN18 contain a considerable amount of redundant data (Dettmers et al., 2018).
EXPERIMENT SETUP
For both tasks, we initialized the original entity embeddings, relation embeddings and weight matrices with the Xavier initializer (Glorot & Bengio, 2010).
To learn cross-KG embeddings for the entity alignment task, we incorporated a contrastive loss (Sun et al., 2020;Wang et al., 2018) to cope with aligned entity pairs S, which can be written as follows:
L_a = \sum_{(i,j) \in S^{+}} \|g_i - g_j\| \;+\; \sum_{(i',j') \in S^{-}} \alpha \, [\lambda - \|g_{i'} - g_{j'}\|]_{+} ,    (9)
where S^+ and S^- are the set of positive entity pairs and the set of sampled negative entity pairs, respectively, \|\cdot\| denotes the L2 distance between two embeddings, [\cdot]_+ denotes max(·, 0), and \alpha and \lambda are hyper-parameters. By jointly minimizing the two types of losses, decentRL is able to learn cross-KG embeddings for entity alignment.
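A sketch of the margin-based loss of Equation (9) over the output representations follows; the hinge [x]_+ is implemented with max(x, 0), and the sampling of negative pairs is assumed to happen elsewhere.

import numpy as np

def alignment_loss(G, pos_pairs, neg_pairs, alpha, lam):
    """Margin-based alignment loss of Eq. (9) over decentralized embeddings G.

    G:         (num_entities, D) output representations
    pos_pairs: iterable of (i, j) aligned entity indices (S+)
    neg_pairs: iterable of (i', j') sampled negative pairs (S-)
    """
    pos = sum(np.linalg.norm(G[i] - G[j]) for i, j in pos_pairs)
    neg = sum(alpha * max(lam - np.linalg.norm(G[i] - G[j]), 0.0)
              for i, j in neg_pairs)
    return pos + neg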
Similarly, for entity prediction, we also need to choose a decoder to enable decentRL to predict missing entities (Vashishth et al., 2020). We chose two simple models, TransE (Bordes et al., 2013) and DistMult (Yang et al., 2015), for the main experiments, which are sufficient to achieve comparable performance against the state of the art.

ENTITY ALIGNMENT RESULTS

Table 1 depicts the entity alignment results on the JAPE dataset. We observe that: (1) decentRL significantly outperformed all the methods on Hits@1 and MRR, which empirically shows the advantage of decentRL in learning high-quality representations.
(2) The Hits@10 scores of decentRL were slightly below those of AliNet. We note that decentRL is a purely end-to-end model, which does not incorporate the additional data augmentation used in AliNet (Sun et al., 2020) that may improve the Hits@10 results. Moreover, decentRL is much easier to optimize, as it does not need to coordinate the hyper-parameters of each part of a pipeline. There is also no obstacle to combining decentRL with the data augmentation algorithm for further improvement.
We also evaluated the graph-based models in the open-world setting. Specifically, we first split the test entity set into two subsets, a known entity set and an unknown entity set. Then, the triples in the training set that contain unknown entities (in the non-open-world setting, all triples are used in training) were moved to the test triple set and were only available during the testing phase. We followed the classical dataset split used in the entity alignment task, with 20% of the entities in the original test set sampled as open entities. Table 4 in Appendix C.1 compares the datasets before and after re-splitting.
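A sketch of this re-splitting procedure: a fraction of the test entities is marked as open (unseen), and every training triple touching an open entity is moved to the test triples. The 20% ratio follows the text; the data structures are illustrative assumptions.

import random

def open_world_split(train_triples, test_entities, open_ratio=0.2, seed=0):
    """Move training triples containing 'open' (unseen) entities to the test triples."""
    rng = random.Random(seed)
    test_entities = list(test_entities)
    open_entities = set(rng.sample(test_entities, int(open_ratio * len(test_entities))))
    kept_train, moved_to_test = [], []
    for (h, r, t) in train_triples:
        if h in open_entities or t in open_entities:
            moved_to_test.append((h, r, t))   # only visible at test time
        else:
            kept_train.append((h, r, t))
    return kept_train, moved_to_test, open_entities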
The experimental results are shown in Figure 2. We find that decentRL outperformed GAT and AliNet (the second-best model) on all metrics. Although its performance dropped slightly compared with the closed setting, the results of the other models (especially GAT, which only uses the self representation as the query) suffered much more under this open-world setting. Overall, decentRL is capable of achieving state-of-the-art performance on both open and conventional entity alignment tasks.
ENTITY PREDICTION RESULTS
We also evaluated decentRL on the entity prediction task, in comparison with different GNN-based models. The results on FB15K-237 are shown in Table 2 (see Appendix C.3 for more detailed results on all entity prediction datasets), from which we observe that: (1) decentRL significantly outperformed all the other models on most metrics, especially MR (mean rank). This demonstrates that decentRL can learn better representations for both popular entities (valued by the MRR metric) and long-tail entities (valued by the MR metric). (2) decentRL boosted DistMult to almost state-of-the-art performance on FB15K-237. The simpler model TransE also gained great improvements on all metrics. The reason may be that DAN discovered better aggregation weights and our auto-distiller continuously refined the output representations.
The corresponding results on open entity prediction are shown in Figure 3, where we added a state-of-the-art but more complicated GNN-based model, CompGCN + ConvE, to the comparison. We observe that decentRL + DistMult outperformed all the other models under this open-world setting, which verifies its effectiveness for inductive learning with only structure data. decentRL + TransE achieved the second-best performance, followed by CompGCN + ConvE. Overall, decentRL provided the decoders with better representations and enabled them to achieve competitive and even better performance than the cutting-edge models.
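For reference, the ranking metrics reported throughout (Hits@k, mean rank MR, mean reciprocal rank MRR) can be computed from the rank of each gold entity as in the sketch below; producing the ranks themselves requires scoring all candidate entities with the chosen decoder, which is omitted here.

def ranking_metrics(ranks, ks=(1, 3, 10)):
    """ranks: list of 1-based ranks of the correct entity for each test query."""
    n = len(ranks)
    metrics = {f"Hits@{k}": sum(r <= k for r in ranks) / n for k in ks}
    metrics["MR"] = sum(ranks) / n                       # mean rank, lower is better
    metrics["MRR"] = sum(1.0 / r for r in ranks) / n     # mean reciprocal rank, higher is better
    return metrics

print(ranking_metrics([1, 4, 2, 120]))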
COMPARISON WITH ALTERNATIVE MODELS
To exemplify the effectiveness of each module of decentRL, we derived a series of alternative models from decentRL and report the experimental results on entity alignment in Table 3. "centRL" denotes the model that uses DAN but with a self-loop added to the adjacency matrix.
From the results, we observe that all the models in the table achieved state-of-the-art performance on the H@1 metric, as DAN, which leverages all neighbors as queries, can better summarize the neighbor representations of entities.
On the other hand, decentRL + auto-distiller outperformed all the other alternatives. The centralized model centRL + auto-distiller showed a performance drop compared with the decentralized one. The main reason is that in centRL the entities themselves also participate in their own aggregations, which disturbs the role of the original representations. Please see Appendix C.2 for the corresponding results under the open entity alignment setting.
COMPARISON OF THE OUTPUT EMBEDDINGS OF EACH LAYER
We also compared the performance of each layer in decentRL and AliNet. As shown in Figure 4, decentRL consistently outperformed AliNet at each layer except the input layer. As mentioned before, decentRL does not take the original representation of an entity as input, but this representation can still gain knowledge by participating in the aggregations of its neighbors and then teach the student (i.e., the corresponding output representation). The performance of the input layer was not as good as that of AliNet, because the latent information in this layer may not be aligned in each dimension.
We also observe that concatenating the representations of each layer in decentRL further improved the performance, with a maximum increase of 0.025 (0.023 for AliNet). Furthermore, decentRL gains more benefit from increasing the number of layers, while the performance of AliNet starts to drop when the number of layers is larger than 2 (Sun et al., 2020).
CONCLUSION
In this paper we proposed decentralized KG representation learning, which explores a new and straightforward way to learn representations in the open-world setting using only structure information. The corresponding end-to-end framework achieved very competitive performance on both entity alignment and entity prediction tasks.
A DECENTRALIZED ATTENTION NETWORKS
A.1 ARCHITECTURE
We illustrate a four-layer decentralized attention network (DAN) in Figure 5. Following existing works such as AliNet (Sun et al., 2020) and CompGCN (Vashishth et al., 2020) (Ye et al., 2019; Wu et al., 2019), we also combine relation embeddings in the aggregation. The original entity embeddings (i.e., g^{-1}), relation embeddings and weight matrices are randomly initialized before training. At step 0, we initialize g^0 from the original entity embeddings with the mean aggregator. At step 1, g^{-1} and g^0 are fed into DAN. We then combine the hidden representations with the relation embeddings (steps 2, 3) and obtain the output of the first layer, g^1 (step 4). Repeating steps 1-4, we sequentially obtain the outputs of the subsequent layers. In Figure 5, grey, blue, light blue and orange nodes denote original entity embeddings, decentralized entity embeddings, hidden entity embeddings and relation embeddings, respectively. Taking the first layer as an example, since DAN requires the output embeddings of the two previous layers as input, we randomly initialize the original embeddings as g^{-1} (identical to d^{-1}) and use an appropriate aggregator to generate the initial decentralized embeddings g^0 (identical to d^0).
A.2 IMPLEMENTATION DETAILS
Infrastructure. Following GAT (Velickovic et al., 2018), we adopt dropout (Srivastava et al., 2014) and layer normalization (Ba et al., 2016) for each module of DAN. To compare decentRL fairly with other models, we do not leverage the multi-head attention mechanism (Vaswani et al., 2017; Velickovic et al., 2018), which has not been used in the other GNN-based models, although it could easily be integrated.
Furthermore, we add residual connections (He et al., 2016) between different layers of DAN to avoid over-smoothing, in which we not only take the output of the previous layer as the "residual" but also involve the output of the mean aggregator (i.e., g^0_i). This can be written as follows:
g^k_i := g^0_i + g^{k-1}_i + g^k_i .    (10)
For simplicity, in the rest of the paper we still use g^k_i to denote the output at layer k.
Adaptation to different tasks. For different KG representation learning tasks, we consider different adaptation strategies to achieve better performance. For the entity alignment task, we follow existing GNN-based models (Sun et al., 2020; Wang et al., 2018) and concatenate the output representations of all layers as the final output:
g_i = [g^1_i \,\|\, \cdots \,\|\, g^K_i] .    (11)
On the other hand, the entity prediction task emphasizes predictive ability rather than the learned representations (Guo et al., 2019). We therefore only use the output of the last layer as the final output representation, which allows us to choose a larger batch size or hidden size to obtain better performance:
g_i = g^K_i .    (12)
To enhance the predictive ability of the decoders, we regard the mutual information-based loss here only as a kind of regularization (similar to MINE (Belghazi et al., 2018)), and thus re-scale the loss weight to 0.001.
B AUTOMATIC KNOWLEDGE DISTILLATION B.1 INSIGHT
Existing works usually choose to jointly optimize the two input variables of the density function f, where these two variables can be regarded as the outputs of two different models. For example, InfoNCE uses an encoder to obtain latent representations and another model to summarize those representations into one context vector. This is similar to DeepInfoMax and DeepGraphInfo, which also leverage two models to obtain local features and summarize global features, respectively.
However, in our case, the mutual information that we want to maximize is between the input and output of the same model, where the input vectors are just randomly initialized raw embeddings. We argue that jointly optimizing the original input e i with the output g i in Equation (7) may drive e i to completely match g i .
To resolve this problem, we only use a copy of e_i when estimating the mutual information density between e_i and g_i. In other words, we do not propagate gradients to e_i through Equation (7), which leads to a natural knowledge distillation architecture.
Specifically, we optimize e_i and g_i separately in different training examples or batches. The first step corresponds to the former part of Equation (8):
\operatorname*{argmax}_{g_i, f} \; \mathbb{E}_{X_i} \left[ \log \frac{f(g_i, \hat{e}_i)}{\sum_{e_j \in X_i} f(g_i, \hat{e}_j)} \right] .    (13)
Here, e_i serves as a "pre-trained" teacher model that teaches a "student"; the learnable parameters are g_i and f.
As mentioned above, e_i needs to participate in learning the representations of its neighbors, during which it gains the knowledge to teach its student g_i. This step is achieved by the latter part of Equation (8):
\operatorname*{argmax}_{e_i} \sum_{e_j \in N_i} \mathbb{E}_{X_j} \left[ \log \frac{f(g_j, \hat{e}_j)}{\sum_{e_k \in X_j} f(g_j, \hat{e}_k)} \right] ,    (14)
where our aim is to find the optimal e i to maximize the mutual information between the original and output representations of its neighbors.
B.2 THE LOWER-BOUND OF MUTUAL INFORMATION
We do not actually need to separate the training procedure explicitly into the two steps described in Appendix B.1, as is commonly done in adversarial learning. Instead, this knowledge distillation mechanism is achieved automatically across different mini-batches.
Specifically, if we expand Equation (13) a little bit, then we obtain:
(N_i, \Theta, f) = \operatorname*{argmax}_{N_i, \Theta, f} \; \mathbb{E}_{X_i} \left[ \log \frac{f(G(N_i), \hat{e}_i)}{\sum_{e_j \in X_i} f(G(N_i), \hat{e}_j)} \right] ,    (15)
where N_i = {e_j | e_j ∈ N_i} is the original neighbor embedding set for e_i and \Theta denotes the parameters of our decentralized model G. As the optimal \Theta for the model depends on the neighbor representation set N_i, and the optimal density function f also relies on the output of the model, it is impossible to search the whole space for the best parameters. In practice, we choose to optimize a weaker lower bound on the mutual information I(g_i, ê_i) (Tian et al., 2020). In this case, a relatively optimal neighbor embedding e^*_x in Equation (15) is:
e^*_x = \operatorname*{argmax}_{e_x} \; \mathbb{E}_{X_i} \left[ \log \frac{f(G(N_i), \hat{e}_i)}{\sum_{e_j \in X_i} f(G(N_i), \hat{e}_j)} \right] ,    (16)
and we have:
I(g_i, \hat{e}_i \mid e^*_x) = \mathbb{E}_{X_i} \left[ \log \frac{f(G(\{e_1, \ldots, e^*_x, \ldots, e_{|N_i|}\}), \hat{e}_i)}{\sum_{e_j \in X_i} f(G(\{e_1, \ldots, e^*_x, \ldots, e_{|N_i|}\}), \hat{e}_j)} \right]    (17)
\le \mathbb{E}_{X_i} \left[ \log \frac{f^*(G^*(N^*_i), \hat{e}_i)}{\sum_{e_j \in X_i} f^*(G^*(N^*_i), \hat{e}_j)} \right] = I(g^*_i, \hat{e}_i)    (18)
\le I(g^*_i, \hat{e}_i) + \log(|X_i|)    (19)
\le I(g_i, \hat{e}_i) ,    (20)
where * denotes the optimal setting of the corresponding parameters. Equations (18) and (19) follow from the conclusions of InfoNCE, given that |X_i| is large enough. The above equations suggest that optimizing e_x alone also lower-bounds the mutual information, without requiring the other parameters to be perfectly assigned.
Considering that the entity e_x may have more than one neighbor, we can optimize those cases together:
e^*_x = \operatorname*{argmax}_{e_x} \sum_{e_j \in N_x} \mathbb{E}_{X_j} \left[ \log \frac{f(G(N_j), \hat{e}_j)}{\sum_{e_k \in X_j} f(G(N_j), \hat{e}_k)} \right] .    (21)
Evidently, the above equation is identical to Equation (14), which means that optimizing Equation (15) can subsequently contribute to optimizing the original neighbor representations.
Therefore, the proposed architecture can automatically distill knowledge, in different mini-batches, from the original representations into the output representations.
C FURTHER ANALYSIS
C.1 DATASET DETAILS
The detailed statistics of the entity alignment datasets are shown in Table 4. Although we only set 20% of the entities in the test set as open entities, more than 20% of the triples were actually removed from the training set.
For details of the datasets used in entity prediction, we refer readers to (Bordes et al., 2013) and (Dettmers et al., 2018).
C.2 ABLATION STUDY ON OPEN ENTITY ALIGNMENT
We also conducted an ablation study on the open entity alignment task, as shown in Table 5. The experimental results are, in principle, consistent with those on conventional entity alignment. The proposed architecture (decentRL + auto-distiller) still outperformed the other alternatives. By contrast, the performance of the centralized model with auto-distiller dropped significantly, whereas in Table 3 it had almost identical performance to decentRL + infoNCE. Another point worth noting is that the gap on H@10 narrowed in the open entity alignment task, which may be because the training data shrank considerably after removing the triples referring to unseen entities.

C.3 DETAILED RESULTS ON ENTITY PREDICTION

The detailed results on entity prediction are shown in Tables 6 and 7, respectively. For the conventional benchmarks FB15K and WN18 that have been widely used for many years, our decentRL with only simple decoders achieved competitive, even better, performance compared with the state-of-the-art models. Furthermore, decentRL greatly improved the best results on MR, as it can more efficiently aggregate neighbor information to learn high-quality representations for those "challenging" entities.
On the other hand, we find that the performance of decentRL on FB15K-237 and WN18RR is not as strong as that in Table 6, although it still achieved the best H@10 and MR on FB15K-237. We argue that this may be caused by the insufficient predictive ability of the simple decoders (Guo et al., 2019). However, we currently do not plan to adapt decentRL to more complex decoders such as ConvE, as such complicated architectures largely increase the time and space complexity. For example, CompGCN with ConvE needs at least two days of training on a small dataset like FB15K-237.
Overall, some simple linear KG representation learning models (i.e., TransE, DistMult, and ComplEx) received great benefits from decentRL and even outperformed some cutting-edge models.

We also evaluated decentRL under different settings of dimensions; the results are shown in Table 8. With the increase of the input dimension (i.e., the embedding size), the performance of decentRL improved quickly, with dimension 128 achieving performance comparable to the state-of-the-art methods (e.g., AliNet with dimension 300) and outperforming them at dimension 256. Furthermore, decentRL continually gains benefit from larger hidden sizes; even when the dimension was set to 512, the improvement was still significant.
Figure 1: Comparing graph attention network (GAT) with decentralized attention network (DAN).

Figure 2: Open entity alignment results on DBP15K. Bars with dotted lines denote the performance drop compared with the corresponding results in the non-open setting. The same applies to the following figures.

Figure 3: MRR results on open FB15K-237.

Figure 4: Hits@1 results of each layer and of the concatenation. The results of AliNet are from (Sun et al., 2020); it has no L3 and L4 scores as its best performance was achieved by a two-layer model.

Figure 5: Overview of a four-layer decentralized attention network (DAN). Best viewed in color.
Table 1: Result comparison of entity alignment on DBP15K.

Models                    | ZH-EN: H@1 / H@10 / MRR | JA-EN: H@1 / H@10 / MRR | FR-EN: H@1 / H@10 / MRR
AlignE (Sun et al., 2018) | 0.472 / 0.792 / 0.581   | 0.448 / 0.789 / 0.563   | 0.481 / 0.824 / 0.599
RSN (Guo et al., 2019)    |                         |                         |
Table 2: Entity prediction results on FB15K-237. Raw denotes the original results of the decoders.

Models                               | TransE: H@10 / MR / MRR | DistMult: H@10 / MR / MRR
Raw                                  | 0.465 / 357 / 0.294     | 0.419 / 354 / 0.241
+ D-GCN (Marcheggiani & Titov, 2017) | 0.469 / 351 / 0.299     | 0.497 / 225 / 0.321
+ R-GCN (Schlichtkrull et al., 2018) | 0.443 / 325 / 0.281     | 0.499 / 230 / 0.324
+ W-GCN (Shang et al., 2019)         | 0.444 / 1,520 / 0.267   | 0.504 / 229 / 0.324
+ CompGCN (Vashishth et al., 2020)   | 0.515 / 233 / 0.337     | 0.518 / 200 / 0.338
+ decentRL                           | 0.521 / 159 / 0.334     | 0.541 / 151 / 0.350
Table 3: Ablation study of entity alignment on DBP15K (average of 5 runs).

Models                    | ZH-EN: H@1 / H@10 / MRR | JA-EN: H@1 / H@10 / MRR | FR-EN: H@1 / H@10 / MRR
decentRL + auto-distiller | 0.589 / 0.819 / 0.672   | 0.596 / 0.819 / 0.678   | 0.602 / 0.842 / 0.689
decentRL + infoNCE        | 0.579 / 0.816 / 0.665   | 0.591 / 0.816 / 0.673   | 0.593 / 0.834 / 0.682
decentRL + L2             | 0.571 / 0.802 / 0.655   | 0.589 / 0.807 / 0.669   | 0.591 / 0.831 / 0.679
centRL + auto-distiller   | 0.579 / 0.812 / 0.663   | 0.589 / 0.812 / 0.671   | 0.593 / 0.836 / 0.681
centRL                    | 0.544 / 0.791 / 0.632   | 0.561 / 0.799 / 0.646   | 0.560 / 0.820 / 0.654
[Figure 4 plot data: per-layer Hits@1 values of decentRL and AliNet (Input, L1-L4, and the concatenation) on ZH-EN, JA-EN and FR-EN.]
Table 4: Statistics of entity alignment datasets.

Datasets | Original: #Entities / #Relations / #Triples | Open: #Train entity pairs / #Test entity pairs / #Train triples / #Test triples
ZH-EN    | 19,388 / 1,701 / 70,414                     | 4,500 / 10,500 / 53,428 / 16,986
         | 19,572 / 1,323 / 95,142                     | 4,500 / 10,500 / 72,261 / 22,881
JA-EN    | 19,814 / 1,299 / 77,214                     | 4,500 / 10,500 / 57,585 / 19,629
         | 19,780 / 1,153 / 93,484                     | 4,500 / 10,500 / 69,479 / 24,005
FR-EN    | 19,661 / 903 / 105,998                      | 4,500 / 10,500 / 79,266 / 26,732
         | 19,993 / 1,208 / 115,722                    | 4,500 / 10,500 / 87,030 / 28,692
Table 5: Ablation study on open entity alignment (average of 5 runs).

Methods                   | ZH-EN: H@1 / H@10 / MRR | JA-EN: H@1 / H@10 / MRR | FR-EN: H@1 / H@10 / MRR
decentRL + auto-distiller | 0.565 / 0.775 / 0.643   | 0.583 / 0.786 / 0.659   | 0.590 / 0.814 / 0.673
decentRL + infoNCE        | 0.557 / 0.775 / 0.637   | 0.574 / 0.785 / 0.652   | 0.583 / 0.811 / 0.666
decentRL + L2             | 0.552 / 0.770 / 0.632   | 0.574 / 0.782 / 0.650   | 0.581 / 0.806 / 0.664
centRL + auto-distiller   | 0.551 / 0.765 / 0.629   | 0.573 / 0.776 / 0.648   | 0.578 / 0.806 / 0.662
centRL                    | 0.529 / 0.764 / 0.614   | 0.554 / 0.775 / 0.634   | 0.560 / 0.799 / 0.647
Table 6: Entity prediction results on FB15K and WN18.

Methods             | FB15K: H@1 / H@3 / H@10 / MRR / MR  | WN18: H@1 / H@3 / H@10 / MRR / MR
TransE              | 0.297 / 0.578 / 0.749 / 0.463 / -   | 0.113 / 0.888 / 0.943 / 0.495 / -
DistMult            | 0.546 / 0.733 / 0.824 / 0.654 / 97  | 0.728 / 0.914 / 0.936 / 0.822 / 902
ComplEx             | 0.599 / 0.759 / 0.840 / 0.692 / -   | 0.599 / 0.759 / 0.840 / 0.692 / -
ConvE               | 0.558 / 0.723 / 0.831 / 0.657 / 51  | 0.935 / 0.946 / 0.956 / 0.943 / 374
RotatE              | 0.746 / 0.830 / 0.884 / 0.797 / 40  | 0.944 / 0.952 / 0.959 / 0.949 / 309
RSN                 | 0.722 / -     / 0.873 / 0.78  / -   | 0.922 / -     / 0.953 / 0.940 / -
TuckER              | 0.741 / 0.833 / 0.892 / 0.795 / -   | 0.949 / 0.955 / 0.958 / 0.953 / -
decentRL + TransE   | 0.633 / 0.771 / 0.856 / 0.715 / 40  | 0.736 / 0.904 / 0.954 / 0.824 / 255
decentRL + DistMult | 0.664 / 0.793 / 0.872 / 0.740 / 32  | 0.944 / 0.951 / 0.958 / 0.949 / 259
decentRL + ComplEx  | 0.745 / 0.847 / 0.901 / 0.804 / 33  | 0.945 / 0.952 / 0.958 / 0.949 / 251
Table 7: Entity prediction results on FB15K-237 and WN18RR. "†" denotes methods executed by the source code with the provided best parameter settings.

Methods              | FB15K-237: H@1 / H@3 / H@10 / MRR / MR | WN18RR: H@1 / H@3 / H@10 / MRR / MR
TransE               | -     / -     / 0.465 / 0.294 / 357    | -     / -     / 0.501 / 0.226 / 3,384
DistMult             | 0.155 / 0.263 / 0.419 / 0.241 / 254    | 0.39  / 0.44  / 0.49  / 0.43  / 5,110
ComplEx              | 0.158 / 0.275 / 0.428 / 0.247 / 339    | 0.41  / 0.46  / 0.51  / 0.44  / 5,261
ConvE                | 0.237 / 0.356 / 0.501 / 0.325 / 244    | 0.400 / 0.440 / 0.520 / 0.430 / 4,187
RotatE               | 0.241 / 0.375 / 0.533 / 0.338 / 177    | 0.428 / 0.492 / 0.571 / 0.476 / 3,340
RSN                  | 0.202 / -     / 0.453 / 0.280 / -      | -     / -     / -     / -     / -
TuckER               | 0.266 / 0.394 / 0.544 / 0.358 / -      | 0.443 / 0.482 / 0.526 / 0.470 / -
CompGCN + ConvE      | 0.264 / 0.390 / 0.535 / 0.355 / 197    | 0.443 / 0.494 / 0.546 / 0.479 / 3,533
CompGCN + TransE †   | 0.242 / 0.367 / 0.510 / 0.332 / 214    | -     / -     / -     / -     / -
CompGCN + DistMult † | 0.249 / 0.368 / 0.515 / 0.337 / 199    | -     / -     / -     / -     / -
CompGCN + ConvE †    | 0.262 / 0.385 / 0.532 / 0.352 / 215    | -     / -     / -     / -     / -
decentRL + TransE    | 0.241 / 0.362 / 0.521 / 0.334 / 159    | 0.290 / 0.420 / 0.505 / 0.369 / 4,710
decentRL + DistMult  | 0.257 / 0.385 / 0.541 / 0.350 / 151    | 0.433 / 0.481 / 0.542 / 0.470 / 4,613
decentRL + ComplEx   | 0.261 / 0.388 / 0.544 / 0.354 / 172    | 0.422 / 0.466 / 0.533 / 0.458 / 3,744
Table 8: Performance of decentRL with different dimensions (average of 5 runs). Columns: Hidden size; ZH-EN, JA-EN and FR-EN, each reporting H@1, H@10 and MRR.
REFERENCES

Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016.
Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, R. Devon Hjelm, and Aaron C. Courville. Mutual information neural estimation. In ICML, 2018.
Aleksandar Bojchevski and Stephan Günnemann. Deep gaussian embedding of attributed graphs: Unsupervised inductive learning via ranking. In ICLR, 2018.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Durán, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In NIPS, 2013.
Yixin Cao, Zhiyuan Liu, Chengjiang Li, Zhiyuan Liu, Juanzi Li, and Tat-Seng Chua. Multi-channel graph neural network for entity alignment. In ACL, 2019.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. Convolutional 2D knowledge graph embeddings. In AAAI, 2018.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.
Lingbing Guo, Zequn Sun, and Wei Hu. Learning to exploit long-term relational dependencies in knowledge graphs. In ICML, 2019.
William L. Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NIPS, 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
Bhagya Hettige, Weiqing Wang, Yuan-Fang Li, and Wray L. Buntine. Robust attribute and structure preserving graph embedding. In PAKDD, 2020.
R. Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Philip Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In ICLR, 2019.
Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and Philip S. Yu. A survey on knowledge graphs: Representation, acquisition and applications. CoRR, abs/2002.00388, 2020.
Seyed Mehran Kazemi and David Poole. Simple embedding for link prediction in knowledge graphs. In NeurIPS, Montréal, Canada, 2018.
Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
Furong Li, Xin Luna Dong, Anno Langen, and Yang Li. Knowledge verification for long-tail verticals. PVLDB, 10, 2017.
Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. Learning entity and relation embeddings for knowledge graph completion. In AAAI, 2015a.
Diego Marcheggiani and Ivan Titov. Encoding sentences with graph convolutional networks for semantic role labeling. In EMNLP, 2017.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In ICLR Workshop, 2013a.
Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representations. In NAACL-HLT, 2013b.
Federico Monti, Oleksandr Shchur, Aleksandar Bojchevski, Or Litany, Stephan Günnemann, and Michael M. Bronstein. Dual-primal graph convolutional networks. CoRR, abs/1806.00770, 2018.
Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Q. Phung. A novel embedding model for knowledge base completion based on convolutional neural network. In NAACL-HLT, 2018.
Andrea Rossi, Donatella Firmani, Antonio Matinata, Paolo Merialdo, and Denilson Barbosa. Knowledge graph embedding for link prediction: A comparative analysis. CoRR, abs/2002.00819, 2020.
Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In ESWC, 2018.
Chao Shang, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, and Bowen Zhou. End-to-end structure-aware convolutional networks for knowledge base completion. In AAAI, 2019.
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15, 2014.
Zequn Sun, Wei Hu, and Chengkai Li. Cross-lingual entity alignment via joint attribute-preserving embedding. In ISWC, 2017.
Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu. Bootstrapping entity alignment with knowledge graph embedding. In IJCAI, 2018.
Zequn Sun, Chengming Wang, Wei Hu, Muhao Chen, Jian Dai, Wei Zhang, and Yuzhong Qu. Knowledge graph alignment network with gated multi-hop neighborhood aggregation. In AAAI, 2020.
Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. RotatE: Knowledge graph embedding by relational rotation in complex space. In ICLR, 2019.
Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive representation distillation. In ICLR, 2020.
Kristina Toutanova and Danqi Chen. Observed versus latent features for knowledge base and text inference. In CVSC, 2015.
Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex embeddings for simple link prediction. In ICML, 2016.
Aäron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. CoRR, abs/1807.03748, 2018.
Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha P. Talukdar. Composition-based multi-relational graph convolutional networks. In ICLR, 2020.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.
Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In ICLR, 2018.
Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29, 2017.
Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. Knowledge graph embedding by translating on hyperplanes. In AAAI, 2014.
Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. Cross-lingual knowledge graph alignment via graph convolutional networks. In EMNLP, 2018.
Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, and Dongyan Zhao. Relation-aware entity alignment for heterogeneous knowledge graphs. In IJCAI, pp. 5278-5284, 2019.
Bing Xu, Naiyan Wang, Tianqi Chen, and Mu Li. Empirical evaluation of rectified activations in convolutional network. CoRR, 2015.
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. In ICLR, 2015.
Rui Ye, Xin Li, Yujie Fang, Hongyu Zang, and Mingzhong Wang. A vectorized relational graph convolutional network for multi-relational network alignment. In IJCAI, 2019.
Textual Fingerprinting with texts from Parkin, Bassewitz, and Leander

Christoph Schommer (christoph.schommer@uni.lu), Conny Uhde (uhde@cs.uni-frankfurt.de)

University of Luxembourg, Dept. of Computer Science - ILIAS, Laboratory, 6, Rue Richard Coudenhove-Kalergi, L-1359 Luxembourg, Luxembourg
JW Goethe-University Frankfurt am Main, Dept. of Computer Science and Mathematics, Robert-Mayer-Str. 11-15, 60486 Frankfurt am Main, Germany

February 15, 2008

Abstract

Current research in author profiling to discover a legal author's fingerprint no longer follows examinations based on statistical parameters only, but includes more and more dynamic methods that can learn and that react adaptively to the specific behavior of an author. But the question of how to appropriately represent a text is still one of the fundamental tasks, and the problem of which attributes should be used to fingerprint the author's style is still not exactly defined. In this work, we focus on a linguistic selection of attributes to fingerprint the style of the authors Parkin, Bassewitz and Leander. We use texts of the genre Fairy Tale, as it has a clear style and texts of a shorter size with a straightforward story-line and a simple language.
1 What is it about?

Forensic linguistics is concerned with a verification process for the decryption of texts and their analysis through pattern discovery. In this respect, verification means the usage of existing and well-known stylistic attributes to discover an individual (linguistic) fingerprint. However, it is still controversially discussed whether such a linguistic fingerprint is a clear indication per se: stylistic tests assume that typical attributes are directly influenceable by the author and that a certain number of attributes stay constant even when the author consciously changes his behavior or his own style [7]. Following Dixon and Mannion in their evaluations of the texts of Oliver Goldsmith, it may be observed that an appropriate selection of stylistic attributes carries a risk: Goldsmith's style is characterized by an adaptive fluency, where he adapts his own style in the reported speech of the respective actor. To identify Goldsmith's characteristic style attributes, Dixon and Mannion compared his essays with those of four contemporary writers. They found that two of the four writers show a suspicious similarity to Goldsmith, as they originate from the same Irish area and lived in English exile [14]. Additionally, stylistic attributes that are influenced by the genre may interfere with the individual style [3]. This leads to the conclusion that only texts of the same domain should be compared with each other. Using the texts of the Nijmegen corpus, Baayen et al. have analyzed the differences between diverse authors of the same genre as well as texts of authors who represent different genres: they found that texts of the same genre are generally more similar than texts of different genres that are from the same author.
Style Discovery
Stylometry refers to the measurement of style with the aim of fingerprinting a text by a certain number of linguistic attributes, of concluding the authorship of a text, and/or of ordering texts by their chronology [18]. The content, the meaning and the correctness of the text are not concerned. The general ambition is to discover those attributes that differentiate texts sufficiently [17]. Generally, the data is analyzed statistically, taking numerical attributes into account but disregarding categorical attributes. Figures of speech like metaphors and symbols are clearly defined, but cannot be discovered automatically. In [18], Oakes writes that any linguistic occurrence can be used for stylometric analysis as long as the attribute can be expressed numerically. However, it must be assured that the attribute is relevant for other genres as well [16]. Another important aspect is the differentiation of linguistic attributes by whether they are consciously controlled by the author or not [13]. Many examinations explicitly take unconscious stylistic attributes as the relevant discrimination criterion, as they are a stronger sign of a stylistic fingerprint. However, this presumes the existence of stylistic attributes that stay constant throughout the whole text as well as linguistic attributes that adapt [12].

In this respect, we focus on a differentiation of conscious and unconscious stylistic attributes, noting that different authors differ more in their style than the texts of an individual author, and that texts of an individual author differ more than passages within a text [6]. We therefore conclude that an appropriate consistency, i.e. a continuous usage of conscious and unconscious stylistic attributes, must generally be secured. Very generally, linguistic attributes refer either to a statistical frequency or to the differentiation of the vocabulary [20], under the assumption that authors differ in their vocabulary and control it in a rather limited than specific way. The vocabulary is then queried by habit, performed automatically and therefore constant, which makes it appropriate for text classification [10]. Several examinations have shown that a few stylistic attributes are insufficient for the differentiation of authors, as they produce pairs of authors classified into the same category. [6] and [19] suggest that a wider spectrum of attributes leads to better success and argue that the selected attributes can be ordered by their significance with respect to a classification of the genre. The research on stylistic analysis for author profiling started years ago, when Mendenhall [23] and Mascol [4] examined literary verses of the New Testament by considering aspects like the frequency of words and the length of sentences. They assumed that authors produce different texts, with different style and features. Many statistical examinations followed, for example to discover text features that appear constantly. Many attributes have been found and mathematical issues proven, for example Yule's characteristic, Zipf's law and the Hapax Legomena. [15] has shown that statistical relevance on a small amount of textual data can be expressed and computed by Bayesian statistics, which makes it applicable as a contribution to author profiling. However, concluding that a text was written by a specific author has often not been clear-cut, and misclassifications have occurred.
Since that time, many other attributes have been examined, for example the number of words of a certain word class [1], [9], syntax analysis [2], [21], word phrases [8], and grammatical failures [11]. Many examinations combine several of these attributes as well as different methods from statistics and machine learning, for example principal component analysis, support vector machines, and cluster analysis [5].
About the evaluation environment
In this work, we understand Author Profiling as a way to identify authors by a certain number of linguistic (numerical) attributes and to assign texts correctly to them. In this respect, our hypothesis is that, based on the assumption that a potential style identification exists, a stylistic identification of authors can be done with quality if we can find a sufficient number of expressive attributes that describe the author's behavior in terms of characteristics and dissimilarity, and that allow an application of machine learning methods for a demonstrative evaluation. Nevertheless, performing an empirical study to discover the author's style is mostly constrained by a linguistic detail, namely the principal use of attributes that are applicable within a computer-based analysis; and the independence of these attributes must be ensured as well.
The genre we use
We focus on several texts from the genre Fairy Tale. The texts are in German. Fairy tales have been selected as they are per se an excellent differentiator to other texts; they are distinguished by a clear style and are author-independent. Mostly, fairy tales are of shorter size; they are amusing stories with fantastic content and without a reference to time. The story-line is straightforward, the language simple. Common speech texts are more difficult to differentiate, as they do not have to follow a certain style per se; they stand in contrast to technical texts, which serve the presentation of and the critical discussion with specific contextual aspects. Common speech texts are non-coded and of daily use, easily understandable, but less syntactically defined. Technical texts are often related to science and therefore underlie certain criteria. The understanding of a technical text is highly dependent on the style. Nevertheless, authors try to keep their individual style, taking into account the correct use of orthography, syntax and punctuation. The text is often impersonal and written in present tense. We have therefore started our examinations with a comparison between selected texts from Fairy Tale, Common Speech and Technical Language. Approximately 10 authors per genre with 3-5 documents each have been selected.
Attribute Selection
Concerning the attributes, we take linguistic attributes into account as far as they significantly contribute to the author's style, but filter out those that are dependent on another. For the evaluations, we have used more than 30 attributes from statistics and linguistics, for example: • Number of words and number of distinct words, where punctuation marks are disregarded.
• Frequency of personal pronouns. Depending on the genre, the personal pronoun is assessed; for example, the word I receives a higher weight in scientific texts than in fairy tales, since we may assume that scientific texts follow a more neutral description (passive voice) or use We instead.
• The word with the highest frequency.
• Word length in average.
• Record length. We use this attribute although [22] mentions that the record length is not expressive and applicable as a single attribute. The disadvantage is the author's control over it and capability to imitate it, especially via punctuation, which makes it less suitable for older texts. [24] agree that the attribute record length is a weak measurement of the author's style but is useful when focusing on its distribution.
• Yule's characteristic value K. This value is based on the assumption that the occurrence of a word is random and follows a Poisson distribution. The more words are repeated, the higher K.
• Hapax Legomena, the number of words that occur exactly once in the text. This measures the author's disposition to use or to avoid synonyms.
• Sentence Structure. This attribute describes the author's disposition to prefer main clauses or subordinate clauses. We measure this by the percentage of hypotaxis in the text.
• Value of the type-token ratio. Let n be the number of tokens (words) in the text and v the number of distinct tokens; then the type-token ratio is r = v / n.
• The Entropy of the text. The length of each text source is set to a fixed number of words.
Furthermore, we have considered stop words and calculated (per text) the number of stop words occurring exactly once, the stop word that occurs most frequently, and its frequency and percentage. Using the thesaurus of the University of Leipzig 2 , we additionally calculate the frequency class, which indicates how often a token occurs in comparison to all occurrences.
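Most of these attributes are purely statistical and can be computed directly from a tokenized text. The following minimal sketch (not the exact implementation used in our evaluations; the tokenizer and the fixed prefix length for the entropy are simplifying assumptions) illustrates how several of them can be derived:

```python
import math
import re
from collections import Counter

def stylometric_attributes(text, entropy_length=1000):
    # Simple tokenizer that disregards punctuation (an assumption; the original
    # evaluations may tokenize differently).
    tokens = re.findall(r"\w+", text.lower())
    n = len(tokens)
    counts = Counter(tokens)
    v = len(counts)                                      # number of distinct words
    hapax = sum(1 for c in counts.values() if c == 1)    # Hapax Legomena

    # Yule's characteristic K, computed from the frequency spectrum of the text.
    spectrum = Counter(counts.values())                  # {frequency: #words with that frequency}
    s1 = n
    s2 = sum(freq * freq * num for freq, num in spectrum.items())
    yules_k = 10_000 * (s2 - s1) / (s1 * s1) if n else 0.0

    # Entropy over a fixed-length prefix, so texts of different sizes are comparable.
    prefix = Counter(tokens[:entropy_length])
    total = sum(prefix.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in prefix.values()) if total else 0.0

    return {
        "num_words": n,
        "num_distinct_words": v,
        "most_frequent_word": counts.most_common(1)[0][0] if counts else None,
        "avg_word_length": sum(len(t) for t in tokens) / n if n else 0.0,
        "hapax_legomena": hapax,
        "type_token_ratio": v / n if n else 0.0,
        "yules_k": yules_k,
        "entropy": entropy,
    }

# Example:
# attrs = stylometric_attributes(open("maerchen.txt", encoding="utf-8").read())
```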
Attribute Filtering
A problem in using the selected attributes is that some of them may depend on others. Furthermore, this strongly depends on the genre, so the attributes of each genre must be preprocessed individually. We have therefore used pairwise plots to visualize the attributes' behavior. For example, the attributes Entropy, Type-Token Ratio, and Average Word Frequency are dependent, whereas Number of Hapax Legomena and Yule's characteristic are not. Figure 1 shows the filtering result for texts from the genre Fairy Tale, i.e., the pairwise distribution of the independent attributes. The presented plot is symmetric.
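As an illustration of this filtering step, the following sketch assumes the per-text attributes are collected in a pandas DataFrame; it drops one attribute of every highly correlated pair and plots the pairwise scatter matrix of the remaining ones. The correlation threshold of 0.8 is an assumption chosen for illustration, not the criterion used in our study:

```python
import pandas as pd
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt

def filter_dependent_attributes(df, threshold=0.8):
    """Keep only attributes whose pairwise (absolute) correlation with all kept ones stays below the threshold."""
    corr = df.corr().abs()
    keep = []
    for col in df.columns:
        if all(corr.loc[col, kept] < threshold for kept in keep):
            keep.append(col)
    return df[keep]

# independent = filter_dependent_attributes(attributes_fairy_tale)
# scatter_matrix(independent, figsize=(10, 10), diagonal="hist")
# plt.show()
```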
Selected Fingerprinting Results
We have enriched this calculation with diverse statistical methods such as principal component analysis and bivariate statistics to visualize and identify the most interesting and reliable attributes, and have applied machine learning in different ways through demographic clustering and a genetic algorithm. Some evaluations have been done with IBM Intelligent Miner V8, some with the statistical program R, and some with ClusGen, a self-programmed genetic simulation that classifies the whole text corpus into classes. The idea rests on the assumption that a representative median vector dynamically exists for a collection of texts that come from the same author, and that the texts of one author are representative enough for all texts of that author. We have initiated the process of fingerprinting while interpreting all discovered results.
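The core idea behind ClusGen, a representative median vector per author against which new texts are scored, can be sketched as follows. The plain Euclidean distance and the z-score normalization in the usage example are assumptions made for illustration; the genetic simulation adapts the median vectors instead of computing them in one shot:

```python
import numpy as np

def fit_author_medians(X, authors):
    """X: (n_texts, n_attributes) matrix of attribute vectors; authors: author label per text."""
    medians = {}
    for author in set(authors):
        rows = X[[i for i, a in enumerate(authors) if a == author]]
        medians[author] = np.median(rows, axis=0)
    return medians

def classify(x, medians):
    """Assign a text (attribute vector x) to the author with the closest median vector."""
    return min(medians, key=lambda a: np.linalg.norm(x - medians[a]))

# mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-9     # normalize attributes first
# medians = fit_author_medians((X - mu) / sigma, authors)
# predicted = classify((x_new - mu) / sigma, medians)
```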
Fairy Tale against Common Speech and Technical Language
To get an overview of each genre per se and to characterize these texts as well, Figure 2 shows the result of a bivariate statistics against the attribute Genre. We observe that 42% of the text set comes from Fairy Tale, 32% from Common Speech, and 26% from Technical Language. The variables are ordered following their chi-square values, meaning that the discrepancy between the distribution of the corresponding attribute inside a genre (inner ring for Genre, non-colorized distribution of the other attributes) and the whole text population (outer ring for Genre, colorized distribution of the other attributes) determines the significance and therefore the position of an attribute in each region: the more different the distribution, the further it is positioned to the left. In this respect, AnzHL is the most important attribute for Fairy Tale (Märchen) and is more significant to its region than in any other genre. Furthermore, the attribute Number of Hapax Legomena (AnzHL) shows a distribution of high values in Technical Language, but low values in Fairy Tale. On the other side, the attribute Number of different tokens (AnzVerschWo) has a distribution with high values in Technical Language but with lower values in Common Language. Following this, we can assume that in Fairy Tale words more rarely occur only once (AnzHL), and the number of different words (AnzVerschWo) is rather low, while the sentences are longer (VarianzSaLä). Texts from Technical Language, however, use a more extended vocabulary (AnzVerschWo), where words more often occur only once (AnzHL). With text clustering, we have observed several clusters representing only texts of an individual genre. For Fairy Tale, relatively many attributes have a distribution that is surprisingly higher than in all texts, for example the number of adjectives (AnzAdjektive), the number of Hapax Legomena (AnzHL), and Yule's characteristic (YulesK); on the other side, the number of verbs (AnzVerben) and the frequency class (HäufKla) are distributed quite low. Five authors share this cluster; the most used words are ich and und. The number of parataxes is relatively lower, the number of hypotaxes relatively higher. We may conclude that the general style is descriptive and figurative, because many adjectives and synonyms are used. This is plausible, since longer sentences exist that are often nested.
Generally, the tests have shown a mixed picture. Within the selected texts, the style of only some authors has been constant, meaning that a set of attributes is equally distributed across their texts. This is the case for the authors Parkin and Bassewitz, whose texts have been selected from a single textbook each. On the other side, the texts of Leander are widespread, although they are also taken from an individual text collection. Figure 3 shows a cluster that is based only on texts from the author Parkin. We observe a low number of adjectives (AnzAdjektive) in all the clustered texts, a high number of parataxes (proParaTax), and a high number of occurrences of we (AnzWir). Parkin's style is further characterized by a high type-token ratio (TypeTokenRatio), a low number of different words (AnzVerschWo), and a low number of Hapax Legomena (AnzHL). He prefers parataxis; the average sentence length is low, as is the number of different words: he therefore tends to repeat words, favors nouns, and hardly uses adjectives at all.
Parkin's Style
Bassewitz's Style
Figure 4 shows a snapshot of a demographic clustering result that is based on
• four texts of the author Bassewitz, taken from Peterchen's Mondfahrt
• six texts of the author Leander, taken from Träumereien an französischen Kaminen, within the genre Fairy Tale. Bassewitz's texts are characterized by a high frequency class (HäufKla) and a disproportionately high occurrence of verbs. Additionally, the word type of the longest word falls almost into the same class, as do the word type of the most frequent word, the most frequent stop word, and the number of Hapax Legomena.
Leander's Style
The texts of Leander are more mixed and do not aim at a uniform style. We observe that Leander's texts differ more from one another than those of Bassewitz, although both are taken from textbooks. Using the genetic simulation program, we have obtained classification rates through an adaptive calculation of a new median vector. With this, the classification rates are above 90% for the whole test corpus. Within the genres, the classification rate for Fairy Tale has been around 93%, and for Common Speech and Technical Language nearly 100%.
Conclusions
At first glance, it seems hardly conceivable that an author's style, or even more, a fingerprint, can be discovered: the number of evaluated texts is quite low and a valid forecast is therefore not feasible. But the evaluations show that, through the selection of linguistic attributes, an author can be described within his texts; and even more, the usage of texts from an author's textbook, like Bassewitz's Peterchen's Mondfahrt, may certainly be suitable to characterize his style and to allow scoring of texts in this microcosm. For the authors Parkin and Bassewitz this can be observed, as Parkin's style is documented by a couple of attributes sharing an individual distributive behavior, whereas Bassewitz, for example, consistently uses words that occur seldom. To extend these tests to a larger text corpus, to enrich the given set of attributes with subjective linguistic attributes, and to generalize our results to other text corpora will be among our next responsibilities. In this respect, we understand a subjective linguistic attribute as a personal statement of the author himself, like for example I believe that or I certainly agree, used to express personal beliefs and intentions. Furthermore, open questions remain, such as how representative these results are; and the appropriateness of a scoring engine that, for example, assigns a text of Parkin correctly has not been tested yet. Our next steps will address these questions, and we will also follow up with further examinations. Generally, we strongly believe in this way of style analysis and author recognition, and hope to discover attributes that uniformly support our hypothesis.
Figure 1: Received list of independent attributes for the genre Fairy Tale, plotted in a quadratic and symmetric comparison matrix.
Figure 2: Bivariate statistics with three variables against the genre (Fairy Tale (top), Common Speech (middle), and Technical Language (bottom)).
Figure 3: The Parkin cluster, genre Fairy Tale.
Figure 4: The Bassewitz cluster and the Leander clusters, genre Fairy Tale.
This work has been supported by the University of Luxembourg within the project TRIAS - Logic of Trust and Reliability for Information Agents in Science
see Wortschatz -http://wortschatz.uni-leipzig.de/
G. Avneri, S. Argamon, M. Koppel: Routing documents according to their style. Intl. Workshop on Innovative Internet Information Systems, 1998.
T. F. Baayen, H. v. Halteren: Outside the cave of shadows: using syntactic annotation to enhance authorship attribution. Literary and Linguistic Computing, (3):121-130, 1996.
H. Baayen, H. von Halteren, F. Tweedie: Outside the cave of shadows: using syntactic annotation to enhance authorship attribution. Literary and Linguistic Computing, (3):121-130, 1996.
C. Maskol: Curves of pauline and pseudo-pauline style I+II. Unitarian Review, 30:452-460, 1988.
D. Khmelev: Distributed authorship resolution using relative entropy for Markov Chain of letters in texts. 4th Intl. Conference on Quantitative Linguistics Association, 2000.
F. Dimpel: Computergestützte textstatistische Untersuchungen an mittelhochdeutschen Texten. Tübingen: Francke, 2004.
P. Dixon, D. Mannion: Goldsmith's Periodical Essays: A Statistical Analysis of Eleven Doubtful Cases. Literary and Linguistic Computing, 8:1-19, 1993.
F. Smadja: The missing link. Journal of the Association for Literary and Linguistic Computing, 4(3), 1989.
D. Holmes, R. S. Forsyth: Features finding for text classification. Literary and Linguistics Computing, 11(4):163-174, 1996.
D. L. Hoover: Another Perspective on Vocabulary Richness. Journal on Computers and the Humanities, Springer, pp. 151-178, 2004.
J. Schler, M. Koppel: Exploiting stylistic idiosyncrasies for authorship attribution. In: Proceedings of the IJCAI'03 Workshop on Computational Approaches to Style Analysis and Synthesis, 2003.
N. M. Laan: Stylometry and Method. The Case of Euripides. Oxford Journals, Literary and Linguistic Computing, pp. 271-278, 1995.
G. Ledger: Re-Counting Plato: A Computer Analysis of Plato's Style. Clarendon Press, 1990.
D. Mannion, P. Dixon: Authorship Attribution: the Case of Oliver Goldsmith. Journal of the Royal Statistical Society (Series D): The Statistician, 46:1-18, 1997.
D. L. Wallace, F. Mosteller: Applied Bayesian and classical inference. Springer, 1984.
T. McEnery, M. Oakes: Authorship Identification and Computational Stylometry. In: Handbook of Natural Language Processing, pp. 545-562, 2000.
M. A. Queen: Literary Detection. How to prove Authorship and Fraud in Literature and Documents. New York, 1978.
M. P. Oakes: Statistics for Corpus Linguistics. Edinburgh Textbooks in Empirical Linguistics. Edinburgh University Press, 1998.
J. Rudman: The State of Authorship Attribution Studies. Some Problems and Solutions. Kluwer Academic Publishers, 1998.
E. Stamatatos, N. Fakotakis, G. Kokkinakis: Computer-based Authorship Attribution Without Lexical Measures. Journal on Computers and the Humanities, Springer, 35:193-214, 2001.
G. Kokkinakis, E. Stamatatos, N. Fakotakis: Automatic text categorization in terms of genre and author. Computational Linguistics, 26(4):471-495, 2000.
M. W. Smith: Recent experience and new developments of methods for the determination of authorship. Association for Literary and Linguistic Computing Bulletin, 11:73-82, 1983.
T. Mendenhall: The characteristic curves of composition. Science, 214:237-249, 1887.
D. R. Tallentire: An appraisal of methods and models in computational stylistics, with particular reference to author attribution. PhD Thesis, University of Cambridge, 1972.
| [] |
[
"Incorporating Word Embeddings into Open Directory Project based Large-scale Classification",
"Incorporating Word Embeddings into Open Directory Project based Large-scale Classification"
] | [
"Kang-Min Kim kangmin89@korea.ac.kr \nKorea University\nSeoulRepublic of Korea\n",
"Aliyeva Dinara dinara_aliyeva@korea.ac.kr \nKorea University\nSeoulRepublic of Korea\n",
"Byung-Ju Choi \nKorea University\nSeoulRepublic of Korea\n",
"Sangkeun Lee \nKorea University\nSeoulRepublic of Korea\n"
] | [
"Korea University\nSeoulRepublic of Korea",
"Korea University\nSeoulRepublic of Korea",
"Korea University\nSeoulRepublic of Korea",
"Korea University\nSeoulRepublic of Korea"
] | [] | Recently, implicit representation models, such as embedding or deep learning, have been successfully adopted to text classification task due to their outstanding performance. However, these approaches are limited to small-or moderate-scale text classification. Explicit representation models are often used in a large-scale text classification, like the Open Directory Project (ODP)-based text classification. However, the performance of these models is limited to the associated knowledge bases. In this paper, we incorporate word embeddings into the ODPbased large-scale classification. To this end, we first generate category vectors, which represent the semantics of ODP categories by jointly modeling word embeddings and the ODP-based text classification. We then propose a novel semantic similarity measure, which utilizes the category and word vectors obtained from the joint model and word embeddings, respectively. The evaluation results clearly show the efficacy of our methodology in large-scale text classification. The proposed scheme exhibits significant improvements of 10% and 28% in terms of macroaveraging F1-score and precision at k, respectively, over state-of-the-art techniques. | 10.1007/978-3-319-93037-4_30 | [
"https://arxiv.org/pdf/1804.00828v1.pdf"
] | 4,624,276 | 1804.00828 | b80fff1be28f97089582b9c99547c98d007cc1d2 |
Incorporating Word Embeddings into Open Directory Project based Large-scale Classification
Kang-Min Kim kangmin89@korea.ac.kr
Korea University
SeoulRepublic of Korea
Aliyeva Dinara dinara_aliyeva@korea.ac.kr
Korea University
SeoulRepublic of Korea
Byung-Ju Choi
Korea University
SeoulRepublic of Korea
Sangkeun Lee
Korea University
SeoulRepublic of Korea
Incorporating Word Embeddings into Open Directory Project based Large-scale Classification
Text Classification, Word Embeddings
Recently, implicit representation models, such as embedding or deep learning, have been successfully adopted to text classification task due to their outstanding performance. However, these approaches are limited to small-or moderate-scale text classification. Explicit representation models are often used in a large-scale text classification, like the Open Directory Project (ODP)-based text classification. However, the performance of these models is limited to the associated knowledge bases. In this paper, we incorporate word embeddings into the ODPbased large-scale classification. To this end, we first generate category vectors, which represent the semantics of ODP categories by jointly modeling word embeddings and the ODP-based text classification. We then propose a novel semantic similarity measure, which utilizes the category and word vectors obtained from the joint model and word embeddings, respectively. The evaluation results clearly show the efficacy of our methodology in large-scale text classification. The proposed scheme exhibits significant improvements of 10% and 28% in terms of macroaveraging F1-score and precision at k, respectively, over state-of-the-art techniques.
Introduction
Text classification is the process of determining and assigning topical categories to text. It plays an important role in many web applications, such as contextual advertising [1], topical web search [2], and web search personalization [3]. Usually, text classification requires a sufficiently large taxonomy of topical categories to capture various topics in arbitrary texts. In addition, it is necessary to collect a large amount of training data for each category in the taxonomy.
Many studies have utilized an implicit representation model [4], such as embedding [5,6,7] or a deep neural network [8], which adopts dense semantic encodings and measures semantic similarity accordingly. Implicit representation models have been successfully adopted for text classification task. Such implicit representation models, however, may perform poorly in a large-scale text classification (as we shall show in Section 5.4). This is largely attributed to the fact that the training data for each category is relatively insufficient and distributed unevenly among classification categories. In addition, such approaches are not intuitively interpretable to humans.
In another line of work, many studies have been done with an explicit representation model [4], which uses popular knowledge bases, such as ProBase, Wikipedia, or the Open Directory Project (ODP) 1 . Because the explicit model represents knowledge in terms of vectors that are interpretable to both humans and machines, it is relatively easy for humans to tune and understand it. Another advantage of the explicit representation model is that it enables a large-scale text classification with the direct representation of a large-scale knowledge taxonomy already built-in.
To handle the large-scale text classification, several works [1,9,10] have utilized the ODP, which is a large-scale and taxonomy-structured web directory. These studies have used their explicit representation of text to represent ODP knowledge, based on bag-of-words [1,10] or bag-of-phrases [9] to develop ODPbased text classification techniques. They showed that ODP-based text classification techniques are effective at the large-scale text classification. The performance of previous ODP-based text classification, however, is limited to ODP and/or Wikipedia knowledge bases.
To alleviate the limitation of ODP-based text classification, we incorporate word embeddings into the ODP-based text classification. To this end, we propose two novel joint models of ODP-based classification and word2vec, a representative word embeddings technique. The joint models seek to project both words and ODP categories into the same vector space. Therefore, category vectors of ODP categories successfully identify words learned from external knowledge. In addition, we effectively measure the semantic relatedness between an ODP category and a document by utilizing both category and word vectors. In summary, our contributions are three-fold:
-We propose a novel methodology to handle the large-scale text classification, which utilizes both the explicit and implicit representation.
-We develop two novel joint models of ODP-based classification and word2vec to generate category vectors that represent the semantics of ODP categories. In addition, we develop a new semantic similarity measure that utilizes both the category and word vectors.
-We demonstrate the efficacy of the proposed methodology through extensive experiments on real-world datasets. The performance evaluation clearly shows that our approach significantly outperforms the state-of-the-art techniques in terms of macro-averaging F1-score and precision at k.
The remainder of this paper is organized as follows. We briefly describe the ODP-based knowledge representation and word2vec in Section 2. Section 3 describes the joint models of ODP-based classification and word2vec to generate category vectors. Section 4 details the similarity measure between a category and document. We present the performance evaluation results in Section 5. We discuss related research and conclude this work in Sections 6 and 7, respectively.
Preliminary
ODP-based Knowledge Representation
We employ the ODP-based text classification scheme [1] as our explicit representation model. To compute the centroid $\vec{\mu}_i$ of category $c_i$, we calculate the averaged term vector of all ODP documents as:
$$\vec{\mu}_i = \frac{1}{|D_{c_i}|} \sum_{d \in D_{c_i}} \vec{d} \qquad (1)$$
where $D_{c_i}$ is the set of ODP documents in $c_i$, and $\vec{d}$ is a term vector weighted by tf-idf values. Due to the large-scale taxonomy structure of the ODP, however, each ODP category contains a different number of documents, sometimes resulting in sparsity or unavailability of training documents in a category. This issue is addressed in [1,10], which merge the centroids $\vec{\mu}_i$ of the descendant categories to build a classifier. As a result, this approach outperforms all other ODP-based text classifiers and exhibits a stable performance in large-scale text classification [1,10]. Therefore, we utilize this approach to compute the centroid $\vec{\mu}_i$ of category $c_i$. For example, as shown in Table 1, the category $c_1$, Society/Government/President, is explicitly represented by the centroid vector. Given the document $d$ "Trump became prez", however, the ODP-based classification may not be able to classify $d$ into the category $c_1$. This is because this approach cannot capture the semantic relations between words (e.g., prez and president).
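For illustration, Eq. (1) can be computed from the documents' tf-idf vectors, e.g. with scikit-learn; the sketch below omits the merging of descendant centroids described above:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def category_centroids(docs, labels):
    """docs: list of raw document strings; labels: ODP category id per document."""
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(docs)            # (n_docs, n_terms) sparse tf-idf matrix
    centroids = {}
    for c in set(labels):
        idx = [i for i, l in enumerate(labels) if l == c]
        centroids[c] = np.asarray(X[idx].mean(axis=0)).ravel()   # Eq. (1): average of document vectors
    return centroids, vectorizer
```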
Word2Vec
To complement the ODP-based classification, we adopt word2vec [5,6], a popular word embeddings technique. In word2vec, each word vector is trained using a shallow neural network language model, such as continuous bag-of-words (CBOW) or Skip-gram [5]. Skip-gram aims to predict context words given a target word in a sliding window. Mathematically, given a sequence of training words $w_1, w_2, w_3, \ldots, w_T$, the objective of Skip-gram is to maximize the following average log probability:
$$\frac{1}{T}\sum_{t=1}^{T} \sum_{c=t-k,\; c \neq t}^{t+k} \log p(w_c \mid w_t) \qquad (2)$$
where $k$ is the size of the context window centered at the target word, and $w_t$ and $w_c$ are the target and context words, respectively.
Trained word vectors with similar semantic meanings would be located at high proximity within the vector space. For example, the word vectors of president and prez would be located close to each other. On the other hand, the word vectors of president and casino would be located much more distantly in the embedding space. In addition, word vectors can be composed by an element-wise addition of their vector representations, e.g., Russian + river = Volga River. This property of the vectors is called "additive compositionality" [6]. Due to the simple structure of word2vec, many previous studies have proposed variants of the word2vec model to go beyond the word-level to achieve document-, topic-, or concept-level representations [7,11].
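For reference, one way to train the Skip-gram model of Eq. (2) is an off-the-shelf implementation such as gensim; the toy corpus below is a placeholder, and only the dimensionality, window size, and number of negative samples follow the settings reported later in this paper:

```python
from gensim.models import Word2Vec

# Toy corpus; the actual models are trained on the ODP text plus the
# One Billion Word benchmark, with a realistic min_count.
corpus = [
    ["us", "president", "trump", "urged", "congress"],
    ["the", "president", "addressed", "congress"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=300,   # gensim >= 4.0 (earlier versions use `size`)
    window=5,
    sg=1,              # Skip-gram objective of Eq. (2), rather than CBOW
    negative=15,       # negative sampling
    min_count=1,       # keep every token in this toy example
)

vec = model.wv["president"]                        # a 300-dimensional word vector
print(model.wv.most_similar("president", topn=3))  # semantically close words
```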
Joint Models of Explicit and Implicit Representation
In this section, we describe two joint models of ODP-based text classification and word2vec. These joint models generate category vectors, which represent the semantics of ODP categories. Each category vector not only semantically encodes the explicitly expressed ODP category, but also understands semantically related words that do not appear in the ODP knowledge base. This is because they are projected into the same semantic space as word vectors learned in an additional volume of knowledge outside the ODP.
Generating Category Vector with Algebraic Operation
Given the centroid vector of an ODP category and word vectors of the pretrained word2vec model, our first approach generates the category vector by using the vector scalar multiplication and vector addition methods, as follows.
First, we multiply the term weights of each word in the ODP category by each word vector of the words. Second, the weighted word vectors are composed as a category vector using element-wise addition. This type of vector algebra is quite simple, but it can also clearly represent the semantics of an ODP category. This is because word vectors are not only multiplied by a precisely trained term weight from the centroid vector, but also have additive compositionality.
The logic for generating the category vector of the ODP category is as follows:
$$\vec{C}_i = \sum_{w \in W_i} \vec{\mu}_i(w) \cdot \vec{w} \qquad (3)$$
where $\vec{C}_i$ is the category vector of $c_i$, $W_i$ is the set of words of $c_i$, $\vec{w}$ is
the word vector (obtained from the pre-trained word2vec model) of word $w$, and $\vec{\mu}_i(w)$ is the term weight of $w$ in $c_i$. For example, in Figure 1(a), the word vectors of president, government, and trump are multiplied by 0.44, 0.31, and 0.10, respectively, and the weighted word vectors are then added. Finally, we obtain the category vector of the category Society/Government/President. Vector representations of documents to be classified are generated in the same manner.
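A minimal sketch of Eq. (3), assuming the centroid term weights are available as a dictionary and the word vectors come from a pre-trained word2vec model:

```python
import numpy as np

def category_vector(term_weights, wv, dim=300):
    """Eq. (3): weighted sum of word vectors.

    term_weights: {word: mu_i(word)} for category c_i
    wv: mapping from word to its word vector (e.g. gensim's model.wv)
    """
    C = np.zeros(dim)
    for word, weight in term_weights.items():
        if word in wv:                       # skip out-of-vocabulary terms
            C += weight * np.asarray(wv[word])
    return C

# Example from Figure 1(a):
# C_president = category_vector({"president": 0.44, "government": 0.31, "trump": 0.10}, model.wv)
```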
Generating Category Vector with Embedding
Our second approach extends word2vec to represent category vectors, instead of using the pre-trained word2vec model to compose word vectors in ODP categories. We first assign appropriate ODP categories for each word in a text corpus. Then, we train the category vectors of the assigned ODP categories by applying a modified Skip-gram model. The category vector of an ODP category is expected to represent the collective semantics of words under this category. The process of generating category vectors with embedding is as follows. First, we identify candidate ODP categories for the target word. If an ODP category is largely associated with the target word, the ODP-based text classification selects this category as a candidate. The ODP-based text classification determines the degree of association by using the term weight of the target word in each ODP category. For example, when Trump is the target word, the ODP-based classification identifies categories such as Game/Gambling and Society/Government/President, as shown in Figure 1(b). We then select the most appropriate ODP category in the current context by using the ODP-based text classification. For example, when the context is "US President Trump urged congress", the most appropriate category is Society/Government/President. Finally, we apply the modified Skip-gram algorithm, which trains the category vector corresponding to the most appropriate category.
The objective of category embedding is to maximize the following average log probability:
$$\frac{1}{T}\sum_{t=1}^{T} \sum_{c=t-k,\; c \neq t}^{t+k} \log p(w_c \mid w_t)\, p(w_c \mid c_t) \qquad (4)$$
Unlike the Skip-gram model, where the target word $w_t$ is used only to predict context words, the category embedding model also uses the ODP category $c_t$ of the target word to predict context words.
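The assignment of an ODP category to each target word is performed with the ODP-based classifier itself; the simplified sketch below only illustrates the two selection steps (candidate generation by term weight, then disambiguation against the context window). The candidate cut-off and the bag-of-words context score are assumptions made for illustration:

```python
def candidate_categories(word, centroids, top_m=5):
    """Categories in which `word` has the largest term weight (candidate generation).

    centroids: {category: {term: weight}} sparse centroid vectors.
    """
    scored = [(c, weights.get(word, 0.0)) for c, weights in centroids.items()]
    scored = [(c, w) for c, w in scored if w > 0.0]
    return [c for c, _ in sorted(scored, key=lambda x: -x[1])[:top_m]]

def best_category(word, context_tokens, centroids):
    """Pick the candidate whose centroid overlaps most with the current context window."""
    candidates = candidate_categories(word, centroids)
    if not candidates:
        return None
    def context_score(c):
        return sum(centroids[c].get(tok, 0.0) for tok in context_tokens)
    return max(candidates, key=context_score)

# best_category("trump", ["us", "president", "urged", "congress"], centroids)
# -> e.g. Society/Government/President rather than Game/Gambling
```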
Semantic Similarity Measure
We develop a novel semantic similarity measure, on the basis of category and word vectors, which captures both the semantic relations between words and the semantics of ODP categories.
Using Word-level Semantics
First, we propose a semantic similarity measure that considers word-level semantics by using only the word vectors. The word vectors can be used to calculate the semantic relatedness between two words. The key idea of this measure is to align words with similar meanings in a category and document, although the words represented in this category and document are different.
Before describing the proposed measure, we explain how to compute the similarity between category $c_i$ and document $d$ by means of the existing ODP-based text classification as follows:
$$\cos(c_i, d) = \frac{\sum_{j=1}^{n_{c_i}} \sum_{k=1}^{n_d} \delta(w_j - w_k) \cdot \vec{\mu}_i(w_j) \cdot \vec{d}(w_k)}{\|\vec{\mu}_i\| \cdot \|\vec{d}\|} \qquad (5)$$
where $w_j$ and $w_k$ are non-zero terms in the centroid vector $\vec{\mu}_i$ of $c_i$ and the term vector $\vec{d}$, respectively, while $n_{c_i}$ and $n_d$ are the numbers of non-zero terms in $\vec{\mu}_i$ and $\vec{d}$, respectively. $\delta(\cdot)$ is the Dirac function defined by $\delta(0) = 1$ and $\delta(x) = 0$ otherwise [12].
The cosine similarity between the centroid vector of a category and the term vector of a document can only increase when $w_j$ and $w_k$ are equal. However, in Table 1, we observe that prez has a very similar meaning to president, which is a very important word in the category Society/Government/President. Therefore, we propose a new measure that increases the similarity between the proper $\vec{\mu}_i$ and $\vec{d}$ by utilizing word2vec. By substituting the Dirac function $\delta(\cdot)$ with the word similarity $\phi(\cdot)$, it is possible to consider the semantic relatedness between two words and calculate the weight more densely:
$$\mathrm{sim}(c_i, d) = \frac{\sum_{j=1}^{n_{c_i}} \sum_{k=1}^{n_d} \phi(w_j, w_k) \cdot \vec{\mu}_i(w_j) \cdot \vec{d}(w_k)}{\|\vec{\mu}_i\| \cdot \|\vec{d}\|} \qquad (6)$$
where $\phi(\cdot)$ is the word similarity function. Given two words $w_j$ and $w_k$, we define the word similarity function $\phi(w_j, w_k)$ in Eq. (6) as follows:
$$\phi(w_j, w_k) = \begin{cases} \cos(\vec{w}_j, \vec{w}_k) & \text{if } \cos(\vec{w}_j, \vec{w}_k) > \theta, \\ 0 & \text{otherwise} \end{cases} \qquad (7)$$
where $\vec{w}_j$ and $\vec{w}_k$ are the word vectors of $w_j$ and $w_k$, $\cos(\vec{w}_j, \vec{w}_k)$ is the cosine similarity between $\vec{w}_j$ and $\vec{w}_k$, and $\theta$ is a threshold, which is empirically set to 0.6 in our analysis. The similarity between $\vec{\mu}_i$ and $\vec{d}$ thus increases not only when $w_j$ and $w_k$ are equal, but also when they have similar semantics. For example, prez and president have highly similar semantics in Table 1. The semantic similarity using word-level semantics is therefore additionally increased by $0.51 \times 0.44 \times \phi(\textit{prez}, \textit{president})$, unlike the original cosine similarity.
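A minimal sketch of Eqs. (6) and (7), assuming sparse term-weight dictionaries for the centroid and the document and a word-vector lookup such as a trained word2vec model:

```python
import numpy as np

def phi(w_j, w_k, wv, theta=0.6):
    """Eq. (7): thresholded cosine similarity between two word vectors."""
    if w_j == w_k:
        return 1.0              # identical terms always match (assumption for OOV words)
    if w_j not in wv or w_k not in wv:
        return 0.0
    u, v = np.asarray(wv[w_j]), np.asarray(wv[w_k])
    cos = float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return cos if cos > theta else 0.0

def sim(mu_i, d, wv, theta=0.6):
    """Eq. (6): semantic similarity between a category centroid and a document.

    mu_i, d: {term: tf-idf weight} for the category centroid and the document.
    """
    num = sum(mu_i[wj] * d[wk] * phi(wj, wk, wv, theta) for wj in mu_i for wk in d)
    norm = np.linalg.norm(list(mu_i.values())) * np.linalg.norm(list(d.values()))
    return num / norm if norm else 0.0

# sim({"president": 0.44, "government": 0.31, "trump": 0.10},
#     {"trump": 0.67, "prez": 0.51}, model.wv)
```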
Using Category-and Word-level Semantics
In this paper, we develop a robust similarity measure by utilizing both the category and word vectors. A category vector is utilized as a pseudo word in the process of computing semantic similarity. A new measure can be expressed as follows:
$$\mathrm{sim}'(c_i, d) = \frac{\sum_{j=1}^{n_{c_i}+1} \sum_{k=1}^{n_d+1} \phi(w_j, w_k) \cdot \vec{\mu}_i(w_j) \cdot \vec{d}(w_k)}{\|\vec{\mu}_i\| \cdot \|\vec{d}\|} \qquad (8)$$
In Eq. (8), the category vector is inserted into the corresponding category as the $(n_{c_i}+1)$-th word. This is motivated by the fact that category vectors share the same semantic space with word vectors. Similarly, the document vector is inserted into the corresponding document as the $(n_d+1)$-th word. We will examine how to insert a category vector as a pseudo word by determining the weight (i.e., pseudo term weight) α of the category vector through many parameter experiments in Section 5.4. In addition to the ODP dataset, we train our category embedding model and word2vec model on the "One Billion Word Language Modeling Benchmark" dataset released by Google 2 . The word and category vectors are 300-dimensional, while the window size is set to 5 with 15 negative samples.
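Continuing in the same spirit, Eq. (8) can be sketched by appending the category vector (and, on the document side, the document vector) as one additional pseudo word with term weight α before scoring. Whether the pseudo weights also enter the normalization is left implicit in Eq. (8); including them below is an assumption:

```python
import numpy as np

def cos(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def sim_prime(cat_pairs, doc_pairs, cat_vec, doc_vec, alpha=0.9, theta=0.6):
    """Eq. (8): every term is a (weight, vector) pair; the category and document
    vectors are appended as pseudo words with weight alpha (0.9 in our later experiments)."""
    cat = cat_pairs + [(alpha, cat_vec)]
    doc = doc_pairs + [(alpha, doc_vec)]
    def phi(u, v):
        c = cos(u, v)
        return c if c > theta else 0.0
    num = sum(wc * wd * phi(vc, vd) for wc, vc in cat for wd, vd in doc)
    denom = np.linalg.norm([w for w, _ in cat]) * np.linalg.norm([w for w, _ in doc])
    return num / denom if denom else 0.0
```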
Experiments
Datasets
Test Datasets
We build two test datasets, ODP and NYT, to evaluate our methodology. The ODP test dataset consists of webpages collected from the original ODP. The webpages in each category are randomly divided into a training set and a test set at a ratio of seven to three. In particular, we build two kinds of ODP test datasets. In the large-scale classification task, we collect 24,121 webpages from 2,735 ODP categories in our taxonomy, while collecting 24,046 webpages from 13 ODP categories in the moderate-scale classification task. In addition to the ODP test datasets, we select six categories related to the New York Times: art, business, food, health, politics, and sports, as the source for our second test dataset. We randomly collect 20 news articles from each of these categories. Table 2 shows the statistics of datasets.
Evaluation Metrics
For the ODP test dataset, we use the macro-averaging precision, recall, and F1-score [13] as the classification performance metrics. We adopt macro-averaging, which assigns equal weights to each category instead of each test document, because the distribution of the ODP training dataset is highly skewed [1,10]. For the NYT test dataset, we use precision at k. Three participants manually assess the top-k ODP categories obtained by the text classifiers on three scales: relevant, somewhat relevant, and not relevant.
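For reference, both metrics can be computed as follows; this is only a sketch, and how the "somewhat relevant" judgments are counted for precision at k is not fixed here:

```python
from sklearn.metrics import precision_recall_fscore_support

def macro_scores(y_true, y_pred):
    """Macro-averaged precision, recall, and F1 over all categories."""
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro", zero_division=0)
    return p, r, f1

def precision_at_k(ranked_categories, relevant, k):
    """Fraction of the top-k predicted categories judged relevant for one document."""
    top_k = ranked_categories[:k]
    return sum(1 for c in top_k if c in relevant) / k
```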
Experimental Setup
We evaluate the performance of six methods. We adopt the ODP-based text classifier for our experiments, because the ODP-based text classification [1,10] outperforms many well-known classification methods such as Naive Bayes, k-Nearest Neighbor and Support Vector Machine. Other baselines include the paragraph vector [7] and convolutional neural networks-based text classifier [8], which are state-of-the-art methods on multi-class text classification. In our experiments, we compare the following methods:
-ODP (baseline): This is the ODP-based text classification only [1].
-PV (baseline): This is the text classification method using paragraph vectors [7]. The learned vector representations have 1000 dimensions. We represent ODP categories by averaging the document embeddings for each document in a category. We use the cosine similarity to calculate the similarity between a category and a document.
-CNN (baseline): This is the convolutional neural networks-based text classifier [8]. The dimension of word embedding is 300, and the number of filters for the CNN is 900. Weights other than the word embedding layer are initialized by the Gaussian distribution, with a mean of 0 and a standard deviation of 0.01. We use the ReLU for nonlinearity. Optimization is performed using SGD with a mini-batch size of 64 with RMSProp for acceleration.
-ODP-CV: This is our proposed text classification method using category vectors, which are generated by the joint model of ODP-based text classification and word2vec. We use the cosine similarity to calculate the similarity between a category and a document vector.
-ODP-WV: This is our proposed ODP-based text classification combined with the similarity measure of word-level semantics.
-ODP-CV+WV: This is our proposed ODP-based text classification combined with the similarity measure of both category- and word-level semantics.
Experimental Results
We first compare the two methods to generate category vectors with the ODP dataset (2,735 categories). In Table 3, ODP-CV (Algebra) denotes the text classification utilizing the category vector generated by algebraic operations, while ODP-CV (Embedding) denotes the text classification utilizing the category vector generated by embedding. Unexpectedly, we observe that the simple ODP-CV (Algebra) clearly outperforms the relatively elaborate ODP-CV (Embedding). Thus, we adopt ODP-CV (Algebra) in the remaining experiments, where it is simply denoted by ODP-CV. Next, we perform a parameter setting to determine the term weight α of a category vector as a pseudo word. Figure 2 shows the classification performance obtained by ODP-CV+WV for different α values. We find that the curve reaches a peak at α = 0.9. This result shows that the category vector plays a major role in the performance of ODP-CV+WV. However, we observe that when the weight of the category vector is 1.0, the performance drops sharply. This means that the word overlap feature is still helpful. In the remaining experiments, α is set to 0.9 for ODP-CV+WV. Table 4(a) summarizes the experimental results for text classification on the ODP test dataset with 2,735 target classes. We observe that ODP-CV+WV outperforms all the other proposed methods, as well as the baselines. ODP-CV+WV performs better than ODP by 9%, 12%, and 10% on average in terms of precision, recall, and F1-score, respectively. Our experimental results show that PV [7] performs worse than ODP. In addition, it turns out that CNN [8] performs the worst among the six methods. This can be explained by the fact that the distribution of webpages is skewed toward a few categories in the original ODP [1]. Actually, we observe that 73% of ODP categories contain fewer than five webpages.
We also compare the performance of CNN with the ODP baseline on the ODP test dataset with 13 target categories. From Table 4(b), we observe that CNN exhibits a better performance than ODP in the moderate-scale text classification. From Table 4, we confirm that CNN is indeed limited to the moderate-scale text classification. Table 5 shows the evaluation results on the NYT test dataset. Again, ODP-CV+WV outperforms ODP, PV, CNN, ODP-CV, and ODP-WV by 28%, 119%, 216%, 12%, and 10% in terms of precision at k on average, respectively. We also observe that both ODP-CV and ODP-WV outperform ODP. These results clearly demonstrate that both category and word vectors are effective for text classification. Specifically, ODP-CV+WV, which utilizes both category and word vectors, achieves the best performance in all experiments. We also perform the t-test for the classification results, and find that the ODP-CV+WV results are statistically significant with p < 0.01.
Analysis
We also qualitatively examine the meaning of category vectors to analyze why adding category vectors improves the performance of ODP-based text classification. From Table 6, we observe that the category vector expresses the meaning of category quite well. First, from the parent category Home/Cooking/Baking and Confections and child category Home/Cooking/Baking and Confections/Breads, we observe that their category vectors share the core semantically rich words (e.g., Recipe, Baking, Cookies), while they have their own unique semantically rich words (e.g., Dessert, Bread). These observations imply that the category vector actually understands the semantics better than the centroid vector.
Interestingly, we also observe that the category vector identifies semantically related words that do not appear in the ODP knowledge base (e.g., Henin, a Belgian former professional tennis player, in the category Sports/Tennis/Players). Thus, category vectors combined with the ODP-based classification successfully enable us to improve the performance of text classification.
Related Work
For the large-scale text classification, many approaches have been developed to handle data sparsity on a knowledge base. Data sparsity on a hierarchical taxonomy was first addressed in [14]. This work applied a statistical technique to estimate the parameters of data-sparse child categories with their data-rich ancestor categories. In [1,10], the authors proposed the merge-centroid (MC) classification that utilizes enriched training data for each category based on webpages classified into their ancestors and/or descendants in the ODP. In another line of work [9], the authors enriched semantic information in the ODP by incorporating another knowledge base, Wikipedia. A simple convolutional neural network approach [8] has been proven to be an effective text classifier. Still, it exhibits limitations in the large-scale text classification, which is verified in our analysis. A few works [15,16] have recently studied large-scale multi-label text classification using deep neural networks. However, they do not utilize the explicit representation model built from a knowledge base. To the best of our knowledge, our current work is one of only a few works that utilize both the explicit and implicit knowledge representation, which enables us to perform the large-scale text classification quite well.
Conclusion
In this paper, we have proposed novel joint models of the explicit and implicit representation techniques to handle the large-scale text classification. Specifically, we have incorporated the well-known word2vec model into the ODP-based classification framework. Our approach involves two tasks. First, we generate category vectors, which represent the semantics of ODP categories. Second, we develop a new semantic similarity measure that utilizes both category and word vectors. We have verified the large-scale classification performance of the proposed methodology using real-world datasets. The performance evaluation results confirm that our scheme significantly outperforms baseline methods. We plan to apply the proposed methodology to different applications, including contextual and mobile advertising.
Fig. 1: Illustration of category vector generation with algebraic operation (a) and embedding (b).
Fig. 2: Classification performance based on different α values.
Table 1: Example of ODP-based representation. A document d, "Trump became prez", to be classified, and a category c1, Society/Government/President, are considered.

vector                  trump   president   prez   government   ...
term vector of d         0.67       0       0.51        0       ...
centroid vector of c1    0.10       0.44    0.05        0.31    ...
Table 2: Statistics of datasets.

                                                      Training dataset   Test dataset
ODP (large-scale/moderate-scale)   No. Categories        2,735/13          2,735/13
                                   No. Webpages       52,046/51,856     24,121/24,046
NYT                                No. Articles             -                120

Training Datasets. We use the RDF dump from the original ODP dataset released on January 8, 2017, which contains 802,379 categories and 3,624,444 webpages. To obtain a well-organized ODP taxonomy, we apply heuristic rules suggested in [1] and build our own taxonomy with 2,735 categories. Thus, the final training dataset used in our experiments consists of 52,046 webpages. To construct the moderate-scale classification dataset, we use only 13 top-level categories from the ODP taxonomy by excluding two categories, Top/News and Top/Adult, which contain fewer than 100 webpages. Thus, the training dataset used in the moderate-scale classification consists of 51,856 webpages.
Table 3: Comparison of category vector generations on the ODP dataset (2,735 categories).

                      Precision   Recall   F1-score
ODP-CV (Algebra)        0.449     0.458     0.453
ODP-CV (Embedding)      0.278     0.195     0.230
Table 4: Classification performance on the ODP dataset.

(a) large-scale (2,735 categories)
             Precision   Recall   F1-score
ODP [1]        0.431     0.440     0.436
PV [7]         0.331     0.398     0.361
CNN [8]        0.402     0.232     0.294
ODP-CV         0.449     0.458     0.453
ODP-WV         0.451     0.440     0.446
ODP-CV+WV      0.468     0.494     0.481

(b) moderate-scale (13 categories)
             Precision   Recall   F1-score
ODP [1]        0.667     0.707     0.687
CNN [8]        0.736     0.661     0.696
Table 5: Classification performance on the NYT dataset (2,735 categories).
Table 6: Nearest words of category vector (Explicit + Implicit) and highly weighted words in centroid vector (Explicit) of ODP categories.

Category                                     Nearest Words of Category Vector   Highly Weighted Words in Centroid Vector
                                             (Explicit + Implicit)              (Explicit)
Home/Cooking/Baking and Confections          Recipe, Baking, Cookies, Cake,     Recipe, Baking, Cookies, Cake,
                                             Dessert, Cupcake, Bake, ...        Bake, Pastries, Bread, Mix, ...
Home/Cooking/Baking and Confections/Breads   Bread, Recipe, Baking, Flour,      Bread, Recipe, Sourdough,
                                             Biscuit, Cookies, Pancake, ...     Baking, Yeast, Quick, ...
Sports/Tennis/Players                        Tennis, Wimbledon, Nadal,          Tennis, Wimbledon, Winners,
                                             Henin, Federer, Sharapova, ...     Players, Detailed, Seed, ...
http://www.curlie.org
https://code.google.com/archive/p/word2vec/
Lee, J.H., Ha, J., Jung, J.Y., Lee, S.: Semantic contextual advertising based on the open directory project. ACM Trans. on the Web 7(4) (2013) 24:1-24:22
Broder, A., Fontoura, M., Gabrilovich, E., Joshi, A., Josifovski, V., Zhang, T.: Robust classification of rare queries using web knowledge. In: SIGIR. (2007) 231-238
Chirita, P.A., Nejdl, W., Paiu, R., Kohlschütter, C.: Using ODP metadata to personalize search. In: SIGIR. (2005) 178-185
Wang, Z., Wang, H.: Understanding short texts. In: ACL (Tutorial). (2016)
Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. In: ICLR (Workshop). (2013)
Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: NIPS. (2013) 3111-3119
Le, Q.V., Mikolov, T.: Distributed representations of sentences and documents. In: ICML. (2014) 1188-1196
Kim, Y.: Convolutional neural networks for sentence classification. In: EMNLP. (2014) 1746-1751
Shin, H., Lee, G., Ryu, W.J., Lee, S.: Utilizing Wikipedia knowledge in open directory project-based text classification. In: SAC. (2017) 309-314
Ha, J., Lee, J.H., Jang, W.J., Lee, Y.K., Lee, S.: Toward robust classification using the open directory project. In: DSAA. (2014) 607-612
Cheng, J., Wang, Z., Wen, J.R., Yan, J., Chen, Z.: Contextual text understanding in distributional semantic space. In: CIKM. (2015) 133-142
Song, Y., Roth, D.: Unsupervised sparse vector densification for short text similarity. In: NAACL. (2015) 1275-1280
Yang, Y.: An evaluation of statistical approaches to text categorization. Inf. Retr. 1(1) (1999) 69-90
McCallum, A., Rosenfeld, R., Mitchell, T.M., Ng, A.Y.: Improving text classification by shrinkage in a hierarchy of classes. In: ICML. (1998) 359-367
Nam, J., Kim, J., Mencìa, E.L., Gurevych, I., Fürnkranz, J.: Large-scale multi-label text classification - revisiting neural networks. In: ECML PKDD. (2014) 437-452
Kurata, G., Xiang, B., Zhou, B.: Improved neural network-based multi-label classification with better initialization leveraging label co-occurrence. In: NAACL. (2016) 521-526
| [] |
[
"Using Artificial Intelligence to Identify State Secrets",
"Using Artificial Intelligence to Identify State Secrets"
] | [
"Renato Rocha ",
"Souza ",
"Flavio Codeço Coelho ",
"Rohan Shah ",
"Matthew Connelly ",
"\nEscola de Matemática Aplicada\nDepartment of History\nFundação Getulio Vargas. Praia de Botafogo\n190 Rio de Janeiro -RJ22250-900Brasil\n",
"\nState Department's Bureau of Intelligence and Research (AINR.)\nColumbia University\n1180 Amsterdam Avenue10027New YorkNY, Bolivia, Brazil, Chile, Colombia, Ecuador, ParaguayPeru, Uruguay,\n"
] | [
"Escola de Matemática Aplicada\nDepartment of History\nFundação Getulio Vargas. Praia de Botafogo\n190 Rio de Janeiro -RJ22250-900Brasil",
"State Department's Bureau of Intelligence and Research (AINR.)\nColumbia University\n1180 Amsterdam Avenue10027New YorkNY, Bolivia, Brazil, Chile, Colombia, Ecuador, ParaguayPeru, Uruguay,"
] | [] | Whether officials can be trusted to protect national security information has become a matter of great public controversy, reigniting a long-standing debate about the scope and nature of official secrecy. The declassification of millions of electronic records has made it possible to analyze these issues with greater rigor and precision. Using machine-learning methods, we examined nearly a million State Department cables from the 1970s to identify features of records that are more likely to be classified, such as international negotiations, military operations, and high-level communications. Even with incomplete data, algorithms can use such features to identify 90% of classified cables with <11% false positives. But our results also show that there are longstanding problems in the identification of sensitive information. Error analysis reveals many examples of both overclassification and underclassification. This indicates both the need for research on inter-coder reliability among officials as to what constitutes classified material and the opportunity to develop recommender systems to better manage both classification and declassification. | null | [
"https://arxiv.org/pdf/1611.00356v1.pdf"
] | 15,401,573 | 1611.00356 | fe632e39a84a4d76ea819f6c49e03e44e856217c |
Using Artificial Intelligence to Identify State Secrets
Renato Rocha
Souza
Flavio Codeço Coelho
Rohan Shah
Matthew Connelly
Escola de Matemática Aplicada
Department of History
Fundação Getulio Vargas. Praia de Botafogo
190 Rio de Janeiro -RJ22250-900Brasil
State Department's Bureau of Intelligence and Research (AINR.)
Columbia University
1180 Amsterdam Avenue10027New YorkNY, Bolivia, Brazil, Chile, Colombia, Ecuador, ParaguayPeru, Uruguay,
Using Artificial Intelligence to Identify State Secrets
1
Whether officials can be trusted to protect national security information has become a matter of great public controversy, reigniting a long-standing debate about the scope and nature of official secrecy. The declassification of millions of electronic records has made it possible to analyze these issues with greater rigor and precision. Using machine-learning methods, we examined nearly a million State Department cables from the 1970s to identify features of records that are more likely to be classified, such as international negotiations, military operations, and high-level communications. Even with incomplete data, algorithms can use such features to identify 90% of classified cables with <11% false positives. But our results also show that there are longstanding problems in the identification of sensitive information. Error analysis reveals many examples of both overclassification and underclassification. This indicates both the need for research on inter-coder reliability among officials as to what constitutes classified material and the opportunity to develop recommender systems to better manage both classification and declassification.
More than one hundred countries have adopted laws or administrative measures giving citizens a right to information about what governments do in their name. But certain categories are typically excluded, whether because of national security or personal privacy. Distinguishing between what citizens are entitled to know and what officials are obliged to withhold is therefore a growing challenge, one that is compounded by the rapid growth in the volume of government records. In the U.S. and other countries records managers and archivists are struggling to cope, especially because electronic records like e-mail require new methods for processing and preservation (Moss 2012). This quantitative growth and qualitative transformation poses profound questions about the nature of democratic accountability in the age of "big data." As the conditions for research in historical records change, it will require rethinking how we study international relations (Allen and Connelly 2016).
The Freedom of Information Act (FOIA) request that revealed former Secretary of State Hillary Clinton's private e-mail server has put these challenges in stark relief.
The FBI concluded that Clinton was "extremely careless" in overlooking sensitive information and failing to preserve public records (Comey 2016). Clinton's defenders argue that what is classified is "almost random," and that her practices were not unlike those of her predecessors (Lowell 2016). Both sides agree that it took too long to review (and redact) the 54,000 pages of e-mail she turned over to authorities, only for the FBI to re-open the inquiry ten days before the election when additional e-mail emerged. In response to another FOIA request for the e-mail of three of Clinton's top aides, which amounted to 450,000 pages, the State Department estimated that it would take seventy-five years to review all the material (LoBianco 2016). But this is just the tip of the iceberg. It is thought that the State Department generates two billion e-mail every single year (McAllister 2010).
Current declassification methods are clearly inadequate to cope with the volume of potentially sensitive information that is now being generated (Public Interest Declassification Board 2012).
But it is likely that many of these e-mail along with other electronic records will be lost before anyone has a chance to review them. The Office of the Inspector General found that the e-mail of many senior officials from just a few years ago may never be recovered because of lost passwords or corrupted files (State Department OIG 2016).
What can we reasonably expect from government officials charged with protecting sensitive information, and is it possible to develop systems that might help officials manage both classification and declassification more rapidly and more reliably? To date, there has been virtually no government investment in research that might answer these questions. The U.S. spent over $16 billion in 2015 to protect classified information. Just the increase in spending on secrecy over the last two years --$4.5 billion --is nearly ten times more than the government spends on administering the Freedom of Information Act (Department of Justice 2015), and more than ten times as much as the entire budget of the U.S. National Archives (ISOO 2015).
But we have yet to see a single controlled experiment to determine to what extent officials agree on what information should be classified.
Theorists have long speculated as to the reasons why even democratic regimes invest so much more in secrecy than transparency. Excessive secrecy, or "overclassification," has long been recognized as contrary to the ostensible purpose of protecting sensitive information. It dilutes the meaning of classification and undermines respect for security procedures (Commission on Protecting and Reducing Government Secrecy 1997). In recent years, the persistence of high levels of secrecy --notwithstanding the Obama administration's pledge to be the most transparent in history --has renewed interest in whether a preference for concealment is intrinsic to bureaucracy (Sagar 2013; Pozen 2010). As Max Weber famously argued:
This superiority of the professional insider every bureaucracy seeks further to increase through the means of keeping secret its knowledge and intentions.
Bureaucratic administration always tends to exclude the public, to hide its knowledge and action from criticism as well as it can. (Weber 1922)

In his classic history of official secrecy in Britain, David Vincent points out that Weber was really describing ideal types. Historical research reveals that not every bureaucracy has sought to maximize secrecy, and the amount of secrecy within the same bureaucracy can ebb and flow over time (Vincent 1999). In 1997, during a period of relatively greater transparency in U.S. foreign policy following the end of the Cold War, the landmark report of the Moynihan Commission on Government Secrecy also drew on historical research in concluding that secrecy is best seen as a form of regulation (Commission on Protecting and Reducing Government Secrecy 1997). The problem since the very founding of the Republic is that, unlike other forms of regulation, there is no effective mechanism for determining whether the Executive is managing secrecy appropriately (Hoffman 1981; Sagar 2013).

The release of the first generation of U.S. government electronic records presents new opportunities to analyze this problem using well-developed methods from natural language processing (NLP) and machine-learning. We report on the performance of algorithms developed to automatically identify sensitive information. We trained and tested these algorithms using some one million diplomatic cables from the 1970s that were originally marked as "secret,"
"confidential," "limited official use," or "unclassified." Cables were the main form of secure communications in this period, and since the original metadata include not only the classification level, but also the sender, recipient, date, and subject, these cables --and research analyzing these cables --are quite relevant to management challenges for newer types of communications, notably e-mail. They can show whether classification has historically been random or predictable, and analyzing the errors machines make in identifying sensitive information can help us understand the errors that humans make.
This article begins with a discussion of the materials and methods used in the experiment. We then identify the features most characteristic of classified communications. We analyze how different theories of official secrecy can help explain the data, and how computational methods yield new insights. We then present the results obtained when we apply these insights to different classification tasks, depending on whether we define secrecy more or less broadly.
Finally, we discuss the implications of our findings both in terms of the recent Clinton e-mail controversy and future prospects for technology that could better manage both classification and declassification.
Materials and Methods:
Our task was to use features from the metadata and the message text to predict whether a given document would be classified or unclassified. In essence, we take a set of observations in which the category membership is known to create a training set. We then use this data to train a statistical model that scores the probability of new observations taken from a random sample to belong to the same category. Ironically, in data science this is known as a classification task.
We obtained our data from the Central Foreign Policy Files (CFPF) of the U.S. State Department, which are available from the U.S. National Archives in the form of XML files on DVDs. The CFPF was also the source of the classified diplomatic cables released by Wikileaks in 2010-11. But the best available estimate is that the 251,000 Wikileaks cables constitute only 5-6% of all the cables produced in 2005-10 (Gill and Spirling 2015). Even that estimate requires assuming that the release constituted a random sample. But it seems unlikely that a random sample of U.S. diplomatic cables would feature more mentions of the Wrangel Islands than Russia.

There are also gaps in the declassified documents from the 1970s, and neither corpus includes the relatively few documents classified as "Top Secret." But the great advantage of using declassified historical data is that we can analyze the lacunae using both the appraisal records of archivists and the metadata that is available even for corrupted or still-classified records.
The CFPF is therefore a curated corpus, but the effect of prioritizing the preservation of historically significant records was to reduce the relative proportion that was unclassified.
Of the top forty TAGS ranked by the relative number of cables classified "secret," only one was from the Operations, Administration, or Consular Affairs categories, and that was for the Administration of the State Department's Bureau of Intelligence and Research (AINR).
Even these records, which archivists decided to retain, were relatively mundane --travel reservations and the like. The largest groups of cables that archivists decided not to preserve were those related to Visas, Personnel, and Financial Management (Langbart 2007).
In training the algorithms to identify classified communications, we also had to exclude non-cable records that did not include the message text in digital form, such as paper records, many of which were delivered via diplomatic pouch ("p-reel" records). In addition, there are 410,539 "withdrawn" cables where only the metadata has been declassified. We also had to exclude cables that had no text or just "n/a" for the features we used in building the classifier (see Table 5, and see also the Supplementary Information).
The available metadata shows that about the same percentage of the withdrawn cables were originally classified as secret as compared to the cables that were released (5.2% versus 5.3%).
The reason is that many records are withheld because they contain personally identifiable information and not national security information. Even fewer P-reel documents were originally classified secret (2.6%). Conversely, when we identified features in the metadata characteristic of the most secret communications, we found little difference in the rankings whether or not we included metadata from the withdrawn cables, and whether these communications were cables or p-reel records.
But there is one category of missing data that is quite distinct in terms of secrecy: 128,026
cables were meant to be preserved because they have TAGS deemed to be historically significant, but there is no message text for these cables in the State Department Automated Data System --only metadata. According to the U.S. National Archives, the State Department lost some of the data during migration between different hardware and software platforms, but "some telegrams were intentionally deleted from the electronic repository." (National Archives and Records Administration 2016). There are intriguing patterns in the missing data, to be discussed below.
For purposes of the experiment, the main problem with incomplete data is that it increases the difficulty of developing algorithms to automatically identify classified communications. But since we still have the metadata even for these withdrawn and incomplete records, we can use them
to begin analyzing what kinds of information is typically classified. We will do that by identifying the features in the metadata that are most useful in predicting which cables are secret, such as who sent or received the cable, what topic it concerned, and what keywords the authors used to categorize it. We then compute which embassies, topics, and keywords had the highest proportion of secret cables.
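Concretely, this kind of ranking can be computed directly from the metadata. The following is a minimal sketch using pandas; the column names ("sender_recipient", "origclass") and the "secret" label string are assumptions about how the XML metadata has been loaded, not the archive's actual schema.

```python
# Minimal sketch: rank embassies (sender/recipient), TAGS, or concepts by the
# share of their cables originally classified "secret". Column names are
# illustrative, not the archive's schema.
import pandas as pd

def secrecy_ranking(df: pd.DataFrame, field: str, min_cables: int = 100) -> pd.DataFrame:
    grouped = df.groupby(field)["origclass"]
    summary = pd.DataFrame({
        "total_cables": grouped.size(),
        "secret_cables": grouped.apply(lambda s: (s == "secret").sum()),
    })
    summary["percent_secret"] = 100.0 * summary["secret_cables"] / summary["total_cables"]
    # Filter out rare senders/keywords so one-off cables do not dominate the ranking.
    summary = summary[summary["total_cables"] >= min_cables]
    return summary.sort_values("percent_secret", ascending=False)

if __name__ == "__main__":
    toy = pd.DataFrame({
        "sender_recipient": ["NATO->STATE", "NATO->STATE", "CAIRO->STATE", "CAIRO->STATE"],
        "origclass": ["secret", "secret", "unclassified", "secret"],
    })
    print(secrecy_ranking(toy, "sender_recipient", min_cables=2))
```

The same function can be applied to the "concepts" or TAGS columns to produce rankings of the sort shown in Tables 1-3.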
In preparing the remaining 981,083 cables for the machine classification experiment, we carried out many standard NLP operations to process the raw textual data, starting with tokenization.
Compound names of places in textual fields were aggregated, enabling them to be treated as a single token (i. e. NEW YORK was transformed to NEWYORK). This step was especially important in the case of the from and to fields, which represent the names of the embassies.
These fields were aggregated in a new field, sender/recipient , for the vectorization process. We also eliminated all the trailing punctuation and words with length of 1. While underscores and hyphens in the middle of words were maintained, hyphenation was otherwise eliminated from textual fields. So too were stopwords using the NLTK (Bird 2009) English stopwords list.
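The preprocessing steps described above can be sketched as follows. This is an illustration rather than the authors' code: the gazetteer of compound place names is a made-up subset, and the NLTK stopword list must be downloaded separately.

```python
# Rough sketch of the preprocessing: aggregate compound place names, keep
# hyphens/underscores inside words, drop one-character tokens and stopwords.
import re
from nltk.corpus import stopwords  # requires a prior nltk.download("stopwords")

STOPWORDS = set(stopwords.words("english"))
COMPOUND_PLACES = ["NEW YORK", "TEL AVIV", "SAUDI ARABIA"]  # illustrative subset

def preprocess(text: str) -> list:
    text = text.upper()
    # Treat multi-word place names as a single token, e.g. NEW YORK -> NEWYORK.
    for place in COMPOUND_PLACES:
        text = text.replace(place, place.replace(" ", ""))
    # Keep hyphens and underscores in the middle of words, discard other punctuation.
    tokens = re.findall(r"\w+(?:[-_]\w+)*", text)
    # Drop tokens of length 1 and English stopwords.
    return [t for t in tokens if len(t) > 1 and t.lower() not in STOPWORDS]

print(preprocess("Arrived in NEW YORK re: the follow-up SALT talks"))
```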
Some of the available metadata indicated how these records were reviewed before release. But to test the feasibility of a system to recommend the appropriate classification under real world conditions, we only used data that would be available to a human --or a computer --before a cable is sent. In addition to the message text (the body field), we found the following features most useful in distinguishing more classified cables from the rest of the corpus: 1) sender/recipient, a combination of the "from" and "to" fields, which typically is an embassy or office; 2) concepts, standard keywords used to categorize each document; 3) subject, a short description; 4) office, the part of the State Department with responsibility for either originating or acting on a cable; 5) TAGS, for Traffic Analysis by Geography and Subject.
For each one of the features, and many feature combinations, we applied different forms of vectorization and tested the prediction power of the resulting vectors using several types of classifiers and ensembles. Most machine learning algorithms require fixed length vectors as inputs. For bodies of text, vectorization steps were performed using a Bag of Words (BoW) (Harris 1954) approach, using plain term counts or some form of weighting such as TfIdf (Term Frequency -Inverse Document Frequency, Spärck 1972).
All these vector transformations have parameters: the size of the vocabulary; the n-gram range taken in consideration (single tokens are unigrams; two consecutive tokens are bigrams, etc.); the choice of whether to eliminate stopwords; the minimum and maximum document frequencies; among others specific to each feature. We undertook extensive tests and hyperparameter optimization (Bergstra 2012) to determine the best parameters (Table S.4).
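For readers who want to reproduce this kind of search, a sketch in the spirit of random search (Bergstra 2012) using scikit-learn is shown below. The candidate values loosely mirror Table S4, but the pipeline, parameter grid, and the logistic-regression stand-in are illustrative assumptions, not the configuration actually used.

```python
# Illustrative hyperparameter search: randomly sample vectorizer settings
# (vocabulary size, n-gram range, minimum document frequency) and score each
# candidate with cross-validated ROC AUC.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

pipeline = Pipeline([
    ("bow", CountVectorizer()),
    ("clf", LogisticRegression(max_iter=1000)),
])
param_distributions = {
    "bow__max_features": [650, 5000, 8000, 15000],
    "bow__ngram_range": [(1, 1), (1, 2)],
    "bow__min_df": [2, 6],
}
search = RandomizedSearchCV(pipeline, param_distributions, n_iter=8,
                            scoring="roc_auc", cv=3, random_state=0)
# search.fit(texts, labels)  # texts: list of cable bodies, labels: 0/1 classified
```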
For this corpus TfIdf did not perform better than plain term frequencies in the BoW vectors.
Despite having bigger vocabulary sizes, the fields subject , concepts, and body have been represented using a vocabulary that only considers the top N words ordered by term frequency across the corpus. For the TAGS and Office fields, all distinct tokens that appeared in more than five cables were taken into account. For the new field sender/recipient, we have discarded those that appeared in fewer than six cables, which can be indicative of misspellings.
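A rough sketch of this per-field vectorization is given below, assuming the cables are held in a DataFrame whose column names match the feature names used here. The caps echo Table S4, but the code itself is not the authors' implementation.

```python
# Per-field Bag-of-Words vectorization with top-N vocabularies for free-text
# fields and low-frequency cutoffs for the remaining fields; the resulting
# blocks are concatenated into one sparse matrix.
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer

vectorizers = {
    "body":             CountVectorizer(max_features=15000, ngram_range=(1, 1)),
    "subject":          CountVectorizer(max_features=8000),
    "concepts":         CountVectorizer(max_features=650, ngram_range=(1, 2)),
    "tags":             CountVectorizer(min_df=6),
    "sender_recipient": CountVectorizer(min_df=6),
    "office":           CountVectorizer(min_df=6),
}

def vectorize(frame, fit=True):
    """Turn one DataFrame column per feature into a single sparse matrix."""
    blocks = []
    for field, vec in vectorizers.items():
        column = frame[field].fillna("")
        blocks.append(vec.fit_transform(column) if fit else vec.transform(column))
    return hstack(blocks).tocsr()
```

Keeping the fields as independent vectors (rather than concatenating the raw text) is what the paper later reports as the best-performing configuration.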
We have used cross validation to test the performance of different models and avoid over-fitting,
i.e. choosing a model that would perform poorly on data outside the sample. This entailed setting aside a random selection of cables and --after training our classifier with the rest of the data --testing the accuracy of its predictions for the randomly selected cables. We used the stratified k-fold method, maintaining the proportion of classes ("unclassified," "limited official use," "confidential," "secret") in each fold, with each fold serving in turn as the test set while the remaining folds are used for training.
We have chosen three folds for each classifier, trained with the training set and evaluated with the test set.
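A minimal sketch of this three-fold evaluation loop is shown below; X is assumed to be the assembled (sparse) feature matrix, y a binary array of labels, and the logistic regression is only a stand-in for whichever classifier is being tested.

```python
# Stratified 3-fold cross-validation, reporting mean ROC AUC across folds.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def evaluate(X, y, make_model=lambda: LogisticRegression(max_iter=1000)):
    scores = []
    skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(X, y):       # folds preserve class proportions
        model = make_model()
        model.fit(X[train_idx], y[train_idx])
        probs = model.predict_proba(X[test_idx])[:, 1]
        scores.append(roc_auc_score(y[test_idx], probs))
    return np.mean(scores)
```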
Rather than using just one classifier for this task, we tested twelve different classification models to evaluate their overall performance in terms of the AUC/ROC metric (Hanley 1982, Davis 2006, and see SI). Based on these tests, we built an ensemble of the seven best-performing algorithms using different weights for each: Stochastic Gradient Descent, Logistic Regression, Ridge, Bagging, Extremely Randomized Trees, AdaBoost, and Multinomial Naive Bayes. This can be likened to assembling experts and giving them more or less weight depending on their comparative reliability.
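Such an ensemble can be sketched with scikit-learn's VotingClassifier. The seven estimators below follow the list above, but the weights shown are placeholders: the tuned weights are determined empirically and are not reported here.

```python
# Weighted ensemble over the seven named algorithms; hard voting takes a
# weighted majority vote over each model's predicted class.
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              ExtraTreesClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression, RidgeClassifier, SGDClassifier
from sklearn.naive_bayes import MultinomialNB

ensemble = VotingClassifier(
    estimators=[
        ("sgd", SGDClassifier(random_state=0)),
        ("logreg", LogisticRegression(max_iter=1000)),
        ("ridge", RidgeClassifier()),
        ("bagging", BaggingClassifier(random_state=0)),
        ("extratrees", ExtraTreesClassifier(random_state=0)),
        ("adaboost", AdaBoostClassifier(random_state=0)),
        ("nb", MultinomialNB()),
    ],
    voting="hard",                  # weighted majority vote; soft voting would
    weights=[1, 2, 1, 1, 2, 1, 1],  # require probabilities from every model
)
# ensemble.fit(X_train, y_train); predictions = ensemble.predict(X_test)
```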
Upon determining that using all features as independent vectors produced the best results, we applied this method to four different tasks: (U)nclassified vs (L)imited Official Use, (C)onfidential, and (S)ecret; U, L vs C, S; U, L, C vs S; and U vs C, S (Table 4). One could make further adjustments to the parameters, but we do not believe they would bring more than marginal gains, and the generalizability of the model would be compromised. Instead, we believe that this model could be used with similar kinds of data to achieve similar results, such as future releases from the same Central Foreign Policy Files.
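The four groupings can be expressed as simple label mappings over the original classification level; the lower-case class strings below are an assumption about how the origclass field has been normalized.

```python
# The four binary tasks as label mappings applied to origclass.
import pandas as pd

TASKS = {
    "U vs L,C,S": {"unclassified": 0, "limited official use": 1, "confidential": 1, "secret": 1},
    "U,L vs C,S": {"unclassified": 0, "limited official use": 0, "confidential": 1, "secret": 1},
    "U,L,C vs S": {"unclassified": 0, "limited official use": 0, "confidential": 0, "secret": 1},
    "U vs C,S":   {"unclassified": 0, "confidential": 1, "secret": 1},  # L cables dropped
}

def make_labels(origclass: pd.Series, task: str) -> pd.Series:
    """Map origclass to a 0/1 target, dropping cables outside the task's scope."""
    return origclass.str.lower().map(TASKS[task]).dropna().astype(int)

print(make_labels(pd.Series(["Secret", "Unclassified", "Limited Official Use"]), "U vs L,C,S"))
```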
What Makes Secret Cables Different?
The scale and multi-dimensional nature of data from formerly classified diplomatic communications illustrates the need to use machine-learning methods to identify patterns and anomalies for otherwise hard-to-observe phenomena (Monroe et al. 2015). In this experiment, there were 40,700 possible features for each of the 981,083 records (Table S4). This much data precludes a purely qualitative analysis. Instead, it rewards an approach that allows for multiple combinations of variables and weighting each variable differently depending on the task at hand. And once we determine the features that are most useful in identifying classified records, we can look more closely at each one to indicate the kind of patterns that combine to create accurate predictions. 1
One feature we did not find to be useful in predicting whether a record was more or less likely to be classified was the month and year it was dispatched. This is partly because it is difficult to divide up the data temporally, since patterns of activity change with weekends and holidays, and some units --like months --are of variable length. But it's also because there was no clear trend or pattern in the proportion of classified diplomatic communications. The overall level of secrecy was little changed during the first two years of the Carter administration as compared to the time that Henry Kissinger was Secretary of State (see Fig. 1). This is surprising, considering how the context of diplomacy in the period 1973-78 was becoming increasingly hostile to official secrecy, after the revelations of the secret bombing of Cambodia, Watergate, and the Church Committee hearings on covert CIA operations. Shortly after assuming office in 1977, Jimmy Carter famously rejected Kissinger's penchant for secrecy, promising "a foreign policy that the American people both support and, for a change, know about and understand." Echoing Woodrow Wilson, he insisted that "Our policy must shape an international system that will last longer than secret deals. We cannot make this kind of policy by manipulation. Our policy must be open; it must be candid…" (Carter 1977).

In fact, over the period 1973-78 communications regarding the same kinds of subjects continued to be classified: military matters and international negotiations. The posts that transmitted the highest percentage of secret cables include officials negotiating the Strategic Arms Limitation Treaties and the Mutual and Balanced Force Reduction agreements, those reporting from or to military staff headquarters, and those working at two embassies --Cairo and Tel Aviv --that were a focus of U.S. diplomacy leading up to the Camp David Accords (see Table 1).
Until recently, even the strongest proponents of transparency conceded that diplomacy must sometimes be conducted in secret, particularly when it comes to the use of force and statecraft.
Woodrow Wilson himself conducted the negotiations for the Versailles Treaty behind closed doors even after promising "open covenants of peace, openly arrived at" in his famous Fourteen Points (Nicholson 1964). As François de Callières noted three centuries ago in a classic guide to diplomatic practice, secrecy is "the life of negotiations." (De Callières 1716) Even before the founding of the United States, the Continental Congress deemed the dispatches of the first American diplomats to be secret by default. The Constitutional Convention --itself conducted in secret --sought a strong executive who could act with secrecy and dispatch (Hoffman 1981). Jeremy Bentham argued that transparency made government more efficacious, but agreed it should be "suspended" if it were calculated "To favour the projects of an enemy." (Bentham 1843) While railing against the "growing evil" of official secrecy, Giuseppe Mazzini conceded it was necessary to "Let diplomacy have its secrets, for diplomacy is but a refined mode of modern warfare..." (Mazzini 1844) Diplomats have defended secrecy by arguing that confidential negotiations are an alternative to warfare, and far preferable (Nicholson 1964).
The same pattern of protecting records related to international negotiations and military operations is evident in an analysis of the 862 Concepts, or keywords, State Department officials used to organize diplomatic cables (see Table 2). But so too is the tendency --already evident in cables sent to the White House --for the most senior officials to pursue special protection for their own communications. Ironically, the already notorious problem of overclassification made this more difficult. It was not just outside critics like Jimmy Carter who complained about the problem of excessive secrecy. Richard Nixon expressed frustration at the way even categories like "top secret" had become so overused as to lose real value or meaning.
He favored creating a new designation indicative of an even higher level of exclusivity (Blanton 2003).
In 1974 the State Department created special designations for communications involving the most senior officials. Cables with the "CAT-C," "CAT-B," and "CAT-A" concepts continued to be highly classified during the Carter administration. Conversely, Concepts associated with few or no secret cables (<1%) are indicative of the kinds of subjects --like "music" and "meats" --that senior officials considered less important or less urgent (see Table 3).
There is also evidence that certain records were removed from the State Department Automated Data System. Intriguingly, the cables missing from the database also tend to be more highly classified, and often involve the most senior officials. Electronic messages classified as "Secret" were more than three times more likely to go missing compared to Unclassified and Limited Official Use messages (22% versus 6.5%). Kissinger was incensed when, later that month, a U.S. foreign service officer sent a cable that apparently confirmed this violation of U.S. law. Kissinger worried about how many people had already seen it. "That will leak in three months and it will come out that Kissinger overruled his pristine bureaucrats and violated the law….Everything on paper will be used against me." (Memorandum of Conversation 1975) (Kirkpatrick, 1979).
In fact, there is a striking regional variation in how much U.S. diplomats wrote about human rights --even more striking when we limit the analysis to countries that imported U.S. arms.
Results:
The great advantage of using a machine-learning approach to identify what specific records are more likely to be classified is that it can leverage many subtle interrelationships in multidimensional data. In this case, we used it to identify cables originally classified as Secret;
Confidential; Limited Official Use; and Unclassified, which indicate the degree of sensitivity accorded to these communications by the officials who drafted them. We grouped them in various ways to measure the performance of different classifiers depending on how broadly or how narrowly one defines a state secret. Of all the features, the relative frequency of different words in the body was the most useful in identifying sensitive information. High recall or precision (but not both) is achievable with some of the other features. For instance, it is possible to identify 95% of the cables that are classified as Secret, Confidential, or Limited Official Use just by using the sender/recipient data. But fully a third of the identified cables would be false positives. Other features produce better overall performance (see Fig. 4). But the best results came from combining all features as independent vectors. Table 4 shows several performance measures when we categorize sensitive information more or less broadly. This can be assessed in terms of accuracy (Walther 2005) (how often is the classifier correct?), recall (also known as the true positive rate: how many of the classified cables does it identify?), precision (how many of the identified cables were actually classified?), and the average f1 score (the harmonic mean of precision and recall) for the two classes. We also present the AUC or ROC Score (Hanley 1982) --the area under the curve when plotting the true positive rate (y-axis) against the false positive rate (x-axis) as you vary the threshold for assigning observations to a given class. All of these metrics are derived from the true/false positive and negative counts.
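These quantities can be computed directly with scikit-learn. The toy vectors below are stand-ins for real model output and are only meant to show which function corresponds to which column of Table 4.

```python
# Evaluation metrics for a hypothetical set of predictions.
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]                    # 1 = originally classified
y_pred  = [0, 1, 1, 1, 0, 0, 1, 0]                    # thresholded predictions
y_score = [0.1, 0.6, 0.9, 0.8, 0.4, 0.2, 0.7, 0.3]    # predicted probability of class 1

print("accuracy ", accuracy_score(y_true, y_pred))
print("precision", precision_score(y_true, y_pred))   # of cables flagged, how many were classified?
print("recall   ", recall_score(y_true, y_pred))      # of classified cables, how many were flagged?
print("avg f1   ", f1_score(y_true, y_pred, average="macro"))  # mean f1 over the two classes
print("ROC AUC  ", roc_auc_score(y_true, y_score))
```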
There is a notable improvement in performance when we exclude data that does not clearly belong to either category. The "Limited Official Use" designation has long been a cause for complaint. White House Executive Orders from the period classify as "Secret" information that "could reasonably be expected to cause serious damage to the national security," and include specific examples. "Confidential" was the classification for information that could cause "damage" but not "serious damage" (Federation of American Scientists 1972). Limited Official Use, on the other hand, had no such definition. When the George W. Bush administration attempted such a definition, the Government's leading FOIA expert admitted it was "so broad that it is almost harder to think of information that does not fall within its scope than information that does" (Metcalfe 2009). This is consistent with our inability to distinguish these cables despite using many different features and multiple classifiers. It is not clear whether any NLP or machine learning methods would accurately distinguish this category.
But the results also show there is some logic in official secrecy, at least to the extent that State Department cables from the 1970s are indicative. Even when we include Limited Official Use cables, just a few kinds of metadata and the relative frequency of words in the message text are enough to identify 90% of classified cables with relatively few (<11%) false positives. Whether a diplomatic communication should be classified may therefore be relatively predictable --more so than, say, predicting heart disease by data-mining medical records (Chaurasia 2014).
If we were able to use all the diplomatic cables from the period, including both those that were lost and those that are still classified, we would likely achieve higher accuracy. But even with all of the data, these methods will sometimes miss crucial contextual elements, such as whether the subject of a seemingly innocuous travel reservation or visa application was on a sensitive mission (See Fig. 5). Other errors might instead reflect the intrinsic subjectivity of at least some classification decisions, especially considering that officials often receive inconsistent or inadequate guidance.
When we began to examine the false positives and false negatives --i.e. the cables we predicted would be classified, but were unclassified, and vice versa --we found many instances of human error.
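This kind of error analysis can be sketched as a ranking of nominally unclassified cables by the model's estimated probability of being classified, which surfaces likely mislabelled or under-protected records. The DataFrame and column names are illustrative.

```python
# Surface candidate errors: unclassified cables the model is most confident
# should have been classified.
import pandas as pd

def suspicious_unclassified(frame: pd.DataFrame, probs, top_n: int = 20) -> pd.DataFrame:
    """probs: model-estimated probability that each cable should be classified."""
    frame = frame.assign(p_classified=probs)
    unclassified = frame[frame["origclass"] == "unclassified"]
    return unclassified.sort_values("p_classified", ascending=False).head(top_n)
```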
Discussion:
There is an upper limit to what any supervised learning algorithm can achieve in machine classification: it cannot be more accurate than the intrinsic consistency of the data allows. To the extent humans misclassify their communications --or simply disagree about how they should be classified --so too will algorithms.
In the debate over the Clinton email, many appear to assume that the post-hoc review created what data scientists would call "ground truth" in identifying classified communications. But one could argue that Clinton and her aides were no less expert in recognizing what communications deserved to be classified and contained on a secure system. It might instead be seen as a natural experiment in inter-coder reliability in identifying state secrets. If so, this would be the government's first such experiment in 75 years of creating official secrets, or at least the first in which it has published the results.
But this experiment is flawed and incomplete, especially if it is expected to answer the question of how careless Clinton and her aides were in handling email that were deemed to be classified.
The historical data also show with new precision the extent to which officials have preserved, or failed to preserve, the official record. Whereas press coverage of the Clinton controversy has focused on very recent practices of using private e-mail, the historical record shows that diplomats have for centuries used both official and unofficial channels to control access to their communications. What has changed in recent years is that, with the use of electronic records, we can now use computational methods to analyze different kinds of communications. When records are "lost" it becomes more difficult, but these acts can also leave statistically conspicuous gaps in the public record that merit further scrutiny. Without this kind of analysis, a democracy loses the capacity to hold individuals and public institutions to account, a great asset against less democratic types of regimes that offsets their ability to act with even greater "secrecy and despatch." When instead officials do not preserve the public record a democracy is doubly disadvantaged, losing both the capacity to identify and correct errors and incompetence, while still remaining relatively open and prone to leaking compared to autocratic states (Colaresi 2012).
Based on what we have learned from machine classification, it should be feasible to develop systems in which classified and unclassified communication streams would continually generate data that would be predictive of the appropriate classification level for new communications.
Such a system would automatically default to the predicted classification, requiring manual override if the sender wished to classify at a higher or lower level (such as communications about a new classified program, or a subject that is no longer sensitive).
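One minimal illustration of such a default-with-override workflow is sketched below. It is purely a sketch of the idea in the text, with a generic scikit-learn-style model and a hypothetical audit log; it is not a proposed operational design.

```python
# Default each outgoing message to the predicted classification level and
# record any manual override for later review.
def route_message(message_features, model, sender_choice=None, audit_log=None):
    predicted = model.predict([message_features])[0]
    final = sender_choice if sender_choice is not None else predicted
    if audit_log is not None and final != predicted:
        audit_log.append({"predicted": predicted, "chosen": final})
    return final
```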
The same type of system could be used for accelerating the release of electronic records, harnessing data from previous declassification decisions to prioritize records for close scrutiny that are most likely to have sensitive national security or personal information. These systems could therefore "nudge" officials to classify or declassify communications appropriately, and also reveal which officials tend to diverge from the norm when dealing with similar types of subjects, language, etc. Such a system could therefore be self-correcting, and become increasingly accurate as sources of error were continually identified and corrected for.
What is most needed now is government support for applied research on technology to improve management of sensitive information (Public Interest Declassification Board 2012). This should include rigorous, double-blind testing of how consistent humans are in both classification and declassification. Without determining the expected error rate among humans, we will never know the baseline against which to test computer-assisted approaches --or evaluate claims that this or that official was negligent in failing to recognize and protect state secrets.
References
Fig. 1: Proportion of Cables Classified as Secret, 1974-1978
The total includes all cables, including those that are still withheld, dating from the first year in which we have relatively complete data.

Fig. 2: Number of Missing Cables, 1973-1978
The number of cables by day with error messages in the place of message text.

Fig. 4: Performance of Classifiers Using Different Features
To determine which features are most useful, we tested the performance of classifiers for each one individually.

Supporting Information
Data acquisition and preparation
The Central Foreign Policy Files available from the US. National Archives include cables as well as so-called p-reel records, i.e. communications sent physically via diplomatic pouch. Both are either available in full or have been "withdrawn" because the record either has sensitive national security information or personal information. Both p-reel records and withdrawn cables have limited metadata, but neither has message text. We limited our analysis to cables that have been fully released.
The metadata includes the original classification ( origclass ), i.e. Secret, Confidential, Limited
Official Use and Unclassified. Cables with null, degenerated or misspelled names of classes in origclass were left out of the analysis. We first analyzed the cables to decide which fields could be useful as input features. The feature engineering involved analysis of the textual quality; discovering most common values for each class; and defining the subset valid for the tests. The fields that were added a posteriori to the creation of the cable were not used as they could convey information related to the classification. The main feature of interest is the body -the full text of the cable -although we have also dealt with other fields (Table S2). Some of them present errors in the digitization process and have, as their bodies, just small error messages, as illustrated in Table S3.
The tests aimed at predicting the level of sensitivity in a binary fashion; therefore, we have tested binary aggregations (called here scenarios) of the four classes, grouping them according to broad or narrow definitions of secrecy: (S)ecret, (C)onfidential, (L)imited Official Use and (U)nclassified. Additional tests using four classes as targets were also performed, but the nature of the cables' subjects and classification led to low recall and precision measurements.
Once the features were selected, the database was queried, and the retrieved data were joined in a Python Pandas (McKinney 2010) Dataframe structure. We processed the raw textual data by eliminating hyphenation, tokenizing (separating words and discarding punctuation), and removing common, non-informative words such as "and", "the", etc. Tests were made using stemmed forms of words, but stemming did not enhance performance and was discarded. The field date was transformed into a boolean field, weekday, indicating whether the date fell on a weekend or not, and another field, year+month, used to test hypotheses about the temporal nature of classification (i.e. a larger proportion might be classified during weekends, or periods of crisis). This did not prove useful for the whole span, although it could be promising for analysis of shorter periods of time.
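A sketch of these derived fields using pandas is given below; only the "date" column name is taken from the field list in Table S2, the rest is illustrative.

```python
# Derive the weekday and year+month fields from the cable date.
import pandas as pd

def add_date_features(frame: pd.DataFrame) -> pd.DataFrame:
    dates = pd.to_datetime(frame["date"], errors="coerce")
    frame = frame.copy()
    frame["weekday"] = dates.dt.dayofweek < 5           # True if the cable was sent on a weekday
    frame["year_month"] = dates.dt.strftime("%Y-%m")    # e.g. "1975-12"
    return frame
```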
Feature vectors
For each one of the features, and some feature combinations, we have applied a few forms of vectorization and tested the prediction power of the resulting vectors. When plain term counts are replaced with a TfIdf weighting (Salton 1988), in our case it is $ \mathrm{tfidf}(t,d,D) = \mathrm{tf}(t,d) + \mathrm{tf}(t,d) \times \mathrm{idf}(t, D) $, where t is the term, d is the document, and D is the corpus of documents. In our particular implementation, we have discarded terms that appeared in fewer than 2 documents, eliminating some of the misspellings and hapaxes.
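The weighting can be transcribed directly into code, as in the sketch below applied to a toy term-count matrix. Note that the idf definition used here is a common variant and an assumption, since the specific idf formula is not spelled out.

```python
# tfidf(t, d, D) = tf(t, d) + tf(t, d) * idf(t, D), applied to a counts matrix.
import numpy as np

def tfidf_weight(counts: np.ndarray) -> np.ndarray:
    """counts: documents x terms matrix of raw term frequencies."""
    n_docs = counts.shape[0]
    doc_freq = (counts > 0).sum(axis=0)              # df(t): documents containing term t
    idf = np.log(n_docs / np.maximum(doc_freq, 1))   # assumed idf variant
    return counts + counts * idf

toy_counts = np.array([[2, 0, 1],
                       [0, 1, 1],
                       [1, 1, 0]])
print(tfidf_weight(toy_counts))
```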
We undertook extensive tests and hyperparameter optimization ( Bergstra 2012 ) to determine the best parameters, and summarize the results in Table S.4.
Classifiers and Ensembles
Table S6 presents the performance of the classifier for the three combinations of classes using each feature as input, as well as two feature combinations. The fields body, subject, concepts, tags, embassy and office were added together in a new field/feature, all_text, which was tested as an alternative to combining all the other features by concatenating those vectors. All vectors were produced using plain count BoW. All measures are calculated as a mean of the three folds using stratified k-fold.
Evaluation of Results
Upon determining that using all features as independent vectors produced the best results, we applied this method to a corpus in which all the cables are clearly identified as classified or unclassified, i.e. with no Limited Official Use (L) cables in either the training or test set. The ROC Score of ~0.93 --the best result of all the tasks --indicates how these ambiguous cables limit the performance of classifiers for identifying state secrets. The misclassified cables --both false positives and false negatives --were also presented to two domain experts with the classification removed. They were more likely to agree with the algorithm in judging cables to have information that would have been sensitive in the era they were created. Using all features yielded the best results in all scenarios, closely followed by the summation of textual fields, then body and subject. The other features did not show good performance taken alone.
Discussion
Results over 80% in classification tasks are generally regarded as good (e.g. the Netflix challenge, in which the best result was ~85%) (Netflix 2009). But whether such results are adequate depends on the task at hand. If our priority is to protect sensitive information, and we wish to minimize the number of false negatives, i.e., cables that are predicted to be unclassified but were originally classified, then recall should be maximized even at the cost of lower precision.
Fig. 3: Percentage of Cables by Country Related to Human Rights and Percent Classified as Secret
Countries with a severe human rights situation according to Freedom House are color-coded orange or red. The size of the bubble is proportionate to the number of cables with different country TAGS. The Y axis is a log scale showing which countries have a higher proportion of cables related to human rights. The X axis shows which countries have a higher proportion of cables classified as secret.

Figure S1: AUC scores for all features

Figure S2: Precision vs. recall for each feature and the combination of features
A key challenge is therefore in determining what kinds of information really do require safekeeping, especially since the evidence from the past is largely anecdotal. Developing a richer, more robust theory of official secrecy that can both account for variation in classification practices and inform more effective regulation requires empirical research. As Vincent acknowledges, while qualitative research in archives can reveal how officials keep secrets, it is less suited to revealing the larger patterns in what kinds of information they typically conceal. Computational analysis of declassified records can identify both patterns and anomalies, and show whether practices are consistent enough as to become predictable. This is both a good test of theory and the essential precondition for any technology that would assist in the management of sensitive information. When it comes to the Clinton e-mail, such methods can show what is normal and what might be considered negligent in how officials manage large numbers of potentially sensitive communications.
TAGS" the original State Department drafters had assigned to each record since the system was created in 1973. This metadata field, which also included TAGS for each country, was intended to facilitate "Traffic Analysis by Geography and Subject." ArchivistsThe National Archives appraisal of the electronic records of the CFPF shows how archivists and
records managers endeavored to preserve all historically significant records among the 27
million that had accumulated by 2006. This corpus was both quantitatively and qualitatively
different from the paper records archivists were accustomed to dealing with. They analyzed it by
utilizing the 188 subject "used it to identify and preserve all records from the 93 subject TAGS belonging to the broad
categories of Political Affairs, Military Affairs, Social Affairs, Economic Affairs, Technology and
Science. For the 95 other subject TAGS on things like Operations, Administration, and Consular
units --like months --are of variable length. But it's also because there was no clear trend or pattern in the proportion of classified diplomatic communications. The overall level of secrecy was little changed during the first two years of the Carter administration as compared to the time that Henry Kissinger was Secretary of State. (See Fig. 1) This is surprising, considering how the context of diplomacy in the period 1973-78 was becoming increasingly hostile to official secrecy, after the revelations of the secret bombing of Cambodia, Watergate, and the Church Committee hearings on covert CIA operations. Shortly after assuming office in 1977, Jimmy Carter famously rejected Kissinger's penchant for secrecy, promising " a foreign policy that the American people both support and, for a change, know about and understand ." EchoingWoodrow Wilson, he insisted that " Our policy must shape an international system that will last longer than secret deals. We cannot make this kind of policy by manipulation. Our policy must be open; it must be candid…"(Carter 1977) In fact, over the period 1973-78 communications regarding the same kinds of subjects continued to be classified: military matters and international negotiations. The posts that transmitted the highest percentage of secret cables include officials negotiating the Strategic Arms Limitation Treaties and the Mutual and Balanced Force Reduction agreements , those
Automated Data System. According to the National Archives, many of the cables where we have complete metadata but no message text are available on microfilm. Moreover, we do not know when the data were migrated, and the electronic versions of messages were lost. But it's notable that most of these cables do not date to when the State Department first set up the system, when one might expect it would have been troubleshooting ways to reliably transfer data between different hardware and software platforms. Instead, most date to 1975-76, and coincide with some of Henry Kissinger's most controversial actions at a time in which he was coming under increasing criticism for his conduct as Secretary of State.
Kissinger advised Suharto to construe it as self-defense and delay the operation until Ford had returned home. But, he said, it was particularly important that "whatever you do succeeds quickly." (Jakarta toState 1975) ). As for the CAT-C cables, we only have the electronic message
text for 38%. For the rest, there is only an error message, e.g.:
MRN: 1975JAKART014946 SEGMENT NUMBER: 000001 EXPAND ERROR ENCOUNTERED;
TELEGRAM TEXT FOR THIS SEGMENT IS UNAVAILABLE
This particular record, an account of a meeting in Jakarta between Gerald Ford, Henry
Kissinger, and Indonesian president Suharto, survived and was printed out in hard copy. But
there are almost no State Department cables in the database from December 1-15, 1975, the
first of several conspicuous gaps. When the cable was finally declassified in full in 2001, after
the fall of the Suharto regime, it showed that Ford gave a green light to Suharto's stated plan to
conquer East Timor using U.S. arms. Realizing this would be in violation of American law,
This second cable is one of the 119 cables sent from Jakarta that same month which are now missing from the database. many different subjects during these same periods. Even if it can be established that someone deliberately deleted the messages from the database, the data does not permit us to impute specific motives. But it is notable that the gaps end with the end of Kissinger's term as secretary empirical support for these beliefs. For instance, the first two years of the Carter administration witnessed rapid growth in the number of cables with the SHUM TAGS, designating those related to Human Rights. This was consistent with the Carter administration's stated policy of promoting democratic values. There was also tremendous growth in the number of human rights The index is calculated by grading the country across 10 political rights indicators, such as the electoral process. There are also 15 civil liberties indicators, such as freedom of expression, that are broadly derived from the Universal Declaration of Human Rights.with the human rights situation in a country had little to do with the human rights situation per se, at least as it was assessed by Freedom House. In fact, U.S. diplomats largely ignored the human rights situation in many allied countries with repressive governments. This is contrary to what outside critics like Jean Kirkpatrick argued at the time. They claimed that Carter had a blanket policy of criticizing human rights violations, which made little difference with Cold War adversaries and only served to delegitimate allied governmentsOther notable gaps in the electronic record include March 18-31 1976, when Kissinger
supported the military coup in Argentina; May 25-31 1976, when he favored the Syrian invasion
of Lebanon; and June 1976, when he met with the Prime Minister of South Africa in the midst of
the Soweto Uprising against Apartheid. Of course, Kissinger and his staff were dealing with
of State (See Fig. 2).
Since Bentham and John Stuart Mill, philosophers have long postulated that secrecy serves to
cover up the abuse of power, and that liberty requires transparency. As Louis Brandeis
famously observed, "sunlight is...the best of disinfectants." (Brandeis 1914) In analyzing the
relationship between these different features, we find intriguing patterns that provide some
organizations. Some, like Freedom House, also reported on the situation in each country, giving
them a 'Freedom in the World Score' from 1 (best, e.g. Norway) to worst (7, e.g. North Korea).
When we compare the percentage of cables diplomats wrote about human rights when
reporting about different countries in 1977-78 (i.e. by counting the cables in which country
TAGS co-reference SHUM TAGS, and comparing it to all cables with these country TAGS) we
find only a very weak relationship with the score these countries received from Freedom House
(an r of 0.2108108261.) In other words, whether U.S. diplomats were more or less concerned
S. arms in this period. Middle Eastern and Latin American countries both tended to have repressive governments, with an average Freedom House score of 4.7 and 4.5 respectively. But there was far more reporting about the human rights situation in Latin America compared to the Middle East. Of cables concerning Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador, Paraguay, These findings are consistent with earlier research by Qian and Yanagizaw showing a divergence between the annual State Department human rights reports and the assessments of Amnesty International. They found that the State Department showed favoritism towards itsCold War allies, which they identified according to how consistently other countries voted with the U.S. at the U.N. General Assembly. But they also control for regional diversity in their assessment(Qian and Yanagizaw 2009). We find that, in fact, the specific situation of allied countries in the Cold War is what is most strongly associated with whether the State Department showed any interest in human rights, and not just whether it criticized human rights violations in Draft only --Not for circulation without authors' permission published reports. When countries were well within the U.S. sphere of influence, and there was little risk of Soviet intervention, such as Latin America in 1973-78, foreign service officers frequently reported on detention, torture, and civil liberties. But they showed little interest in such issues in Middle Eastern countries, where the U.S. and the USSR were engaged in intense competition for influence and the risk of international conflict was high.Peru, Uruguay, and Venezuela, 5.96% reference Human Rights. For Egypt, Iran, Israel, Jordan,
Kuwait, North Yemen, Saudi Arabia, Turkey, and the UAE, it's <1% (.85%.) (See Fig. 3)
Not coincidentally, countries of the Middle East were also far more likely to have a high
percentage of secret cables. Almost fifteen percent (14.82%) of all cables with Middle Eastern
country TAGS were classified as secret in 1973-78. For Latin American Countries, it was two
percent (2.26%)
of human error. For unclassified cables which the algorithm identified as having the highest probability of being classified, this included hundreds of examples where the cables were miscategorized as unclassified in the metadata, such as a report on Japanese government sensitivity about U.S. inspection of its nuclear facilities. The message text itself clearly showed it was originally confidential (Amembassy to Secstate 1977). Other errors included cables that Draft only --Not for circulation without authors' permission were originally secret when received at the State Department but were resent to another post as unclassified, such as a report on what Lebanese Christian leaders said about ceasefire negotiations with the PLO (Secstate to Cinceur 1977). And there were many examples ofunclassified cables that, according to experts with security clearances we consulted, would have
been highly sensitive at the time. This included, for instance, what a confidential informant told
U.S. diplomats in Cyprus about the kidnapping of the President's son (Fig. 2) (Secstate to
Usdel 1977).
Fig. 5: How We Measure Performance in Identifying State Secrets
Classifiers can be optimized for either recall or precision. When the end-user has low tolerance for risk, they will seek to maximize recall, i.e. the percentage of all cables with sensitive information they identify as classified, even if the result is lower precision, i.e. the percentage of all cables identified as classified that actually have sensitive information. But the false positives and false negatives include examples of human error in classification.
Table 1 :
1Origin and Destination of Cables Most Likely To be Classified as SecretAfter filtering out the from/to pairs that did not send at least one hundred secret cables from
1973-78, these are the ones that had the highest percentage of secret cables overall.
From
To
Secret
Cables
Total
Cables
Percent
Secret
NATO
STATE, SECDEF, INFO:
MBFR VIENNA, BONN,
LONDON, USNMR SHAPE,
USCINCEUR
426
436 97.71%
CAIRO
STATE SS MULTIPLE
104
109 95.41%
SALT TALKS
STATE
671
723 92.81%
GENEVA
USSALTTWO
STATE
434
486
89.30%
TEL AVIV
STATE SS MULTIPLE
145
163 88.96%
STATE
SECRETARY WHITE
HOUSE
121
146 82.88%
MBFR VIENNA
STATE DOD
775
1075 72.09%
STATE
WHITE HOUSE
1464
2035 71.94%
STATE
JCS MULTIPLE
179
250 71.60%
STATE
USCINCEUR
131
258 50.78%
Table 2: Concepts (or Keywords) With the Highest Proportion of Secret Cables
After filtering out the concepts that did not appear in at least 1,000 cables from 1973-78, these are the ones that had the highest percentage of secret cables overall.
Table 3: Concepts With the Lowest Proportion of Secret Cables
These are the more common "concepts" (>1,000 total cables) assigned to cables in which officials deemed <1% merited classification as "secret."

Concept | Secret Cables | Total Cables
CIVIL AVIATION | 75 | 16,715
SCIENTIFIC VISITS | 41 | 11,758
TEXTILES | 7 | 7,157
MUSIC | 26 | 6,439
SEMINARS | 25 | 5,756
EXCEPTIONS LIST | 13 | 5,312
SCIENTIFIC MEETINGS | 11 | 5,081
LABOR UNIONS | 24 | 5,009
ENVIRONMENT | 11 | 4,763
MEATS | 5 | 4,213
Table 4: Performance of Classifiers Using All Six Features for Different Tasks

Classification Task | ROC/AUC Score | Accuracy Score | Precision (Unclassified) | Precision (Classified) | Recall (Unclassified) | Recall (Classified) | Average f1-score
U vs L,C,S | 0.859 | 0.87 | 0.834 | 0.891 | 0.81 | 0.9 | 0.896
U,L vs C,S | 0.85 | 0.87 | 0.884 | 0.843 | 0.92 | 0.78 | 0.809
U,L,C vs S | 0.806 | 0.966 | 0.974 | 0.802 | 0.99 | 0.62 | 0.697
U vs C,S | 0.928 | 0.929 | 0.926 | 0.931 | 0.94 | 0.92 | 0.926
Table 5: Counts and Classification Types of Cables with Information on Cables Considered Blank

Situation | Total in Database | Unclassified | Limited Official Use | Confidential | Secret
Declassified cables | 1,758,279 | 876,797 | 411,973 | 375,690 | 93,635
Error messages for body | 119,744 | 53,935 | 21,744 | 25,233 | 18,832
Blank body | 8,282 | 2,726 | 1,645 | 1,924 | 1,987
Blank or n/a concepts | 634,967 | 445,300 | 114,507 | 65,502 | 9,658
Blank or n/a subject | 26,109 | 16,490 | 5,820 | 2,914 | 885
Blank or n/a from | 17 | 7 | 6 | 3 | 1
Blank or n/a to | 9,740 | 6,027 | 1,572 | 1,698 | 443
Used for classifier | 981,083 | 368,043 | 280,251 | 270,477 | 62,312
Table S1: Cable Counts and Classification According to Cable Type

Cable_type field | Total in Database | Unclassified | Limited Official Use | Confidential | Secret | Null or degenerated
Full cables | 1,758,279 | 876,797 | 411,973 | 375,690 | 93,635 | 184
P-reel | 505,030 | 419,052 | 33,887 | 28,695 | 14,716 | 0
Withdrawn cables | 410,539 | 298,932 | 25,988 | 25,816 | 21,474 | 38,329
P-reel withdrawn | 8,920 | 3,538 | 570 | 2,277 | 2,304 | 231
Table S2: Explanation of Fields Used in Classifier

Used Field | Description
origclass | The original classification level of the cable
body | Full text of the cable
subject | Topic dealt with in the document
TAGS | Traffic Analysis by Geography and Subject
from | Which person/what office sent the document
to | Which person/what office received the document
office | Which State Department office or bureau was responsible for the document
concepts | Keywords attributed to the document
date | Document creation date

Not all the cables that present a cable_type field equal to "full" have useful content in the body.
S3Overview of Error Types in Full Cable Dataset and Count of ClassificationType of errors in
digitization
Total in
Database
Unclassified Limited
Official
Use
Confidential Secret
"Error Reading Text
Index"
46,876
13,931
6,241
11,592
15,112
"Expand Error
Encountered"
72,850
39,988
15,499
13,640
3,723
"Encryption Error"
42
22
10
10
0
Total (errors are not
exclusive)
119,744
53,935
21,744
25,233
18,832
vectorization and tested the prediction power of the resulting vectors. Like Blei et al. (Blei 2003), we define the following terms: A word or token is the basic unit of discrete data, defined to be an item from a vocabulary indexed by {1,...,N}. Words are represented using vectors that have a single component equal to one and all other components equal to zero. A document vector is a sequence of N word counts denoted by w = (w1,w2,...,wN), where wn is the nth word in the sequence and N is the size of the vocabulary. A corpus is a collection of M documents denoted by D = {[w11, w21, ..., wN1],[w12, w22, ..., wN2], ..., [w1M, w2M, ..., wNM]}. With Bag of Words (BoW), documents are represented by vectors of dimension N, where N is the size of the vocabulary or the subset N of the most frequent words of the vocabulary; each column represents a word and the value is the frequency of that word in the document. TfIdf (Term Frequency -Inverse Document Frequency) also represents documents as vectors of dimension N, where N is the vocabulary size or the subset N of the most high-valued words in the vocabulary, based on the TfIdf metric: instead of word counts in the columns, it utilizes a method for emphasizing words that occur frequently in a given document (Tf), whilst de-emphasising words that occur frequently in many documents or in the whole corpus of documents (Idf). The count of the words in the vector is substituted by some weighting factor Draft only --Not for circulation without authors' permission
Table S4: Overview of Parameters of Analysis against Feature Type

Feature | Number of Tokens | Vocabulary size (N) | Max vector size (N') | Best n-gram range | Best vectorization
subject | 6,894,992 | 180,480 | 8,000 | (1,1) | Term frequencies
concepts | 4,929,265 | 13,192 | 650 | (1,2) | Term frequencies
body | 259,276,062 | 1,929,902 | 15,000 | (1,1) | Term frequencies
TAGS | 3,272,125 | 939 | 844 | (1,1) | Term frequencies
Embassy (from/to) | 2,234,457 | 4,874 | 1,036 | (1,1) | Term frequencies
office | 1,937,707 | 261 | 170 | (1,1) | Term frequencies
All Text | 278,544,608 | 1,968,680 | 15,000 | (1,1) | Term frequencies
Table S6 :
S6Performance of Classifier in Different Capacities by Feature TypeFeature
Class
Combination
ROC/AU
C Score
Accuracy
Score
Precision
(class 0/1)
Recall
(class 0/1)
Average
f1-score
Subject
(U vs L,C,S)
0.79
0.82
0.81/0.82
0.68/0.91
0.74/0.86
(U,L vs C,S)
0.80
0.83
0.85/0.77
0.89/0.72
0.87/0.74
(U,L,C vs S)
0.70
0.96
0.99/0.80
0.99/0.40
0.98/0.53
Concepts
(U vs L,C,S)
0.72
0.75
0.69/0.77
0.59/0.84
0.63/0.81
(U,L vs C,S)
0.74
0.78
0.80/0.74
0.89/0.58
0.84/0.65
(U,L,C vs S)
0.68
0.91
0.96/0.75
0.99/0.36
0.97/0.48
Body
(U vs L,C,S)
0.83
0.84
0.79/0.87
0.78/0.88
0.79/0.87
(U,L vs C,S)
0.81
0.84
0.85/0.82
0.92/0.70
0.88/0.75
(U,L,C vs S)
0.68
0.95
0.96/0.76
0.99/0.36
0.98/0.49
TAGS
(U vs L,C,S)
0.74
0.78
0.75/0.79
0.61/0.88
0.67/0.83
(U,L vs C,S)
0.75
0.79
0.82/0.72
0.87/0.63
0.84/0.67
(U,L,C vs S)
0.62
0.95
0.95/0.73
0.99/0.25
0.97/0.38
Embassies
(From/To)
(U vs L,C,S)
0.57
0.67
0.71/0.66
0.19/0.95
0.30/0.78
(U,L vs C,S)
0.59
0.69
0.70/0.65
0.93/0.24
0.80/0.35
(U,L,C vs S)
0.57
0.94
0.94/0.72
1.00/0.14
0.97/0.24
Office
(U vs L,C,S)
0.67
0.73
0.76/0.72
0.42/0.92
0.54/0.81
(U,L vs C,S)
0.62
0.73
0.71/0.83
0.97/0.27
0.82/0.41
(U,L,C vs S)
0.62
0.95
0.95/0.79
1.00/0.25
0.97/0.38
All_Text
(U vs L,C,S)
0.85
0.86
0.82/0.88
0.81/0.89
0.81/0.89
(U,L vs C,S)
0.84
0.87
0.88/0.84
0.92/0.76
0.90/0.80
(U,L,C vs S)
0.78
0.92
0.97/0.78
0.99/0.57
0.98/0.66
All
Features
(U vs L,C,S)
0.86
0.87
0.83/0.89
0.81/0.90
0.82/0.90
(independe
nt vectors)
(U,L vs C,S)
0.85
0.87
0.88/0.84
0.92/0.78
0.90/0.81
(U,L,C vs S)
0.81
0.97
0.97/0.80
0.99/0.61
0.98/0.69
(U vs C, S)
0.93
0.93
0.93/0.93
0.94/0.92
0.93/0.93
This part of the analysis is limited to cables grouped by origin/destination, TAGS, and concepts, since other useful fields --like Office, and the words in the message --were not available for records that were withdrawn or where the message text is missing.
AcknowledgementsThis research was made possible through grants from the John D. and Catherine T. MacArthur Foundation, the Columbia Global Policy Initiative, and the Fundação Getulio Vargas. The authors also thank Eric Gade and Thomas Nyberg for expert assistance, and David Blei, Michael Gill, Richard Immerman, Robert Jervis, Daniel Krasner, Aaron Plasek, Owen Rambow, and Arthur Spirling for helpful discussions.
Allen, David, and Matthew Connelly. "Diplomatic History After the Big Bang: Using ..."
De Callières, François. The Art of Diplomacy, ed. H.M.A. Keens-Soper and K.W. Schweizer. 1983.
Department of Justice. "Summary of Annual FOIA Reports for Fiscal Year 2015." Accessed October 20, 2016. https://www.justice.gov/oip/reports/fy_2015_annual_foia_report_summary/download
Federation of American Scientists. "Executive Order 11652." Accessed October 4, 2016. https://perma.cc/YGH7-4H43
Gill, Michael, and Arthur Spirling. "Estimating the Severity of the WikiLeaks US Diplomatic Cables Disclosure." Political Analysis 23, no. 2 (2015): 299-305.
Hanley, James A., and Barbara J. McNeil. "The meaning and use of the area under a receiver operating characteristic (ROC) curve." Radiology 143, no. 1 (1982): 29-36.
Hanley, James A., and Barbara J. McNeil. "A method of comparing the areas under receiver operating characteristic curves derived from the same cases." Radiology 148, no. 3 (1983): 839-843.
Harris, Zellig Sabbettai. "Distributional Structure." Word 10, no. 2-3 (1954): 146-162.
Hoffman, Daniel. Governmental Secrecy and the Founding Fathers. Westport: Praeger, 1981.
Information Security Oversight Office. "Information Security Oversight Office Report to the President 2015." Accessed October 5, 2016. https://www.archives.gov/isoo/reports/2015-annual-report.pdf
Jakarta to State. 1975. "East Timor Revisited." National Security Archive. http://nsarchive.gwu.edu/NSAEBB/NSAEBB62/#doc4 Retrieved October 5, 2016.
Kirkpatrick, Jeane. "Dictatorships and double standards." Commentary 68, no. 5 (1979): 34.
Langbart, David, William Fischer, and Lisa Roberson. Appraisal of records covered by N1-59-07-3-P. Department of State, Bureau of Administration, 2007.
LoBianco, Tom, and Laura Koran. "State Dept.: 75-year wait for Clinton aide emails." CNN Politics, June 6, 2016. Accessed October 20, 2016. http://www.cnn.com/2016/06/06/politics/clinton-emails-75-years/index.html?sr=twCNN060616clinton-emails-75-years1136PMVODtopPhoto&linkId=25277
Lowell, A.D. "The Broken System of Classifying Government Documents." NY Times, Section A, p. 25, February 29, 2016.
McAllister, William. "The Documentary Big Bang, the Digital Records Revolution, and the Future of the Historical Profession." Passport 41 (2010): 12-19.
McKinney, Wes. "Data structures for statistical computing in python." In Proceedings of the 9th Python in Science Conference, vol. 445, pp. 51-56. 2010.
Mazzini, Giuseppe. "Mazzini and the Ethics of Politicians." Westminster Review LXXXII (Sept.-Dec. 1844): 225-251.
Memorandum of Conversation. 1975. "The Secret Life of Henry Kissinger: Minutes of a 1975 Meeting with Lawrence Eagleburger." The Nation (October).
Metcalfe, Daniel J. "The nature of government secrecy." Government Information Quarterly 26, no. 2 (2009): 305-310.
Monroe, Burt L., Jennifer Pan, Margaret E. Roberts, Maya Sen, and Betsy Sinclair. "No! Formal theory, causal inference, and big data are not contradictory trends in political science." PS: Political Science & Politics 48, no. 1 (2015): 71-74.
Moynihan, Daniel Patrick. "Report of the Commission on Protecting and Reducing Government Secrecy." Accessed October 20, 2016.
Moss, Michael. "Where have all the files gone? Lost in action points every one?" Journal of Contemporary History 47, no. 4 (2012): 860-875.
National Archives and Records Administration. Frequently Asked Questions, Record Group 59, General Records of the Department of State Central Foreign Policy File, 1973-1979. 2016. Available at https://perma.cc/48MU-34DK. Accessed October 4, 2016.
Nicholson, Harold. Diplomacy. New York: Oxford University Press, 1964.
Netflix. Netflix Prize. 2009. Available at https://perma.cc/U9XV-Z2SV. Accessed October 4, 2016.
Pozen, David. "Deep Secrecy." Stanford Law Review 62, no. 2 (March 2010): 257. Accessed October 20, 2016. https://www.stanfordlawreview.org/print/article/deep-secrecy/
Public Interest Declassification Board. Transforming the Security Classification System. 2012. Available at https://perma.cc/U3G9-KBGU. Accessed October 4, 2016.
Qian, Nancy, and David Yanagizawa. "The Strategic Determinants of U.S. Human Rights Reporting: Evidence from the Cold War." Journal of the European Economic Association 7.
Sagar, Rahul. Secrets and Leaks: The Dilemma of State Secrecy. Princeton University Press, 2013.
Salton, Gerard, and Christopher Buckley. "Term-weighting approaches in automatic text retrieval." Information Processing & Management 24, no. 5 (1988): 513-523.

We performed extensive tests with different classifiers: Linear Models (Logistic Regression, Passive Aggressive, Stochastic Gradient Descent, Ridge Regression, Perceptron); K-Nearest Neighbors; Bayesian approaches (Bernoulli Naive Bayes, Multinomial Naive Bayes); ensembles of classifiers (Random Forests, Extremely Randomized Trees); and combined techniques such as bagging and boosting (Bagging Classifier, Gradient Boosting, AdaBoost and weighted voting approaches), combining the best estimators (Stochastic Gradient Descent, Logistic Regression, Ridge, Bagging, Extremely Randomized Trees, AdaBoost and Multinomial Naive Bayes) with weights 2,2,1,1,1,1,1 for our main experiment.
Breiman, Leo. "Bagging predictors." Machine Learning 24, no. 2 (1996): 123-140.
Breiman, Leo. "Random forests." Machine Learning 45, no. 1 (2001): 5-32.
Crammer, Koby, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. "Online passive-aggressive algorithms." Journal of Machine Learning Research 7 (2006).
Fan, Rong-En, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. "LIBLINEAR: A library for large linear classification." Journal of Machine Learning Research 9 (2008): 1871-1874.
Fix, Evelyn, and Joseph L. Hodges Jr. Discriminatory analysis-nonparametric discrimination: consistency properties. California Univ Berkeley, 1951.
Freund, Yoav, and Robert E. Schapire. "Large margin classification using the perceptron algorithm." Machine Learning 37, no. 3 (1999): 277-296.
Friedman, Jerome H. "Greedy function approximation: a gradient boosting machine." Annals of Statistics (2001): 1189-1232.
Geurts, Pierre, Damien Ernst, and Louis Wehenkel. "Extremely randomized trees." Machine Learning 63, no. 1 (2006): 3-42.
Loh, Wei-Liem. "On linear discriminant analysis with adaptive ridge classification rules." Journal of Multivariate Analysis 53, no. 2 (1995): 264-278.
McCallum, Andrew, and Kamal Nigam. "A comparison of event models for naive bayes text classification." In AAAI-98 Workshop on Learning for Text Categorization, vol. 752, pp. 41-48.
Schapire, Robert E. "The boosting approach to machine learning: An overview." In Nonlinear Estimation and Classification, pp. 149-171. Springer New York, 2003.
Yu, Hsiang-Fu, Fang-Lan Huang, and Chih-Jen Lin. "Dual coordinate descent methods for logistic regression and maximum entropy models." Machine Learning 85, no. 1-2 (2011): 41-75.
Zadrozny, Bianca, and Charles Elkan. "Transforming classifier scores into accurate multiclass probability estimates." In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 694-699. ACM, 2002.
Zhu, Ji, Hui Zou, Saharon Rosset, and Trevor Hastie. "Multi-class adaboost." Statistics and its Interface 2, no. 3 (2009): 349-360.
| [] |
[
"LEARNING PYTHON CODE SUGGESTION WITH A SPARSE POINTER NETWORK",
"LEARNING PYTHON CODE SUGGESTION WITH A SPARSE POINTER NETWORK"
] | [
"Avishkar Bhoopchand avishkar.bhoopchand.15@ucl.ac.uk \nDepartment of Computer Science\nUniversity College London\n\n",
"EarlTim Rocktäschel t.rocktaschel@cs.ucl.ac.uk \nDepartment of Computer Science\nUniversity College London\n\n",
"Barr e.barr@cs.ucl.ac.uk \nDepartment of Computer Science\nUniversity College London\n\n",
"Sebastian Riedel s.riedel@cs.ucl.ac.uk \nDepartment of Computer Science\nUniversity College London\n\n"
] | [
"Department of Computer Science\nUniversity College London\n",
"Department of Computer Science\nUniversity College London\n",
"Department of Computer Science\nUniversity College London\n",
"Department of Computer Science\nUniversity College London\n"
] | [] | To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long-range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage points increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past. | null | [
"https://arxiv.org/pdf/1611.08307v1.pdf"
] | 16,045,259 | 1611.08307 | 6cb40055bd871ee2178cea3d535c5c52d63ac3af |
LEARNING PYTHON CODE SUGGESTION WITH A SPARSE POINTER NETWORK
Avishkar Bhoopchand avishkar.bhoopchand.15@ucl.ac.uk
Department of Computer Science
University College London
Tim Rocktäschel t.rocktaschel@cs.ucl.ac.uk
Department of Computer Science
University College London
Earl Barr e.barr@cs.ucl.ac.uk
Department of Computer Science
University College London
Sebastian Riedel s.riedel@cs.ucl.ac.uk
Department of Computer Science
University College London
LEARNING PYTHON CODE SUGGESTION WITH A SPARSE POINTER NETWORK
Under review as a conference paper at ICLR 2017
To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very long-range dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage points increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past.
INTRODUCTION
Integrated development environments (IDEs) are essential tools for programmers. Especially when a developer is new to a codebase, one of their most useful features is code suggestion: given a piece of code as context, suggest a likely sequence of next tokens. Typically, the IDE suggests an identifier or a function call, including API calls. While extensive support exists for statically-typed languages such as Java, code suggestion for dynamic languages like Python is harder and less well supported because of the lack of type annotations. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code.
Recently, methods from statistical natural language processing (NLP) have been used to train code suggestion systems from code usage in large code repositories (Hindle et al., 2012;Allamanis & Sutton, 2013;Tu et al., 2014). To this end, usually an n-gram language model is trained to score possible completions. Neural language models for code suggestion (White et al., 2015;Das & Shah, 2015) have extended this line of work to capture more long-range dependencies. Yet, these standard neural language models are limited by the so-called hidden state bottleneck, i.e., all context information has to be stored in a fixed-dimensional internal vector representation. This limitation restricts such models to local phenomena and does not capture very long-range semantic relationships like suggesting calling a function that has been defined many tokens before.
To address these issues, we create a large corpus of 41M lines of Python code by using a heuristic for crawling high-quality code repositories from GitHub. We investigate, for the first time, the use of attention (Bahdanau et al., 2014) for code suggestion and find that, despite a substantial improvement in accuracy, it still makes avoidable mistakes. Hence, we introduce a model that leverages long-range Python dependencies by selectively attending over the introduction of identifiers as determined by examining the Abstract Syntax Tree. The model is a form of pointer network (Vinyals et al., 2015a), and learns to dynamically choose between syntax-aware pointing for modeling long-range dependencies and free form generation to deal with local phenomena, based on the current context.
Our contributions are threefold: (i) We release a code suggestion corpus of 41M lines of Python code crawled from GitHub, (ii) We introduce a sparse attention mechanism that captures very long-range dependencies for code suggestion of this dynamic programming language efficiently, and (iii) We provide a qualitative analysis demonstrating that this model is indeed able to learn such long-range dependencies.
METHODS
We first revisit neural language models, before briefly describing how to extend such a language model with an attention mechanism. Then we introduce a sparse attention mechanism for a pointer network that can exploit the Python abstract syntax tree of the current context for code suggestion.
NEURAL LANGUAGE MODEL
Code suggestion can be approached by a language model that measures the probability of observing a sequence of tokens in a Python program. For example, for the sequence $S = a_1, \ldots, a_N$, the joint probability of $S$ factorizes according to

$$P_\theta(S) = P_\theta(a_1) \cdot \prod_{t=2}^{N} P_\theta(a_t \mid a_{t-1}, \ldots, a_1) \qquad (1)$$

where the parameters $\theta$ are estimated from a training corpus. Given a sequence of Python tokens, we seek to predict the next $M$ tokens $a_{t+1}, \ldots, a_{t+M}$ that maximize Equation 1:

$$\arg\max_{a_{t+1}, \ldots, a_{t+M}} P_\theta(a_1, \ldots, a_t, a_{t+1}, \ldots, a_{t+M}). \qquad (2)$$
In this work, we build upon neural language models using Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM, Hochreiter & Schmidhuber, 1997). This neural language model estimates the probabilities in Equation 1 using the output vector of an LSTM at time step $t$ (denoted $h_t$ here) according to

$$P_\theta(a_t = \tau \mid a_{t-1}, \ldots, a_1) = \frac{\exp(v_\tau^T h_t + b_\tau)}{\sum_{\tau'} \exp(v_{\tau'}^T h_t + b_{\tau'})} \qquad (3)$$

where $v_\tau$ is a parameter vector associated with token $\tau$ in the vocabulary.
Neural language models can, in theory, capture long-term dependencies in token sequences through their internal memory. However, as this internal memory has fixed dimension and can be updated at every time step, such models often only capture local phenomena. In contrast, we are interested in very long-range dependencies like referring to a function identifier introduced many tokens in the past. For example, a function identifier may be introduced at the top of a file and only used near the bottom. In the following, we investigate various external memory architectures for neural code suggestion.
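To make Equations 1-3 concrete, the following is a minimal sketch of such an LSTM language model; it is not the authors' implementation (which used TensorFlow), and the vocabulary size and token ids are illustrative.

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """Minimal LSTM language model: P(a_t | a_{t-1}, ..., a_1) as in Eq. 3."""
    def __init__(self, vocab_size, hidden_size=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.proj = nn.Linear(hidden_size, vocab_size)   # plays the role of v_tau, b_tau in Eq. 3

    def forward(self, tokens, state=None):
        h, state = self.lstm(self.embed(tokens), state)   # h_t for every position
        logits = self.proj(h)                             # v_tau^T h_t + b_tau
        return torch.log_softmax(logits, dim=-1), state   # log P(a_t = tau | history)

# Toy usage: score a token sequence under Eq. 1 by summing per-token log-probabilities.
model = LSTMLanguageModel(vocab_size=1000)
tokens = torch.randint(0, 1000, (1, 12))                  # a_1 ... a_N (illustrative ids)
log_probs, _ = model(tokens[:, :-1])
seq_log_prob = log_probs.gather(-1, tokens[:, 1:].unsqueeze(-1)).sum()
```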
ATTENTION
A straightforward approach to capturing long-range dependencies is to use a neural attention mechanism (Bahdanau et al., 2014) on the previous K output vectors of the language model. Attention mechanisms have been successfully applied to sequence-to-sequence tasks such as machine translation (Bahdanau et al., 2014), question-answering (Hermann et al., 2015), syntactic parsing (Vinyals et al., 2015b), as well as dual-sequence modeling like recognizing textual entailment (Rocktäschel et al., 2016). The idea is to overcome the hidden-state bottleneck by allowing referral back to previous output vectors. Recently, these mechanisms were applied to language modelling by Cheng et al. (2016) and Tran et al. (2016).
Formally, an attention mechanism with a fixed memory $M_t \in \mathbb{R}^{k \times K}$ of $K$ vectors $m_i \in \mathbb{R}^k$ for $i \in [1, K]$ produces an attention distribution $\alpha_t \in \mathbb{R}^K$ and context vector $c_t \in \mathbb{R}^k$ at each time step $t$ according to Equations 4 to 7. Furthermore, $W^M, W^h \in \mathbb{R}^{k \times k}$ and $w \in \mathbb{R}^k$ are trainable parameters. Finally, note that $1_K$ represents a $K$-dimensional vector of ones.

$$M_t = [m_1 \ldots m_K] \in \mathbb{R}^{k \times K} \qquad (4)$$
$$G_t = \tanh\!\left(W^M M_t + 1_K^T (W^h h_t)\right) \in \mathbb{R}^{k \times K} \qquad (5)$$
$$\alpha_t = \mathrm{softmax}(w^T G_t) \in \mathbb{R}^{1 \times K} \qquad (6)$$
$$c_t = M_t \alpha_t^T \in \mathbb{R}^{k} \qquad (7)$$
For language modeling, we populate $M_t$ with a fixed window of the previous $K$ LSTM output vectors. To obtain a distribution over the next token we combine the context vector $c_t$ of the attention mechanism with the output vector $h_t$ of the LSTM using a trainable projection matrix $W^A \in \mathbb{R}^{k \times 2k}$. The resulting final output vector $n_t \in \mathbb{R}^k$ encodes the next-word distribution and is projected to the size of the vocabulary $|V|$. Subsequently, we apply a softmax to arrive at a probability distribution $y_t \in \mathbb{R}^{|V|}$ over the next token. This process is presented in Equation 9 where $W^V \in \mathbb{R}^{|V| \times k}$ and $b^V \in \mathbb{R}^{|V|}$ are trainable parameters.

$$n_t = \tanh\!\left(W^A \begin{bmatrix} h_t \\ c_t \end{bmatrix}\right) \in \mathbb{R}^{k} \qquad (8)$$
$$y_t = \mathrm{softmax}(W^V n_t + b^V) \in \mathbb{R}^{|V|} \qquad (9)$$
The problem of the attention mechanism above is that it quickly becomes computationally expensive for large K. Moreover, attending over many memories can make training hard as a lot of noise is introduced in early stages of optimization where the LSTM outputs (and thus the memory M t ) are more or less random. To alleviate these problems we now turn to pointer networks and a simple heuristic for populating M t that permits the efficient retrieval of identifiers in a large history of Python code.
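As an illustration of Equations 4-9, here is a small sketch of attention over a fixed window of the previous K LSTM outputs. It is a schematic re-implementation rather than the authors' code; parameter names mirror the equations and the random initialization is for demonstration only.

```python
import torch

k, K, V = 200, 20, 1000                        # hidden size, window size, vocabulary size
W_M, W_h = torch.randn(k, k), torch.randn(k, k)
w, W_A = torch.randn(k), torch.randn(k, 2 * k)
W_V, b_V = torch.randn(V, k), torch.zeros(V)

def attend_and_predict(h_t, M_t):
    """h_t: (k,) current LSTM output; M_t: (k, K) previous K outputs (Eq. 4)."""
    G_t = torch.tanh(W_M @ M_t + (W_h @ h_t).unsqueeze(1))   # Eq. 5, broadcast over the K columns
    alpha_t = torch.softmax(w @ G_t, dim=-1)                 # Eq. 6, shape (K,)
    c_t = M_t @ alpha_t                                      # Eq. 7, shape (k,)
    n_t = torch.tanh(W_A @ torch.cat([h_t, c_t]))            # Eq. 8
    y_t = torch.softmax(W_V @ n_t + b_V, dim=-1)             # Eq. 9, next-token distribution
    return y_t, alpha_t

y_t, alpha_t = attend_and_predict(torch.randn(k), torch.randn(k, K))
```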
SPARSE POINTER NETWORK
We develop an attention mechanism that provides a filtered view of a large history of Python tokens. At any given time step, the memory consists of context representations of the previous K identifiers introduced in the history. This allows us to model long-range dependencies found in identifier usage. For instance, a class identifier may be declared hundreds of lines of code before it is used. Given a history of Python tokens, we obtain a next-word distribution from a weighted average of the sparse pointer network for identifier reference and a standard neural language model. The weighting of the two is determined by a controller.
Formally, at time-step $t$, the sparse pointer network operates on a memory $M_t \in \mathbb{R}^{k \times K}$ of only the $K$ previous identifier representations (e.g. function identifiers, class identifiers and so on). In addition, we maintain a vector $m_t = [\mathrm{id}_1, \ldots, \mathrm{id}_K] \in \mathbb{N}^K$ of symbol ids for these identifier representations (i.e. pointers into the large global vocabulary).

As before, we calculate a context vector $c_t$ using the attention mechanism (Equation 7), but on a memory $M_t$ only containing representations of identifiers that were declared in the history. Next, we obtain a pseudo-sparse distribution over the global vocabulary from

$$s_t[i] = \begin{cases} \alpha_t[j] & \text{if } m_t[j] = i \\ -C & \text{otherwise} \end{cases} \qquad (10)$$
$$i_t = \mathrm{softmax}(s_t) \in \mathbb{R}^{|V|} \qquad (11)$$
where $-C$ is a large negative constant (e.g. $-1000$). In addition, we calculate a next-word distribution from a standard neural language model and we use a controller to calculate a distribution $\lambda_t \in \mathbb{R}^2$ over the language model and pointer network for the final weighted next-word distribution $y^*_t$ via

$$y_t = \mathrm{softmax}(W^V h_t + b^V) \in \mathbb{R}^{|V|} \qquad (12)$$
$$h^\lambda_t = [h_t; x_t; c_t] \in \mathbb{R}^{3k} \qquad (13)$$
$$\lambda_t = \mathrm{softmax}(W^\lambda h^\lambda_t + b^\lambda) \in \mathbb{R}^{2} \qquad (14)$$
$$y^*_t = [y_t; i_t]\, \lambda_t \in \mathbb{R}^{|V|} \qquad (15)$$
Here, $x_t$ is the representation of the input token, and $W^\lambda \in \mathbb{R}^{2 \times 3k}$ and $b^\lambda \in \mathbb{R}^2$ are a trainable weight matrix and bias respectively. This controller is conditioned on the input, output and context representations. This means that for deciding whether to refer to an identifier or generate from the global vocabulary, the controller has access to information from the encoded next-word distribution $h_t$ of the standard neural language model, as well as the attention-weighted identifier representations $c_t$ from the current history.
Figure 1 overviews this process. In it, the identifier base_path appears twice, once as an argument to a function and once as a member of a class (denoted by *). Each appearance has a different id in the vocabulary and obtains a different probability from the model. In the example, the model correctly chooses to refer to the member of the class instead of the out-of-scope function argument, although, from a user point-of-view, the suggestion would be the same in both cases.
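A schematic version of Equations 10-15 (not the released implementation) looks as follows: attention weights over the K stored identifier representations are scattered into a vocabulary-sized vector, and the controller output λ_t mixes the pointer distribution with the language-model distribution. Dimensions and random parameters are illustrative.

```python
import torch

k, K, V = 200, 20, 1000
C = 1000.0                                         # large constant of Eq. 10
W_V, b_V = torch.randn(V, k), torch.zeros(V)
W_lam, b_lam = torch.randn(2, 3 * k), torch.zeros(2)

def sparse_pointer_mix(h_t, x_t, c_t, alpha_t, m_t):
    """alpha_t: (K,) attention over identifiers; m_t: (K,) their vocabulary ids."""
    s_t = torch.full((V,), -C)
    s_t[m_t] = alpha_t                             # Eq. 10: copy weights onto identifier ids
    i_t = torch.softmax(s_t, dim=-1)               # Eq. 11: pseudo-sparse pointer distribution
    y_t = torch.softmax(W_V @ h_t + b_V, dim=-1)   # Eq. 12: language-model distribution
    h_lam = torch.cat([h_t, x_t, c_t])             # Eq. 13
    lam_t = torch.softmax(W_lam @ h_lam + b_lam, dim=-1)   # Eq. 14: controller
    return lam_t[0] * y_t + lam_t[1] * i_t         # Eq. 15: weighted next-word distribution

m_t = torch.randint(0, V, (K,))
y_star = sparse_pointer_mix(torch.randn(k), torch.randn(k), torch.randn(k),
                            torch.softmax(torch.randn(K), dim=-1), m_t)
```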
LARGE-SCALE PYTHON CORPUS
Previous work on code suggestion either focused on statically-typed languages (particularly Java) or trained on very small corpora. Thus, we decided to collect a new large-scale corpus of the dynamic programming language Python. According to the programming language popularity website Pypl (Carbonnelle, 2016), Python is the second most popular language after Java. It is also the 3rd most common language in terms of number of repositories on the open-source code repository GitHub, after JavaScript and Java (Zapponi, 2016).
We collected a corpus of 41M lines of Python code from GitHub projects. Ideally, we would like this corpus to only contain high-quality Python code, as our language model learns to suggest code from how users write code. However, it is difficult to automatically assess what constitutes high-quality code. Thus, we resort to the heuristic that popular code projects tend to be of good quality. There are two metrics on GitHub that we can use for this purpose, namely stars (similar to bookmarks) and forks (copies of a repository that allow users to freely experiment with changes without affecting the original repository). Similar to Allamanis & Sutton (2013) and , we select Python projects with more than 100 stars, sort by the number of forks descending, and take the top 1000 projects. We then removed projects that did not compile with Python3, leaving us with 949 projects. We split the corpus on the project level into train, dev, and test. Table 1 presents the corpus statistics.
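The repository-selection heuristic above can be scripted against the public GitHub search API roughly as follows. This is a sketch, not the crawler used for the released corpus; authentication, rate-limit handling, and the actual cloning step are omitted, and the query string is an assumption about how the heuristic maps onto the API.

```python
import requests

def top_python_repos(pages=10):
    """Python repos with >100 stars, ordered by fork count (descending).

    The search API returns at most 1,000 results (10 pages of 100), which happens
    to match the "top 1000 projects" heuristic described in the text.
    """
    repos = []
    for page in range(1, pages + 1):
        resp = requests.get(
            "https://api.github.com/search/repositories",
            params={"q": "language:python stars:>100", "sort": "forks",
                    "order": "desc", "per_page": 100, "page": page},
        )
        resp.raise_for_status()
        repos.extend(item["full_name"] for item in resp.json()["items"])
    return repos
```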
NORMALIZATION OF IDENTIFIERS
Unsurprisingly, the long tail of words in the vocabulary consists of rare identifiers. To improve generalization, we normalize identifiers before feeding the resulting token stream to our models. That is, we replace every identifier name with an anonymous identifier indicating the identifier group (class, variable, argument, attribute or function) concatenated with a random number that makes the identifier unique in its scope. Note that we only replace novel identifiers defined within a file. Identifier references to external APIs and libraries are left untouched. Consistent with previous corpus creation for code suggestion (e.g. Khanh Dam et al., 2016;White et al., 2015), we replace numerical constant tokens with $NUM$, remove comments, reformat the code, and replace tokens appearing less than five times with an $OOV$ (out of vocabulary) token.
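A rough sketch of the normalization step is shown below using Python's standard tokenize module. It is illustrative only: the real pipeline uses the AST to distinguish identifier groups (class, variable, argument, attribute, function), to keep references to external APIs untouched, and to number identifiers per scope, none of which this crude helper does.

```python
import io
import keyword
import tokenize

def normalize(source):
    """Crude sketch: replace identifiers with anonymous names and numbers with $NUM$."""
    mapping, out = {}, []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NUMBER:
            out.append((tokenize.NAME, "$NUM$"))
        elif tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            # A real implementation would consult the AST to decide the identifier group
            # and scope, and would leave library/API names unchanged.
            mapping.setdefault(tok.string, "var%d" % len(mapping))
            out.append((tok.type, mapping[tok.string]))
        else:
            out.append((tok.type, tok.string))
    return tokenize.untokenize(out)

print(normalize("total = price * 3\n"))   # roughly: var0 = var1 * $NUM$
```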
EXPERIMENTS
Although previous work by White et al. (2015) already established that a simple neural language model outperforms an n-gram model for code suggestion, we include a number of n-gram baselines to confirm this observation. Specifically, we use n-gram models for n ∈ {3, 4, 5, 6} with Modified Kneser-Ney smoothing (Kneser & Ney, 1995) from the Kyoto Language Modelling Toolkit (Neubig, 2012).
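For reference, a toy version of such an n-gram baseline can be put together with NLTK's language-model module. The paper used the Kyoto toolkit with Modified Kneser-Ney smoothing; NLTK's KneserNeyInterpolated is the plain interpolated variant, and the API names below are assumptions about that module rather than the authors' setup.

```python
from nltk.lm import KneserNeyInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline

# Each "sentence" is one tokenized line of Python code (tiny illustrative data).
train_lines = [["def", "f", "(", "x", ")", ":"], ["return", "x", "+", "$NUM$"]]
order = 3
train_ngrams, vocab = padded_everygram_pipeline(order, train_lines)
lm = KneserNeyInterpolated(order)
lm.fit(train_ngrams, vocab)
print(lm.score("x", ["("]))   # P(x | "(") under the smoothed model
```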
We train the sparse pointer network using mini-batch SGD with a batch size of 30 and truncated backpropagation through time (Werbos, 1990) with a history of 20 identifier representations. We use an initial learning rate of 0.7 and decay it by 0.9 after every epoch. As additional baselines, we test a neural language model with LSTM units with and without attention. For the attention language models, we experiment with a fixed-window attention memory of the previous 20 and 50 tokens respectively, and a batch size of 75.
All neural language models were developed in TensorFlow (Abadi et al., 2016) and trained using cross-entropy loss. While processing a Python source code file, the last recurrent state of the RNN is fed as the initial state of the subsequent sequence of the same file and reset between files. All models use an input and hidden size of 200, an LSTM forget gate bias of 1 (Jozefowicz et al., 2015), gradient norm clipping of 5 (Pascanu et al., 2013), and randomly initialized parameters in the interval (−0.05, 0.05). As regularizer, we use a dropout of 0.1 on the input representations. Furthermore, we use a sampled softmax (Jean et al., 2015) with a log-uniform sampling distribution and a sample size of 1000.
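The optimization details above translate into a training step roughly like the following. This is a PyTorch-style sketch rather than the TensorFlow code used for the paper, and the sampled-softmax loss is replaced by full cross-entropy for brevity.

```python
import torch

def train_epoch(model, batches, optimizer, clip_norm=5.0):
    """One epoch of truncated BPTT: state is carried across batches of a file and detached."""
    state = None
    for tokens, targets in batches:                   # tensors of shape (batch, seq_len)
        log_probs, state = model(tokens, state)
        state = tuple(s.detach() for s in state)      # truncate backpropagation through time
        loss = torch.nn.functional.nll_loss(
            log_probs.reshape(-1, log_probs.size(-1)), targets.reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)   # gradient norm clipping of 5
        optimizer.step()

# Learning rate 0.7, decayed by 0.9 after every epoch, as described above:
# optimizer = torch.optim.SGD(model.parameters(), lr=0.7)
# scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
```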
RESULTS
We evaluate all models using perplexity (PP), as well as accuracy of the top prediction (Acc) and the top five predictions (Acc@5). The results are summarized in Table 2.
We can confirm that for code suggestion neural models outperform n-gram language models by a large margin. Furthermore, adding attention improves the results substantially (2.3 lower perplexity and 3.4 percentage points increased accuracy). Interestingly, this increase can be attributed to a superior prediction of identifiers, which increased from an accuracy of 2.1% to 21.4%. An LSTM with an attention window of 50 gives us the best accuracy for the top prediction. We achieve further improvements for perplexity and accuracy of the top five predictions by using a sparse pointer network that uses a smaller memory of the past 20 identifier representations.
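The evaluation quantities reported in Table 2 can be computed from per-token log-probabilities roughly as follows; this sketch does not reproduce the paper's exact token filtering for the IDs/Other breakdown.

```python
import math
import torch

def perplexity(log_probs, targets):
    """log_probs: (N, |V|) per-token log-probabilities; targets: (N,) gold token ids."""
    nll = -log_probs.gather(1, targets.unsqueeze(1)).mean()
    return math.exp(nll.item())

def top_k_accuracy(log_probs, targets, k=5):
    """Fraction of positions where the gold token is among the k highest-scoring tokens."""
    topk = log_probs.topk(k, dim=-1).indices           # (N, k)
    return (topk == targets.unsqueeze(1)).any(dim=1).float().mean().item()
```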
QUALITATIVE ANALYSIS
Figures 3a-d show a code suggestion example involving an identifier usage. While the LSTM baseline is uncertain about the next token, we get a sensible prediction by using attention or the sparse pointer network. The sparse pointer network provides more reasonable alternative suggestions beyond the correct top suggestion.
Figures 3e-h show the use-case referring to a class attribute declared 67 tokens in the past. Only the Sparse Pointer Network makes a good suggestion. Furthermore, the attention weights in 3i demonstrate that this model distinguished attributes from other groups of identifiers. We give a full example of a token-by-token suggestion of the Sparse Pointer Network in Figure 4 in the Appendix.
RELATED WORK
Previous code suggestion work using methods from statistical NLP has mostly focused on n-gram models. Much of this work is inspired by Hindle et al. (2012) who argued that real programs fall into a much smaller space than the flexibility of programming languages allows. They were able to capture the repetitiveness and predictable statistical properties of real programs using language models. Subsequently, Tu et al. (2014) improved upon Hindle et al.'s work by adding a cache mechanism that allowed them to exploit locality stemming from the specialisation and decoupling of program modules. Tu et al.'s idea of adding a cache mechanism to the language model is specifically designed to exploit the properties of source code, and thus follows the same aim as the sparse attention mechanism introduced in this paper.
While the majority of preceding work trained on small corpora, Allamanis & Sutton (2013) created a corpus of 352M lines of Java code which they analysed with n-gram language models. The size of the corpus allowed them to train a single language model that was effective across multiple different project domains. White et al. (2015) later demonstrated that neural language models outperform n-gram models for code suggestion. They compared various n-gram models (up to nine grams), including Tu et al.'s cache model, with a basic RNN neural language model. Khanh Dam et al. (2016) compared White et al.'s basic RNN with LSTMs and found that the latter are better at code suggestion due to their improved ability to learn long-range dependencies found in source code. Our paper extends this line of work by introducing a sparse attention model that captures even longer dependencies.
The combination of lagged attention mechanisms with language modelling is inspired by Cheng et al. (2016) who equipped LSTM cells with a fixed-length memory tape rather than a single memory cell. They achieved promising results on the standard Penn Treebank benchmark corpus (Marcus et al., 1993). Similarly, Tran et al. added a memory block to LSTMs for language modelling of English, German and Italian and outperformed both n-gram and neural language models. Their memory encompasses representations of all possible words in the vocabulary rather than providing a sparse view as we do.
An alternative to our purely lexical approach to code suggestion involves the use of probabilistic context-free grammars (PCFGs) which exploit the formal grammar specifications and well-defined, deterministic parsers available for source code. These were used by Allamanis & Sutton (2014) to extract idiomatic patterns from source code. A weakness of PCFGs is their inability to model context-dependent rules of programming languages such as that variables need to be declared before being used. Maddison & Tarlow (2014) added context-aware variables to their PCFG model in order to capture such rules. Ling et al. (2016) recently used a pointer network to generate code from natural language descriptions. Our use of a controller for deciding whether to generate from a language model or copy an identifier using a sparse pointer network is inspired by their latent code predictor. However, their inputs (textual descriptions) are short whereas code suggestion requires capturing very long-range dependencies that we addressed by a filtered view on the memory of previous identifier representations.
CONCLUSIONS AND FUTURE WORK
In this paper, we investigated neural language models for code suggestion of the dynamically-typed programming language Python. We released a corpus of 41M lines of Python crawled from GitHub and compared n-gram, standard neural language models, and attention. By using attention, we observed an order of magnitude more accurate prediction of identifiers. Furthermore, we proposed a sparse pointer network that can efficiently capture long-range dependencies by only operating on a filtered view of a memory of previous identifier representations. This model achieves the lowest perplexity and best accuracy among the top five predictions. The Python corpus and code for replicating our experiment is released at https://github.com/uclmr/pycodesuggest.
The presented methods were only tested for code suggestion within the same Python file. We are interested in scaling the approach to the level of entire code projects and collections thereof, as well as integrating a trained code suggestion model into an existing IDE. Furthermore, we plan to work on code completion, i.e., models that provide a likely continuation of a partial token, using character language models (Graves, 2013).
Figure 4: Full example of code suggestion with a Sparse Pointer Network. Boldface tokens on the left show the first declaration of an identifier. The middle part visualizes the memory of representations of these identifiers. The right part visualizes the output λ of the controller, which is used for interpolating between the language model (LM) and the attention of the pointer network (Att).
APPENDIX
Figure 1: Sparse pointer network for code suggestion on a Python code snippet, showing the next-word distributions of the language model and identifier attention and their weighted combination through λ.
Figure 2: Example of the Python code normalization. Original file on the left and normalized version on the right.
Pointer Network attention over memory of identifier representations.
Figure 3: Code suggestion example involving a reference to a variable (a-d), a long-range dependency (e-h), and the attention weights of the Sparse Pointer Network (i).
Table 1: Python corpus statistics.

Dataset | #Projects | #Files  | #Lines     | #Tokens     | Vocabulary Size
Train   | 489       | 118 298 | 26 868 583 | 88 935 698  | 2 323 819
Dev     | 179       | 26 466  | 5 804 826  | 18 147 341  |
Test    | 281       | 43 062  | 8 398 100  | 30 178 356  |
Total   | 949       | 187 826 | 41 071 509 | 137 261 395 |
Table 2: Perplexity (PP), Accuracy (Acc) and Accuracy among top 5 predictions (Acc@5).

Model                  | Train PP | Dev PP | Test PP | Acc [%] (All / IDs / Other) | Acc@5 [%] (All / IDs / Other)
3-gram                 | 12.90    | 24.19  | 26.90   | 13.19 / - / -               | 50.81 / - / -
4-gram                 | 7.60     | 21.07  | 23.85   | 13.68 / - / -               | 51.26 / - / -
5-gram                 | 4.52     | 19.33  | 21.22   | 13.90 / - / -               | 51.49 / - / -
6-gram                 | 3.37     | 18.73  | 20.17   | 14.51 / - / -               | 51.76 / - / -
LSTM                   | 9.29     | 13.08  | 14.01   | 57.91 / 2.1 / 62.8          | 76.30 / 4.5 / 82.6
LSTM w/ Attention 20   | 7.30     | 11.07  | 11.74   | 61.30 / 21.4 / 64.8         | 79.32 / 29.9 / 83.7
LSTM w/ Attention 50   | 7.09     | 9.83   | 10.05   | 63.21 / 30.2 / 65.3         | 81.69 / 41.3 / 84.1
Sparse Pointer Network | 6.41     | 9.40   | 9.18    | 62.97 / 27.3 / 64.9         | 82.62 / 43.6 / 84.5
ACKNOWLEDGMENTSThis work was supported by Microsoft Research through its PhD Scholarship Programme, an Allen Distinguished Investigator Award, and a Marie Curie Career Integration Award.
Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek Gordon Murray, Benoit Steiner, Paul A. Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zhang. Tensorflow: A system for large-scale machine learning. CoRR, abs/1605.08695, 2016. URL http://arxiv.org/abs/1605.08695.
Miltiadis Allamanis and Charles Sutton. Mining idioms from source code. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2014, pp. 472-483. ACM, 2014.
Miltiadis Allamanis and Charles A. Sutton. Mining source code repositories at massive scale using language modeling. In MSR, pp. 207-216. IEEE Computer Society, 2013.
Miltiadis Allamanis, Earl T. Barr, Christian Bird, and Charles Sutton. Learning natural coding conventions. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2014, pp. 281-293. ACM, 2014.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.
Pierre Carbonnelle. Pypl popularity of programming language. http://pypl.github.io/PYPL.html, 2016. [Online; accessed 30-August-2016].
Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 551-561. Association for Computational Linguistics, 2016.
Subhasis Das and Chinmayee Shah. Contextual code completion using machine learning. 2015.
Alex Graves. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013.
Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28, pp. 1693-1701, 2015.
Abram Hindle, Earl T. Barr, Zhendong Su, Mark Gabel, and Premkumar Devanbu. On the naturalness of software. In Proceedings of the 34th International Conference on Software Engineering, ICSE '12, pp. 837-847. IEEE Press, 2012.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1-10, 2015.
Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 2342-2350, 2015.
H. Khanh Dam, T. Tran, and T. Pham. A deep language model for software code. ArXiv e-prints, August 2016.
R. Kneser and H. Ney. Improved backing-off for m-gram language modeling. In Acoustics, Speech, and Signal Processing, 1995 (ICASSP-95), vol. 1, pp. 181-184, 1995.
Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, Andrew Senior, Fumin Wang, and Phil Blunsom. Latent predictor networks for code generation. arXiv preprint arXiv:1603.06744, 2016.
Chris J. Maddison and Daniel Tarlow. Structured generative models of natural source code. In International Conference on Machine Learning, 2014.
Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.
Graham Neubig. Kylm - the Kyoto language modeling toolkit. http://www.phontron.com/kylm/, 2012. [Online; accessed 23-July-2016].
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, pp. 1310-1318, 2013.
Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom. Reasoning about entailment with neural attention. In ICLR, 2016.
Ke M. Tran, Arianna Bisazza, and Christof Monz. Recurrent memory networks for language modeling. In NAACL HLT 2016, pp. 321-331, 2016.
Zhaopeng Tu, Zhendong Su, and Premkumar Devanbu. On the localness of software. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2014, pp. 269-280. ACM, 2014.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pp. 2692-2700, 2015.
Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. Grammar as a foreign language. In Advances in Neural Information Processing Systems 28, pp. 2773-2781, 2015.
Paul J. Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550-1560, 1990.
Martin White, Christopher Vendome, Mario Linares-Vásquez, and Denys Poshyvanyk. Toward deep learning software repositories. In Proceedings of the 12th Working Conference on Mining Software Repositories, MSR '15, pp. 334-345. IEEE Press, 2015.
Carlo Zapponi. Githut - programming languages and github. http://githut.info/, 2016. [Online; accessed 19-August-2016].
| [
"https://github.com/uclmr/pycodesuggest."
] |
[
"DEVELOPING AND EVALUATING A PROBABILISTIC LR PARSER OF PART-OF-SPEECH AND PUNCTUATION LABELS*",
"DEVELOPING AND EVALUATING A PROBABILISTIC LR PARSER OF PART-OF-SPEECH AND PUNCTUATION LABELS*"
] | [
"Ted Briscoe \nComputer Laboratory\nUniversity of Cambridge Pembroke Street\nCB2 3QGCambridgeUK\n",
"John Carroll \nComputer Laboratory\nUniversity of Cambridge Pembroke Street\nCB2 3QGCambridgeUK\n"
] | [
"Computer Laboratory\nUniversity of Cambridge Pembroke Street\nCB2 3QGCambridgeUK",
"Computer Laboratory\nUniversity of Cambridge Pembroke Street\nCB2 3QGCambridgeUK"
] | [] | We describe an approach to robust domain-independent syntactic parsing of unrestricted naturally-occurring (English) input. The technique involves parsing sequences of part-of-speech and punctuation labels using a unification-based grammar coupled with a probabilistic LR parser. We describe the coverage of several corpora using this grammar and report the results of a parsing experiment using probabilities derived from bracketed training data. We report the first substantial experiments to assess the contribution of punctuation to deriving an accurate syntactic analysis, by parsing identical texts both with and without naturally-occurring punctuation marks.* Some of this work was carried out while the first author was visiting Rank Xerox, Grenoble. The work was also supported by DTI/SALT project 41/5808 'Integrated Language Database'. Geoff Nunberg provided encouragement and much advice on the analysis of punctuation, and Greg Grefenstette undertook the original tokenisation and segmentation of Susanne. Bernie Jones and Kiku Ribas made helpful comments on an earlier draft. We are responsible for any mistakes. | null | [
"https://www.aclweb.org/anthology/1995.iwpt-1.8.pdf"
] | 13,505,514 | cmp-lg/9510005 | d54f8f06a7aaab5e4c922053988868ef2a5349e4 |
DEVELOPING AND EVALUATING A PROBABILISTIC LR PARSER OF PART-OF-SPEECH AND PUNCTUATION LABELS*
Ted Briscoe
Computer Laboratory
University of Cambridge, Pembroke Street
CB2 3QG, Cambridge, UK
John Carroll
Computer Laboratory
University of Cambridge, Pembroke Street
CB2 3QG, Cambridge, UK
DEVELOPING AND EVALUATING A PROBABILISTIC LR PARSER OF PART-OF-SPEECH AND PUNCTUATION LABELS*
We describe an approach to robust domain-independent syntactic parsing of unrestricted naturally-occurring (English) input. The technique involves parsing sequences of part-of-speech and punctuation labels using a unification-based grammar coupled with a probabilistic LR parser. We describe the coverage of several corpora using this grammar and report the results of a parsing experiment using probabilities derived from bracketed training data. We report the first substantial experiments to assess the contribution of punctuation to deriving an accurate syntactic analysis, by parsing identical texts both with and without naturally-occurring punctuation marks.* Some of this work was carried out while the first author was visiting Rank Xerox, Grenoble. The work was also supported by DTI/SALT project 41/5808 'Integrated Language Database'. Geoff Nunberg provided encouragement and much advice on the analysis of punctuation, and Greg Grefenstette undertook the original tokenisation and segmentation of Susanne. Bernie Jones and Kiku Ribas made helpful comments on an earlier draft. We are responsible for any mistakes.
Introduction
This work is part of an effort to develop a robust, domain-independent syntactic parser capable of yielding the one correct analysis for unrestricted naturally-occurring input. Our goal is to develop a system with performance comparable to extant part-of-speech taggers, returning a syntactic analysis from which predicate-argument structure can be recovered, and which can support semantic interpretation. The requirement for a domain-independent analyser favours statistical techniques to resolve ambiguities, whilst the latter goal favours a more sophisticated grammatical formalism than is typical in statistical approaches to robust analysis of corpus material. Briscoe and Carroll (1993) describe a probabilistic parser using a wide-coverage unification-based grammar of English written in the Alvey Natural Language Tools (ANLT) metagrammatical formalism (Briscoe et al., 1987), generating around 800 rules in a syntactic variant of the Definite Clause Grammar formalism (DCG, Pereira & Warren, 1980) extended with iterative (Kleene) operators. The ANLT grammar is linked to a lexicon containing about 64K entries for 40K lexemes, including detailed subcategorisation information appropriate for the grammar, built semi-automatically from a learners' dictionary (Carroll & Grover, 1989). The resulting parser is efficient, capable of constructing a parse forest in what seems to be roughly quadratic time, and efficiently returning the ranked n-most likely analyses (Carroll, 1993, 1994). The probabilistic model is a refinement of probabilistic context-free grammar (PCFG), conditioning CF 'backbone' rule application on LR state and lookahead item. Unification of the 'residue' of features not incorporated into the backbone is performed at parse time in conjunction with reduce operations. Unification failure results in the associated derivation being assigned a probability of zero. Probabilities are assigned to transitions in the LALR(1) action table via a process of supervised training based on computing the frequency with which transitions are traversed in a corpus of parse histories. The result is a probabilistic parser which, unlike a PCFG, is capable of probabilistically discriminating derivations which differ only in terms of order of application of the same set of CF backbone rules, due to the parse context defined by the LR table.
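To make this training step concrete, the following is a minimal sketch (not the ANLT implementation) of how action probabilities over an LALR(1) table can be estimated from transition counts gathered while replaying a corpus of parse histories; the system described above uses Good-Turing smoothing for unseen transitions, whereas this sketch uses plain relative frequencies.

    from collections import defaultdict

    def estimate_action_probs(parse_histories):
        """parse_histories: iterable of (state, lookahead, action) triples
        collected while replaying correct derivations through the LR table."""
        counts = defaultdict(lambda: defaultdict(int))
        for state, lookahead, action in parse_histories:
            counts[(state, lookahead)][action] += 1

        probs = {}
        for context, actions in counts.items():
            total = sum(actions.values())
            # relative-frequency estimate; the paper applies Good-Turing smoothing here
            probs[context] = {a: c / total for a, c in actions.items()}
        return probs

    # A derivation's probability is the product of its transition probabilities;
    # unification failure of the feature residue sets the derivation's probability to zero.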
Experiments with this system revealed three major problems which our current research is addressing. Firstly, although the system is able to rank parses with a 75% chance that the correct analysis will be the most highly ranked, further improvement will require a 'lexicalised' system in which (minimally) probabilities are associated with alternative subcategorisation possibilities of individual lexical items. Currently, the relative frequency of subcategorisation possibilities for individual lexical items is not recorded in wide-coverage lexicons, such as ANLT or COMLEX (Grishman et al., 1994). Secondly, removal of punctuation from the input (after segmentation into text sentences) worsens performance, as punctuation both reduces syntactic ambiguity (Jones, 1994) and signals non-syntactic (discourse) relations between text units (Nunberg, 1990). Thirdly, the largest source of error on unseen input is the omission of appropriate subcategorisation values for lexical items (mostly verbs), preventing the system from finding the correct analysis. The current coverage of this system on a general corpus (e.g. Brown or LOB) is estimated to be around 20% by Briscoe (1994). We have developed a variant probabilistic LR parser which does not rely on subcategorisation and uses punctuation to reduce ambiguity. The analyses produced by this parser could be utilised for phrase-finding applications, recovery of subcategorisation frames, and other 'intermediate' level parsing problems.
Part-of-speech Tag Sequence Grammar
Several robust parsing systems exploit the comparative success of part-of-speech (PoS) taggers, such as Fidditch (Hindle, 1989) or MITFP (de Marcken, 1990), by reducing the input to a determinate sequence of extended PoS labels of the type which can be practically disambiguated in context using a (H)MM PoS tagger (e.g. Church, 1988). Such approaches, by definition, cannot exploit subcategorisation, and probably achieve some of their robustness as a result. However, such parsers typically also employ heuristic rules, such as 'low' attachment of PPs, to produce unique 'canonical' analyses. This latter step complicates the recovery of predicate-argument structure and does not integrate with a probabilistic approach to parsing.
We utilised the ANLT metagrammatical formalism to develop a feature-based, declarative description of PoS label sequences for English. This grammar compiles into a DCG-like grammar of approximately 400 rules. It has been designed to enumerate possible valencies for predicates (verbs, adjectives and nouns) by including separate rules for each pattern of possible complementation in English. The distinction between arguments and adjuncts is expressed, following X-bar theory (e.g. Jackendoff, 1977), by Chomsky-adjunction of adjuncts to maximal projections (XP → XP Adjunct) as opposed to government of arguments (i.e. arguments are sisters within X1 projections; X1 → X0 Arg1 ... ArgN). Although the grammar enumerates complementation possibilities and checks for global sentential well-formedness, it is best described as 'intermediate' as it does not attempt to associate 'displaced' constituents with their canonical position / grammatical role.
The other difference between this grammar and a more conventional one is that it incorporates some rules specifically designed to overcome limitations or idiosyncrasies of the PoS tagging process. For example, past participles functioning adjectivally, as in The disembodied head, are frequently tagged as past participles (VVN), i.e. The_AT disembodied_VVN head_NN1, so the grammar incorporates a rule which parses past participles as adjectival premodifiers in this context. Similar idiosyncratic rules are incorporated for dealing with gerunds, adjective-noun conversions, idiom sequences, and so forth.
This grammar was developed and refined in a corpus-based fashion (e.g. see Black, 1993) by testing against sentences from the Susanne corpus (Sampson, 1994), a 138K word treebanked and balanced subset of the Brown corpus 1 .
Text Grammar and Punctuation
Nunberg (1990) develops a partial 'text' grammar for English which incorporates many constraints that (ultimately) restrict syntactic and semantic interpretation. For example, textual adjunct clauses introduced by colons scope over following punctuation, as (1a) illustrates; whilst textual adjuncts introduced by dashes cannot intervene between a bracketed adjunct and the textual unit to which it attaches, as in (1b). (1) a. *He told them his reason: he would not renegotiate his contract, but he did not explain to the team owners. (vs. but would stay) b. *She left - who could blame her - (during the chainsaw scene) and went home.
We have developed a declarative grammar in the ANLT metagrammatical formalism, based on Nunberg's procedural description. This grammar captures the bulk of the text-sentential constraints described by Nunberg with a grammar which compiles into 26 DCG-like rules. Text grammar analyses are useful because they demarcate some of the syntactic boundaries in the text sentence and thus reduce ambiguity, and because they identify the units for which a syntactic analysis should, in principle, be found; for example, in (2), the absence of dashes would mislead a parser into seeking a syntactic relationship between three and the following names, whilst in fact there is only a discourse relation of elaboration between this text adjunct and pronominal three.
(2) The three - Miles J. Cooperman, Sheldon Teller, and Richard Austin - and eight other defendants were charged in six indictments with conspiracy to violate federal narcotic law. The rules of the text grammar divide into three groups: those introducing text-sentences, those defining text adjunct introduction and those defining text adjuncts (Nunberg, 1990). An example of each type of rule is given in (3a-c).
(3) a. T/txt-scl: TxtS → (Tu[+sc])* Tu[-sc] (+pex | +pqu)
    b. Ta/dash-: Tu[-sc] → T[-sc, -cl, -da] Ta[+da, da-]
    c. T/ta-da-_t: Ta[+da, da-] → +pda Tu[-sc, -da]
These rules are phrase structure schemata employing iterative operators, optionality and disjunction, preceded by a mnemonic name. Non-terminal categories are text sentences, units or adjuncts which carry features mostly representing the punctuation marks which occur as daughters in the rules (e.g. +sc represents presence of a semi-colon marker), whilst terminal punctuation is represented as +pxx (e.g. +pda, dash). (3a) states that a text sentence can contain zero or more text units with a semi-colon at their right boundary followed by a text unit optionally followed by a question or exclamation mark. (3b) states that a text unit not containing a semi-colon can consist of a text unit or adjunct not containing dashes, colons or semi-colons followed by a text adjunct introduced by a dash. This type of 'unbalanced' adjunct can only be expanded by (3c), which states that it consists of a single opening dash followed by a text unit which does not itself contain dashes or semi-colons. The features on the first daughter of (3b) force dash adjuncts to have lower precedence and narrower scope than colons or semi-colons, blocking interpretations of multiple dashes as sequences of 'unbalanced' adjuncts. Nunberg (1990) invokes rules of (point) absorption which delete punctuation marks (inserted according to a simple context-free text grammar) when adjacent to other 'stronger' punctuation marks. For instance, he treats all dash-interpolated text adjuncts as underlyingly balanced, but allows a rule of point absorption to convert (4a) into (4b).
(4) a. *Max fell - John had kicked him -. b. Max fell - John had kicked him. The various rules of absorption introduce procedurality into the grammatical framework and require the positing of underlying forms which are not attested in text. For this reason, 'absorption' effects are captured through propagation of featural constraints in parse trees. For instance, (4a) is blocked by including distinct rules for the introduction of balanced and unbalanced text adjuncts and only licensing the latter text-sentence finally. The text grammar has been tested on Susanne and covers 99.8% of sentences. (The failures are mostly text segmentation problems). The number of analyses varies from one (71%) to the thousands (0.1%). Just over 50% of Susanne sentences contain some punctuation, so around 20% of the singleton parses are punctuated. The major source of ambiguity in the analysis of punctuation concerns the function of commas and their relative scope as a result of a decision to distinguish delimiters and separators (Nunberg 1990:36). Therefore, a text sentence containing eight commas (and no other punctuation) will have 3170 analyses. The multiple uses of commas cannot be resolved without access to (at least) the syntactic context of occurrence.
The Integrated Grammar
Despite Nunberg's observation that text grammar is distinct from syntax, text grammatical ambiguity favours interleaved application of text grammatical and syntactic constraints. The integration of text and PoS sequence grammars is straightforward and remains modular, in that the text grammar is 'folded into' the PoS sequence grammar, by treating text and syntactic categories as overlapping and dealing with the properties of each using disjoint sets of features, principles of feature propagation, and so forth. The text grammar rules are represented as left- or right-branching rules of 'Chomsky-adjunction' to lexical or phrasal constituents. For example, the simplified rule for combining NP appositional or parenthetical text adjuncts is N2[+ta] → H2 Ta[+bal], which states that a NP containing a textual adjunct consists of a head NP followed by a textual adjunct with balanced delimiters (dashes, brackets or commas). Rules of this form ensure that syntactic and textual analysis are mutually 'transparent' and orthogonal so, for example, any rules of semantic interpretation associated with syntactic rules continue to function unmodified. Such rules attach text adjuncts to the constituents over which they semantically scope, so it would be possible, in principle, to develop a semantics for them. In addition to the core text grammatical rules which carry over unchanged from the stand-alone text grammar, 44 syntactic rules (of pre- and post-posing, and coordination) now include (often optional) comma markers corresponding to the purely 'syntactic' uses of punctuation.
The approach to text grammar taken here is in many ways similar to that of Jones (1994). However, he opts to treat punctuation marks as clitics on words which introduce additional featural information into standard syntactic rules. Thus, his grammar is thoroughly integrated and it would be harder to extract an independent text grammar or build a modular semantics. The coverage of the integrated version of the text grammar is described in more detail in Briscoe & Carroll (1994).
Parsing the Susanne and SEC Corpora
The integrated grammar has been used to parse Susanne and the quite distinct SEC corpus (Taylor & Knowles, 1988), a 50K word treebanked corpus of transcribed British radio programmes punctuated by the corpus compilers. Both corpora were retagged with determinate punctuation and PoS labelling using the Acquilex HMM tagger (Elworthy, 1993, 1994) trained on text tagged with a slightly modified version of CLAWS-II labels (Garside et al., 1987).
Coverage and Average Ambiguity
To examine the efficiency and coverage of the grammar we applied it to our retagged versions of Susanne and SEC. We used the ANLT chart parser (Carroll, 1993), but modified just to count the number of possible parses in the parse forests (Billot & Lang, 1989) rather than actually unpacking them. We also imposed a per-sentence time-out of 30 seconds CPU time, running in Franz Allegro Common Lisp 4.2 on an HP PA-RISC 715/100 workstation with 96 Mbytes of physical memory. We define the 'coverage' of the grammar to be the inverse of the proportion of sentences for which no analysis was found; this is a weak measure since discovery of one or more global analyses does not entail that the correct analysis is recovered. For both corpora, the majority of sentences analysed successfully received under 100 parses, although there is a long tail in the distribution. Monitoring this distribution is helpful during grammar development to ensure that coverage is increasing but the ambiguity rate is not. A more succinct though less intuitive measure of ambiguity rate for a given corpus is what we call the average parse base (APB), defined as the geometric mean over all sentences in the corpus of the n-th root of p (i.e. p^(1/n)), where n is the number of words in a sentence and p the number of parses for that sentence. Thus, given a sentence n tokens long, the APB raised to the nth power gives the number of analyses that the grammar can be expected to assign to a sentence of that length in the corpus. Table 1 gives these measures for all of the sentences in Susanne and in SEC. As the grammar was developed solely with reference to Susanne, coverage of SEC is quite robust. The two corpora differ considerably since the former is drawn from American written
text whilst the latter represents British transcribed spoken material. The corpora overall contain material drawn from widely disparate genres / registers, and are more complex than those used in DARPA ATIS tests and more diverse than those used in MUC. The APBs for Susanne and SEC of 1.256 and 1.239 respectively indicate that sentences of average length in each corpus could be expected to be assigned of the order of 97 and 126 analyses (i.e. 1.256^20.1 and 1.239^22.6). Black et al. (1993:156) quote a parse base of 1.35 for the IBM grammar for computer manuals applied to sentences 1-17 words long. Although, as mentioned above, Black's measure may not be exactly the same as our APB measure, it is probable that the IBM grammar assigns more analyses than ours for sentences of the same length. Black achieves a coverage of around 95%, as opposed to our coverage rate of 67-74% on much more heterogeneous data and longer sentences. The parser throughput on these tests, for sentences successfully analysed, is around 45 words per CPU second on an HP PA-RISC 715/100. Sentences of up to 30 tokens (words plus punctuation) are parsed in an average of under 0.6 seconds each, whilst those around 60 tokens take on average 4.5 seconds. Nevertheless, the relationship between sentence length and processing time is fitted well by a quadratic function, supporting the findings of Carroll (1994) that in practice NL grammars do not evince worst-case parsing complexity.
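As a worked illustration of the APB definition above, the following sketch (using hypothetical sentence data) computes the geometric mean of the per-sentence n-th root of the parse count:

    import math

    def average_parse_base(sentences):
        """sentences: list of (n_words, n_parses) pairs for successfully parsed sentences."""
        logs = [math.log(p) / n for n, p in sentences if p > 0]
        return math.exp(sum(logs) / len(logs))

    # e.g. an APB of 1.256 predicts roughly 1.256**20.1, i.e. about 97 analyses,
    # for a Susanne sentence of average length (20.1 tokens).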
Coverage, Ambiguity and Punctuation
We have also run experiments to evaluate the degree to which punctuation is contributing useful information. Intuitively, we would expect the exploitation of text grammatical constraints to both reduce ambiguity and extend coverage (where punctuation cues discourse rather than syntactic relations between constituents). Jones (1994) reports a preliminary experiment evaluating reduction of ambiguity by punctuation. However, the grammar he uses was developed only to cover the test sentences, drawn entirely from the SEC corpus, which was punctuated post hoc by the corpus developers (Taylor and Knowles, 1988). We took all in-coverage sentences from Susanne of length 8-40 words inclusive containing internal punctuation; a total of 2449 sentences. The APB for this set was 1.273, mean length 22.5 words, giving an expected number of analyses for an average sentence of 225. We then removed all sentence-internal punctuation from this set and re-parsed it. Around 8% of sentences now failed to receive an analysis. For those that did (mean length 20.7 words), the APB was now 1.320, so an average sentence would be assigned 310 analyses, 38% more than before. On closer inspection, the increase in ambiguity is due to two factors: a) a significant proportion of sentences that previously received 1-9 analyses now receive more, and b) there is a much more substantial tail in the distribution of sentence length vs. number of parses, due to some longer sentences being assigned many more parses. Manual examination of 100 depunctuated examples revealed that in around a third of cases, although the system returned global analyses, the correct one was not in this set (Briscoe & Carroll, 1994). With a more constrained (subcategorised) syntactic grammar, many of these examples would not have received any global syntactic analysis.
Parse Selection
A probabilistic LR parser was trained with the integrated grammar by exploiting the Susanne treebank bracketing. An LR parser was applied to unlabelled bracketed sentences from the Susanne treebank, and a new treebank of 1758 correct and complete analyses with respect to the integrated grammar was constructed semi-automatically by manually resolving the remaining ambiguities. 250 sentences from the new treebank were kept back for testing. The remainder, together with a further set of analyses from 2285 treebank sentences that were not checked manually, were used to train a probabilistic version of the LR parser, using Good-Turing smoothing to estimate the probability of unseen transitions in the LALR(1) table (Carroll, 1993). The probabilistic parser can then return a ranking of all possible analyses for a sentence, or efficiently return just the n-most probable (Carroll, 1993).
The probabilistic parser was tested on the 250 sentences held out from the manually-created treebank (with mean length 18.2 tokens, mean number of parses per sentence 977, and APB 1.252); in this test 85 sentences (34%) had the correct analysis ranked in the top three. This figure rose to 51% for sentences of less than 20 words. Considering just the highest ranked analysis for each sentence, in Sampson, Haigh & Atwell's (1989) measure of correct rule application the parser scored a mean of 83.5% correct over all 250 sentences. Table 2 shows the results of this test, with respect to the original Susanne bracketings, using the Grammar Evaluation Interest Group scheme (GEIG, see e.g. Harrison et al., 1991). This compares unlabelled bracketings derived from corpus treebanks with those derived from parses for the same sentences by computing recall, the ratio of matched brackets over all brackets in the treebank; precision, the ratio of matched brackets over all brackets found by the parser; 'crossing' brackets, the number of times a bracketed sequence output by the parser overlaps with one from the treebank but neither is properly contained in the other; and minC, the number of sentences for which all of the analyses had one or more crossings. The table also gives an indication of the best and worst possible performance of the disambiguation component of the system, showing the results obtained when parse selection is replaced by a simple random choice, and the results of evaluating the manually-created treebank against the corresponding Susanne bracketings. In this latter figure, the mean number of crossings is greater than zero mainly because of compound noun bracketing ambiguity, which our grammar does not attempt to resolve, always returning a right-branching binary analysis. Black (1993:7) uses the crossing brackets measure to define a notion of structural consistency, where the structural consistency rate for the grammar is defined as the proportion of sentences for which at least one analysis contains no crossing brackets, and reports a rate of around 95% for the IBM grammar tested on the computer manual corpus. The problem with the GEIG scheme and with structural consistency is that both are still weak measures (designed to avoid problems of parser/treebank representational compatibility) which lead to unintuitive numbers whose significance still depends heavily on details of the relationship between the representations compared (c.f. the compound noun issue mentioned above).
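The bracketing measures just described can be made concrete with a rough sketch; the span representation (each bracket as a (start, end) token span) is an assumption for illustration, not the GEIG reference implementation:

    def geig_scores(gold_brackets, parser_brackets):
        """gold_brackets, parser_brackets: collections of (start, end) token spans."""
        gold, pred = set(gold_brackets), set(parser_brackets)
        matched = gold & pred
        recall = len(matched) / len(gold)          # matched over all treebank brackets
        precision = len(matched) / len(pred)       # matched over all parser brackets

        def crosses(a, b):
            # spans overlap but neither properly contains the other
            return (a[0] < b[0] < a[1] < b[1]) or (b[0] < a[0] < b[1] < a[1])

        crossings = sum(1 for p in pred if any(crosses(p, g) for g in gold))
        return recall, precision, crossings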
Schabes et al. (1993) and Magerman (1995) report results using the GEIG evaluation scheme which are numerically superior to ours. However, their experiments are not strictly comparable because they both utilise more homogeneous and probably simpler corpora. In addition, Schabes et al. do not recover tree labelling, whilst Magerman has developed a parser designed to produce identical analyses to those used in the Penn Treebank, removing the problem of spurious errors due to grammatical incompatibility. Both these approaches achieve better coverage by constructing the grammar fully automatically. No one has yet shown that any robust parser is practical and useful for some NLP task. However, it seems likely that, say, rule-to-rule semantic interpretation will be easier with hand-constructed grammars with an explicit, determinate ruleset. A more meaningful comparison will require application of different parsers to an identical and extended test suite and utilisation of a more stringent standard evaluation procedure sensitive to node labellings.
Parse Selection and Punctuation
In order to assess the contribution of punctuation to the selection of the correct analysis, we applied the same trained version of the integrated grammar to the 106 sentences from the test set which contain internal punctuation, both with and without the punctuation marks in the input. A comparison of the GEIG evaluation metrics for this set of sentences punctuated and unpunctuated gives a measure of the contribution of punctuation to parse selection on this data. (The results for the unpunctuated set were computed against a version of the Susanne treebank from which punctuation had also been removed.) As table 3 shows, recall declines by 10%, precision by 5% and there are an average of 1.27 more crossing brackets per sentence. These results indicate clearly that punctuation and text grammatical constraints can play an important role in parse selection.
Conclusions
Briscoe and Carroll (1993) and Carroll (1993) showed that the LR model, combined with a grammar exploiting subcategorisation constraints, could achieve good parse selection accuracy but at the expense of poor coverage of free text. The results reported here suggest that improved coverage of heterogeneous text can be achieved by exploiting textual and grammatical constraints on PoS and punctuation sequences. The experiments show that grammatical coverage can be greatly increased by relaxing subcategorisation constraints, and that text grammatical or punctuation-cued constraints can reduce ambiguity and increase coverage during parsing. To our knowledge these are the first experiments which objectively demonstrate the utility of punctuation for resolving syntactic ambiguity and improving parser coverage. They extend work by Jones (1994) and Briscoe and Carroll (1994) by applying a wide-coverage text grammar to substantial quantities of naturally-punctuated text and by quantifying the contribution of punctuation to ambiguity resolution in a well-defined probabilistic parse selection model.
Accurate enough parse selection for practical applications will require a more lexicalised system. Magerman's (1995) parser is an extension of the history-based parsing approach developed at IBM (e.g. Black, 1993) in which rules are conditioned on lexical and other (essentially arbitrary) information available in the parse history. In future work, we intend to explore a more restricted and semantically-driven version of this approach in which, firstly, probabilities are associated with different subcategorisation possibilities, and secondly, alternative predicate-argument structures derived from the grammar are ranked probabilistically. However, the massively increased coverage obtained here by relaxing subcategorisation constraints underlines the need to acquire accurate and complete subcategorisation frames in a corpus-driven fashion, before such constraints can be exploited robustly and effectively with free text.
Table 3: GEIG evaluation metrics for test set of 106 unseen punctuated sentences (mean length with punctuation 21.4 words; without, 19.6)

                                                          minC   Crossings   Recall (%)   Precision (%)
With punctuation (top-ranked 3 analyses, weighted)          78        3.25        74.38           40.78
Punctuation removed (top-ranked 3 analyses, weighted)       82        4.52        65.54           35.95
The grammar currently covers more than 75% of the sentences. Many of the remaining failures for shorter text sentences are a consequence of the root S node requirement, since they represent elliptical noun or prepositional phrases in dialogue. Other failures on sentences are a consequence of incorporation of complementation constraints for auxiliary verbs into the grammar but the lack of any treatment of unbounded dependencies. Nevertheless, we tolerate these 'deficiencies', since they have the effect of limiting the number of analyses recovered in other cases, and will not, for example, affect unduly the recovery of subcategorisation frames from the resulting analyses.
Black et al. (1993:13) define an apparently similar measure, parse base, as the "geometric mean of the number of parses per word for the entire corpus", but in the immediately following sentence talk about raising it to the power of the number of words in a sentence, which is inappropriate for a simple ratio.
This is a strong measure, since it not only accounts for structural identity between trees, but also correct rule application at every node.
The structure of shared forests in ambiguous parsing. S Billot, B Lang, Proceedings of the 27th Meeting of Association for Computational Linguistics, Vancouver, Canada. Billot, S. and Lang, B. 1989. The structure of shared forests in ambiguous parsing. In Proceedings of the 27th Meeting of Association for Computational Linguistics, 143-151. Vancouver, Canada.
Statistically-Driven Computer Grammars of English: The IBM/Lancaster Approach. E Black, R Garside, G Leech, Rodopi, Amsterdam. Black, E., Garside, R. and Leech, G. (eds.) 1993. Statistically-Driven Computer Grammars of English: The IBM/Lancaster Approach. Rodopi, Amsterdam.
Prospects for practical parsing of unrestricted text: robust statistical parsing techniques. E Briscoe, Corpus-based Research into Language, Rodopi, Amsterdam. Briscoe, E. 1994. Prospects for practical parsing of unrestricted text: robust statistical parsing techniques. In Oostdijk, N & de Haan, P. eds. Corpus-based Research into Language. Rodopi, Amsterdam: 97-120.
Generalised probabilistic LR parsing for unification-based grammars. E Briscoe, J Carroll, Computational Linguistics 19. Briscoe, E. and Carroll, J. 1993. Generalised probabilistic LR parsing for unification-based grammars. Computational Linguistics 19.1: 25-60.
Parsing (with) Punctuation. E Briscoe, J Carroll, Rank Xerox Research Centre, Grenoble, MLTT-TR-007. Briscoe, E. and Carroll, J. 1994. Parsing (with) Punctuation. Rank Xerox Research Centre, Grenoble, MLTT-TR-007.
A formalism and environment for the development of a large grammar of English. E Briscoe, C Grover, B Boguraev, J Carroll, Proceedings of the 10th International Joint Conference on Artificial Intelligence, Milan, Italy. Briscoe, E., Grover, C., Boguraev, B. and Carroll, J. 1987. A formalism and environment for the development of a large grammar of English. In Proceedings of the 10th International Joint Conference on Artificial Intelligence, 703-708. Milan, Italy.
Practical unification-based parsing of natural language. J Carroll, Cambridge University, Computer Laboratory, TR-314. Carroll, J. 1993. Practical unification-based parsing of natural language. Cambridge University, Computer Laboratory, TR-314.
Relating complexity to practical performance in parsing with wide-coverage unification grammars. J Carroll, Proceedings of the 32nd Meeting of Association for Computational Linguistics, Las Cruces, NM. Carroll, J. 1994. Relating complexity to practical performance in parsing with wide-coverage unification grammars. In Proceedings of the 32nd Meeting of Association for Computational Linguistics, 287-294. Las Cruces, NM.
The derivation of a large computational lexicon for English from LDOCE. J Carroll, C Grover, Computational Lexicography for Natural Language Processing, Longman, London. Carroll, J. and Grover, C. 1989. The derivation of a large computational lexicon for English from LDOCE. In Boguraev, B. and Briscoe, E. eds. Computational Lexicography for Natural Language Processing. Longman, London: 117-134.
A stochastic parts program and noun phrase parser for unrestricted text. K Church, Proceedings of the 2nd Conference on Applied Natural Language Processing, Austin, Texas. Church, K. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Proceedings of the 2nd Conference on Applied Natural Language Processing, 136-143. Austin, Texas.
Part-of-speech tagging and phrasal tagging. D Elworthy, Cambridge University Computer Laboratory, Acquilex-II Working Paper 10. Elworthy, D. 1993. Part-of-speech tagging and phrasal tagging. Acquilex-II Working Paper 10, Cambridge University Computer Laboratory (can be obtained from cide@cup.cam.ac.uk).
Does Baum-Welch re-estimation help taggers? D Elworthy, Proceedings of the 4th Conf. Applied NLP, Stuttgart, Germany. Elworthy, D. 1994. Does Baum-Welch re-estimation help taggers? In Proceedings of the 4th Conf. Applied NLP. Stuttgart, Germany.
Computational analysis of English. R Garside, G Leech, G Sampson, Longman, London. Garside, R., Leech, G. and Sampson, G. 1987. Computational analysis of English. Longman, London.
Comlex syntax: building a computational lexicon. R Grishman, C Macleod, A Meyers, Proceedings of the International Conference on Computational Linguistics, COLING-94, Kyoto, Japan. Grishman, R., Macleod, C. and Meyers, A. 1994. Comlex syntax: building a computational lexicon. In Proceedings of the International Conference on Computational Linguistics, COLING-94, 268-272. Kyoto, Japan.
The Alvey Natural Language Tools Grammar (4th Release). C Grover, J Carroll, E Briscoe, Cambridge University Computer Laboratory, TR-284. Grover, C., Carroll, J. and Briscoe, E. 1993. The Alvey Natural Language Tools Grammar (4th Release). Cambridge University Computer Laboratory, TR-284.
Evaluating syntax performance of parser/grammars of English. P Harrison, S Abney, E Black, D Flickenger, C Gdaniec, R Grishman, D Hindle, B Ingria, M Marcus, B Santorini, T Strzalkowski, Proceedings of the Workshop on Evaluating Natural Language Processing Systems, ACL. Harrison, P., Abney, S., Black, E., Flickenger, D., Gdaniec, C., Grishman, R., Hindle, D., Ingria, B., Marcus, M., Santorini, B. and Strzalkowski, T. 1991. Evaluating syntax performance of parser/grammars of English. In Proceedings of the Workshop on Evaluating Natural Language Processing Systems. ACL.
Acquiring disambiguation rules from text. D Hindle, Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada. Hindle, D. 1989. Acquiring disambiguation rules from text. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, 118-25. Vancouver, Canada.
X-bar Syntax. R Jackendoff, MIT Press. Jackendoff, R. 1977. X-bar Syntax. MIT Press, Cambridge, MA.
Can punctuation help parsing? B Jones, Proceedings of Coling94, Kyoto, Japan. Jones, B. 1994. Can punctuation help parsing? In Proceedings of Coling94. Kyoto, Japan.
Statistical decision-tree models for parsing. D Magerman, Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, Boston, MA. Magerman, D. 1995. Statistical decision-tree models for parsing. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. Boston, MA.
Parsing the LOB corpus. C de Marcken, Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, New York. de Marcken, C. 1990. Parsing the LOB corpus. In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, 243-251. New York.
The linguistics of punctuation. G Nunberg, CSLI Lecture Notes 18. Nunberg, G. 1990. The linguistics of punctuation. CSLI Lecture Notes 18, Stanford, CA.
Definite clause grammars for language analysis - a survey of the formalism and a comparison with augmented transition networks. F Pereira, D Warren, Artificial Intelligence 13. Pereira, F. and Warren, D. 1980. Definite clause grammars for language analysis - a survey of the formalism and a comparison with augmented transition networks. Artificial Intelligence 13.3: 231-278.
Susanne: a Doomsday book of English grammar. G Sampson, Corpus-based Research into Language, Rodopi, Amsterdam. Sampson, G. 1994. Susanne: a Doomsday book of English grammar. In Oostdijk, N & de Haan, P. eds. Corpus-based Research into Language. Rodopi, Amsterdam: 169-188.
Natural language analysis by stochastic optimization: a progress report on Project APRIL. G Sampson, R Haigh, E Atwell, Journal of Experimental and Theoretical Artificial Intelligence 1. Sampson, G., Haigh, R., and Atwell, E. 1989. Natural language analysis by stochastic optimization: a progress report on Project APRIL. Journal of Experimental and Theoretical Artificial Intelligence 1: 271-287.
Parsing of the Wall Street Journal with the inside-outside algorithm. Y Schabes, M Roth, R Osborne, Proceedings of the Meeting of European Association for Computational Linguistics, Utrecht, The Netherlands. Schabes, Y., Roth, M. and Osborne, R. 1993. Parsing of the Wall Street Journal with the inside-outside algorithm. In Proceedings of the Meeting of European Association for Computational Linguistics. Utrecht, The Netherlands.
Manual of information to accompany the SEC corpus: the machine-readable corpus of spoken English. L Taylor, G Knowles, University of Lancaster, UK, Ms. Taylor, L. and Knowles, G. 1988. Manual of information to accompany the SEC corpus: the machine-readable corpus of spoken English. University of Lancaster, UK, Ms.
| [] |
[
"Detection of Propaganda Techniques in Visuo-Lingual Metaphor in Memes",
"Detection of Propaganda Techniques in Visuo-Lingual Metaphor in Memes"
] | [
"Sunil Gundapu sunil.g@research.iiit.ac.in \nLanguage Technologies Research Centre KCIS, IIIT Hyderabad Telangana\nLanguage Technologies Research Centre KCIS\nIIIT Hyderabad Telangana\nIndia, India\n",
"Radhika Mamidi radhika.mamidi@iiit.ac.in \nLanguage Technologies Research Centre KCIS, IIIT Hyderabad Telangana\nLanguage Technologies Research Centre KCIS\nIIIT Hyderabad Telangana\nIndia, India\n"
] | [
"Language Technologies Research Centre KCIS, IIIT Hyderabad Telangana\nLanguage Technologies Research Centre KCIS\nIIIT Hyderabad Telangana\nIndia, India",
"Language Technologies Research Centre KCIS, IIIT Hyderabad Telangana\nLanguage Technologies Research Centre KCIS\nIIIT Hyderabad Telangana\nIndia, India"
] | [] | The exponential rise of social media networks has allowed the production, distribution, and consumption of data at a phenomenal rate. Moreover, the social media revolution has brought a unique phenomenon to social media platforms called Internet memes. Internet memes are one of the most popular contents used on social media, and they can be in the form of images with a witty, catchy, or satirical text description. In this paper, we are dealing with propaganda that is often seen in Internet memes in recent times. Propaganda is communication, which frequently includes psychological and rhetorical techniques to manipulate or influence an audience to act or respond as the propagandist wants. To detect propaganda in Internet memes, we propose a multimodal deep learning fusion system that fuses the text and image feature representations and outperforms individual models based solely on either text or image modalities. | 10.48550/arxiv.2205.02937 | [
"https://arxiv.org/pdf/2205.02937v1.pdf"
] | 248,562,761 | 2205.02937 | 14dba1017d8a20cce34289f542c082b62f357423 |
Detection of Propaganda Techniques in Visuo-Lingual Metaphor in Memes
Sunil Gundapu sunil.g@research.iiit.ac.in
Language Technologies Research Centre KCIS, IIIT Hyderabad Telangana
Language Technologies Research Centre KCIS
IIIT Hyderabad Telangana
India, India
Radhika Mamidi radhika.mamidi@iiit.ac.in
Language Technologies Research Centre KCIS, IIIT Hyderabad Telangana
Language Technologies Research Centre KCIS
IIIT Hyderabad Telangana
India, India
Detection of Propaganda Techniques in Visuo-Lingual Metaphor in Memes
The exponential rise of social media networks has allowed the production, distribution, and consumption of data at a phenomenal rate. Moreover, the social media revolution has brought a unique phenomenon to social media platforms called Internet memes. Internet memes are one of the most popular contents used on social media, and they can be in the form of images with a witty, catchy, or satirical text description. In this paper, we are dealing with propaganda that is often seen in Internet memes in recent times. Propaganda is communication, which frequently includes psychological and rhetorical techniques to manipulate or influence an audience to act or respond as the propagandist wants. To detect propaganda in Internet memes, we propose a multimodal deep learning fusion system that fuses the text and image feature representations and outperforms individual models based solely on either text or image modalities.
observation and analysis. Another set of techniques uses emotional language to motivate the viewer to accept the speaker's claims based only on the emotional bond being created, which blocks any rational analysis of the argument.
In recent times, propagandists have been generating Internet memes using propaganda techniques to influence viewers or readers to believe in someone or something. The American scientist Peter W. Singer describes in his book "LikeWar" that social media propaganda and misinformation are emerging as a new weapon in modern warfare. Therefore, it is essential to detect propaganda campaigns in Internet memes in order to stop their spread. So, in this paper, we aim to build a multimodal fusion system that can identify Internet memes in which propaganda techniques are used. The system takes the (text, image) pair of an Internet meme as input and determines which of the propaganda techniques are used in the textual and visual content of the meme.
Related Work
Evolution of Propaganda: Propaganda techniques are not new. The term propaganda was used in the early 17th century and was first used to propagate Catholic beliefs and practices in the New World, and later to manipulate people in public gatherings such as festivals, games, and theaters [Woolley and Howard, 2018]. But in the present technological world, this propaganda has progressed to computational propaganda [Bolsover and Howard, 2017b], where information is dispensed through technology such as social media platforms so that it is possible to reach well-targeted communities at high speed. The propaganda shared on these platforms can be text, visual, or text-vision combinations. Internet memes are critical in spreading multimodal propaganda on social media platforms [DiResta, 2018]. The present social media ecosystem and virality bots allow memes to spread effortlessly, switching from one target group to another. Currently, efforts to curb the spread of such memes are focused on analyzing social media networks and searching for fake accounts and bots to lessen the spread of such content [Cresci et al., 2017, Yang et al., 2019b].
Propaganda in Text Modality: In the natural language processing community, much research has been done on propaganda by analyzing textual content [Rashkin et al., 2017]. Rashkin et al. [2017] studied propaganda at the document level by creating the TSHP-17 dataset. This dataset was developed with the help of the English Gigaword corpus and labelled with four classes: satire, trusted, hoax, and propaganda. On this dataset, they trained a logistic regression model with n-gram level word representations. To analyze propaganda at the sentence level, a new QProp dataset was developed with two labels: propaganda and non-propaganda. On this binary labeled corpus, various machine learning models such as logistic regression and SVMs were trained to discriminate propaganda from non-propaganda data points.
In a similar fashion, Habernal et al. [2017] created a dataset with 1300 data points covering five propaganda beliefs, including irrelevant authority, ad hominem, and red herring, which are directly associated with propaganda techniques. Another line of work examines propaganda at the fragment level, creating the PTC dataset by annotating news articles with 18 propaganda techniques. Two types of experiments were done on this dataset. The first is a two-class classification: whether the given input news article uses any of the 18 techniques or not. The other task is multi-label classification and span detection: for the input text, find the spans of text fragments where propaganda techniques are used and identify the type of each technique. Recently, Martino et al. [2020], surveying the detection of computational propaganda from the perspective of NLP and network analysis, noted the need for collective efforts between these communities. There is also a dedicated Big Data Journal issue on Political Big Data and Computational Propaganda [Bolsover and Howard, 2017a].
Propaganda in Multimodality: Originally, propaganda campaigns appeared in the text modality, but nowadays they appear in every possible modality. Propaganda techniques are easier to spot in the text modality alone than in multimodal content such as memes, because contextual information related to the propaganda can be spread across more than one of the modalities.
To understand aspects of visual propaganda, Seo [2014] analyzes the social media tweet images posted by Hamas' Al-Qassam Brigades and the Israel Defense Forces during the 2012 Gaza conflict. By selecting 10 YouTube videos, Abd Kadir et al. [2016] studied the relationships between emotion and propaganda techniques.
[Figure: propaganda technique labels, including Appeal to Emotions, Appeal to authority, Appeal to fear, Bandwagon, Black-and-white Fallacy, Causal Oversimplification, Doubt, Exaggeration, Flag-waving, Glittering generalities, Loaded Language, Straw Man, and Name Calling.]
Glenski et al. [2019] proposed two classification tasks to explore multilingual content for deception detection. Both tasks are intended to identify the category of a social media post, but the first task has four output categories (propaganda, conspiracy, hoax, or clickbait), and the second task has five output categories with an extra category of disinformation.
Quality work has been done on multimodal content prior to this propaganda problem, such as the spread of false information [Dupuis and Williams, 2019], hateful meme identification [Kiela et al., 2020, Lippe et al., 2020, Das et al., 2020, Gundapu and Mamidi, 2020], and antisemitism detection [Chandra et al., 2021].
Multimodal Models and Fusion Techniques: The Facebook Hateful Memes Challenge has been very helpful in developing different types of multimodal models and in fine-tuning state-of-the-art multimodal transformer models such as ViLBERT [Lu et al., 2019], Multimodal Bitransformers [Kiela et al., 2020], and VisualBERT. Vidgen et al. [2019] also pointed out that memes make complete sense only when both text and image content are taken into account. With this in mind, many authors [Baltrusaitis et al., 2019, Yang et al., 2019a, Gallo et al., 2018] have explored different multimodal fusion strategies to combine modalities.
Dataset
In this paper, we predominantly focus on the task of propaganda technique identification in Internet memes. We formulate this task as a multi-label classification problem because each input can be labeled with more than one propaganda technique. We took the dataset for this task from the 15th International Workshop on Semantic Evaluation (SemEval-2021) [Dimitrov et al., 2021]. The dataset comprises 950 (text, image) pairs of social media posts, and each pair is labeled with propaganda techniques. In the input pair, the text is extracted from the meme using OCR, and the image is the meme itself. We used 22 propaganda techniques for our task and list them in Table 1.
Proposed Approaches
Individual Modality Models: In the beginning, we started to examine our task, "Identification of propaganda techniques in Internet memes," with individual modalities (text or image). For text data, we developed various Machine Learning (ML) and Deep Learning (DL) models with different word embeddings. However, BERT and RoBERTa gave superior results for the textual modality compared to the ML and DL models with GloVe [Pennington et al., 2014] and FastText [Bojanowski et al., 2016] embeddings. On the image data, we experimented with CNNs and pre-trained image classification models; the pre-trained models gave significantly better results than traditional CNN models. When comparing the results of the individual modalities, text modality models gave more effective results than image modality models. This is because memes carry more information about the propaganda in the textual data than in the visual data.
Cross-Modality Multimodal Models: After analyzing the results of the individual modalities, we started to fine-tune cross-modality multimodal (Vision-and-Language) pre-trained models such as UNITER, ViLBERT [Lu et al., 2019], and VisualBERT, and these pre-trained models performed better than the individual modality models. Following the individual and cross-modality models, we explored multimodal fusion approaches to combine information from the text and vision input modalities in a principled way.
Multimodal Fusion System
In a multimodal fusion setting, ML/DL models are trained on separate modalities and integrated simultaneously. When individual modality features are merged at the output layers, this is called Late Fusion. Conversely, in Early Fusion, features are combined at the input level before being fed to the model. We experimented on our task by fusing a text encoder (which encodes the text data) and an image encoder (which encodes the image data) using both early fusion and late fusion methods. After observing the results of these methods, merging modalities at their individual deepest (or earliest) features is not necessarily the most appropriate way to solve our propaganda identification multimodal task.
In our work, we used the idea of "MFAS: Multimodal Fusion Architecture Search", which considers features collected from the hidden layers of the individual modalities; this can effectively enhance performance with respect to utilising only a single fusion of late (or early) features. Figure 1 shows the overall architecture of our multimodal fusion system.
In our work, we used an idea "MFAS: Multimodal Fusion Architecture Search" of considering features collected from the hidden layers of individual modalities that could effectively enhance performance with respect to only utilising a single fusion of late (or early) features. Figure Given a pair (text and image) for a social media post, we make use of Transformer based models to encode the text extracted from the Internet meme. We use the pre-trained image classification models to encode the image. Further, the MFAS module fuses the text and image encodings. Next, the fused encodings are transformed using a fully connected network to identify the propaganda techniques. The complete model is trained end-to-end utilising back-propagation. The principal motive of fine-tuning the pre-trained models on the datasets for our multimodal classification task is that our datasets are small. Below we describe each module in the architecture in detail.
Text Preprocessing
1. Conversion of chat contractions: Chat words/phrases are widely used on social networks to express emotions and are very helpful in identifying context. We created a word contractions dictionary with 250 chat words to convert these chat words to their complete forms. Examples: YOLO → you only live once, ASAP → as soon as possible.
2. URL removal: The OCR-extracted text contains URLs such as memegenerator.net, etc. We removed those links since they do not give any vital information.
3. Conversion of elongated words: With the help of regular expressions, we transformed elongated words to their original form. Examples: Nooooo → No, suuuppperrr → super.
4. Using the ekphrasis [Baziotis et al., 2017] library, we normalized times, dates, and numbers into a standard format. This library also performs hashtag splitting and spelling correction. We removed the non-alpha-numeric characters, punctuation marks, and non-ASCII glyphs from the text data. A simplified sketch of these preprocessing steps is shown below.
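The following sketch illustrates a few of the preprocessing steps above; the contraction dictionary is an illustrative two-entry subset of the 250-entry dictionary we describe, and the URL pattern is only an approximation.

    import re

    CHAT_CONTRACTIONS = {"yolo": "you only live once", "asap": "as soon as possible"}

    def preprocess(text):
        text = re.sub(r"https?://\S+|\S+\.(net|com|org)\S*", " ", text)   # drop URLs
        text = re.sub(r"(.)\1{2,}", r"\1", text)                          # Nooooo -> No
        tokens = [CHAT_CONTRACTIONS.get(t.lower(), t) for t in text.split()]
        text = " ".join(tokens)
        text = re.sub(r"[^\x00-\x7F]+", " ", text)                        # non-ASCII glyphs
        return re.sub(r"\s+", " ", text).strip()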
Text Encoder
The preprocessed text data is tokenized and then forwarded to the text encoder module. We tried the BERT [Devlin et al., 2019], XLNet [Yang et al., 2019c], and RoBERTa pre-trained transformer models in the text encoder module to encode the text, because they have been shown to give excellent results on multiple NLP classification tasks.
BERT (Bidirectional Encoder Representations from Transformers) employs a multi-layer bidirectional transformer encoder to learn deep bidirectional representations. Its self-attention layers attend over the input text from both directions, which allows BERT to infer the context of a word in a sentence based on the words around it. We used the BERT base (cased) model for our task, pre-trained on the large unlabeled book corpus and the entire English Wikipedia.
RoBERTa makes use of the Transformer [Vaswani et al., 2017] and is a robustly optimized approach for pretraining NLP models that improves on BERT. RoBERTa was trained with more data, more iterations, larger batch sizes, and higher learning rates than BERT. It eliminates the next-sentence prediction pretraining objective from BERT to boost the training procedure and introduces dynamic masking, in which the masked tokens change during model training. For our task, we used the RoBERTa base model.
XLNet is a bidirectional transformer autoregressive model that uses a better training methodology and a larger dataset to achieve better results than BERT. In pretraining, it integrates the following two techniques: (1) a state-of-the-art autoregressive model and (2) Transformer-XL. Furthermore, XLNet introduces a permutation language modeling technique to predict all tokens in random order instead of sequential order. This idea helps XLNet learn bidirectional relationships among words and handle long-range dependencies.
For better contextual encodings, we experimented on the text data with the three transformer models explained above. The RoBERTa model gave slightly better results than BERT and XLNet.
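A minimal sketch of the text encoder using the HuggingFace transformers library is given below; taking the representation at the first token position as the 768-dimensional text encoding T and the maximum sequence length of 128 are assumptions for illustration, and the no-grad context is only for feature extraction (during fine-tuning the encoder weights are updated).

    import torch
    from transformers import RobertaTokenizer, RobertaModel

    tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
    encoder = RobertaModel.from_pretrained("roberta-base")

    def encode_text(sentences):
        batch = tokenizer(sentences, padding=True, truncation=True,
                          max_length=128, return_tensors="pt")
        with torch.no_grad():
            out = encoder(**batch)
        return out.last_hidden_state[:, 0, :]   # (batch, 768) text encodings T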
Image Encoder
In our dataset, input images (memes) come in different shapes and sizes, so before forwarding them to any CNN model we resize all images to 224 × 224. After this step, we applied a few image transformation techniques such as rotation, flipping, and cropping to the scaled dataset. We extracted image representations twice from the image encoder: in the first step we collected representations from the second-to-last output layer of the CNN model and fused them with the text encodings; next, the image encoder's final layer outputs were concatenated with the text encodings and the fused encodings from the previous step. We experimented with the pre-trained models VGG-19 [Simonyan and Zisserman, 2015], InceptionV3 [Szegedy et al., 2016], ResNet-152 [He et al., 2016], DenseNet-161 [Huang et al., 2017], and InceptionResNetV2 [Szegedy et al., 2017]. Among all these pre-trained models, VGG-19 and ResNet-152 gave considerable results for our task.
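Below is a sketch of the image encoder using torchvision's pre-trained VGG-19, reading out both an intermediate block representation (H) and a final feature vector (I); the exact layer index for the intermediate block and the pooling of H are assumptions for illustration.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models
    import torchvision.transforms as T

    transform = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    vgg = models.vgg19(pretrained=True).eval()

    def encode_image(pil_image):
        x = transform(pil_image).unsqueeze(0)      # (1, 3, 224, 224)
        feats, h = x, None
        for idx, layer in enumerate(vgg.features):
            feats = layer(feats)
            if idx == 9:                           # end of the second conv block (assumed index)
                h = F.adaptive_avg_pool2d(feats, 1).flatten(1)   # intermediate representation H
        pooled = vgg.avgpool(feats).flatten(1)
        i_repr = vgg.classifier[:-1](pooled)       # 4096-d final representation I
        return h, i_repr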
Fusion Module
To join the representations obtained from the text encoder and image encoder modules, T, F', and I, we experiment with various techniques: (i) concatenation of both modalities' output representations, (ii) early and late fusion, and (iii) MFAS.
Early Fusion: We initially started with the widely used early fusion technique to fuse both modalities. The early fusion technique (see Figure 2), also known as feature-level fusion, concatenates the embeddings from the text and image modalities as input representations for classifiers. This technique can be expressed as follows:
X_early = f(U_1, ..., U_m, V_1, ..., V_n)    (1)
Here an aggregated representation X_early of the attributes is calculated by the function f, which combines the individual attributes.
Late Fusion: After early fusion, we experimented with the decision-level late fusion technique (see Figure 2). This technique fuses the output-level representations of the text and image encoders and computes a late fusion score for classifiers. The late fusion technique can be expressed as follows:
X_late = g(f_1(U_1), ..., f_m(U_m), f_{m+1}(V_1), ..., f_{m+n}(V_n))    (2)
Here the functions f_1, ..., f_{m+n} are applied to the individual attributes, and the function g is used to integrate the individual decisions made by f_1, ..., f_{m+n}.
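As a toy sketch of equations (1) and (2) (feature and prediction shapes are assumed), early fusion concatenates modality features before a shared classifier, while late fusion combines per-modality predictions at the decision level:

    import torch

    def early_fusion(text_feats, image_feats):
        return torch.cat([text_feats, image_feats], dim=-1)        # X_early

    def late_fusion(text_logits, image_logits, w=0.5):
        # g(.) here is a simple weighted average of per-modality sigmoid outputs
        return w * torch.sigmoid(text_logits) + (1 - w) * torch.sigmoid(image_logits)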
Multimodal Fusion Architecture Search (MFAS): Next, we examined a model-level fusion technique called MFAS, which is a compromise between the early and late fusion strategies; it concatenates hidden layer representations from the different modalities. As shown in the model architecture (Figure 1), for our task this technique initially concatenates the text encoder output representations (T) with the image encoder's intermediate hidden layer representations (H) and then applies a non-linear sigmoid function.
F' = σ(T ⊕ H)    (3)
Next, it fuses the output (F') with the text predictions (T) and the image predictions (I), again applying a non-linear sigmoid function, to give F.
F = σ(T ⊕ F' ⊕ I)    (4)
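A minimal PyTorch sketch of the fusion in equations (3) and (4) is given below; the linear projections and the feature dimensions are implementation assumptions added so that the concatenated vectors can be mapped to a fixed fused size.

    import torch
    import torch.nn as nn

    class MFASFusion(nn.Module):
        def __init__(self, t_dim=768, h_dim=128, i_dim=4096, fused_dim=768):
            super().__init__()
            self.first = nn.Linear(t_dim + h_dim, fused_dim)               # F' = sigma(T ⊕ H)
            self.second = nn.Linear(t_dim + fused_dim + i_dim, fused_dim)  # F = sigma(T ⊕ F' ⊕ I)

        def forward(self, t, h, i):
            f_prime = torch.sigmoid(self.first(torch.cat([t, h], dim=-1)))
            f = torch.sigmoid(self.second(torch.cat([t, f_prime, i], dim=-1)))
            return f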
Classifier
Following the fusion module, we constructed a fully connected network that takes its input from the fusion module. The fully connected network consists of two dense layers with hidden sizes of 768 and 384, and finally an output layer of 22 units with a sigmoid activation function.
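A sketch of this classification head is shown below; the fused input dimension (768 here) is an assumption.

    import torch.nn as nn

    # Fully connected head over the fused representation:
    # fused_dim -> 768 -> 384 -> 22 sigmoid outputs (multi-label).
    classifier = nn.Sequential(
        nn.Linear(768, 768), nn.ReLU(), nn.Dropout(0.2),
        nn.Linear(768, 384), nn.ReLU(), nn.Dropout(0.2),
        nn.Linear(384, 22), nn.Sigmoid(),
    )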
Experiments and Results
This section describes the model implementation details, the hyper-parameter tuning, and the results of the individual text and image modality classifiers as well as the multimodal fusion classifiers.
Hyper-parameter Settings and Implementation Details for Reproducibility
The following experimental settings were used in our work. We trained all models on the training set and used the validation set to find the right set of hyper-parameters. We experimented with the outputs of different intermediate layers of the image encoder; the Block-2 output (H) gave the best results compared to the outputs of the other blocks.
For all experiments, we used the Adam optimization algorithm. To improve training speed and performance, we used dropout layers with a probability of 0.2, and we used the ReLU activation function in the dense layers. We used the PyTorch [Paszke et al., 2019] and Keras [Chollet et al., 2015] frameworks for building the deep learning models and scikit-learn [Pedregosa et al., 2011] for the ML models.
To fine-tune the transformer models, we used the PyTorch-based HuggingFace 3 Transformers library.
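As an illustration, the snippet below loads a pre-trained RoBERTa model with the HuggingFace Transformers library and extracts sentence-level encodings; the model name and the first-token pooling are assumptions (in the actual system these weights are fine-tuned jointly with the classifier).

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")

texts = ["example meme text", "another meme caption"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = encoder(**batch)
text_emb = out.last_hidden_state[:, 0]   # first-token representation used as T (768-d)
```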
Results and Analysis of Individual Modalities Classifiers
We started the experiments for both (text and image) modalities with ML algorithms and TF-IDF word vectors to obtain baseline results. After that, we experimented on the text modality with five well-known pre-trained embedding-based classifiers and on the image modality with five pre-trained CNN-based classifiers. We used GloVe, FastText, BERT, XLNet, and RoBERTa for the individual text classification, and VGG-19, InceptionV3, ResNet-152, DenseNet-161, and InceptionResNetV2 for image classification.
The comparative f1-scores of the individual modalities are presented in Table 2. Based on the classification results of the individual modalities, we make the following observations:
• Text modality classifiers gave encouraging results, whereas the image modality classifiers yielded lower f1-scores. We suspect this is because images related to propaganda techniques are usually news articles, memes, or (sometimes) screenshots that generally do not convey much spatial information.
• Among the text modality classifiers, transformer-based approaches worked better than shallow word embedding methods such as GloVe and FastText. The transformer-based BERT, XLNet, and RoBERTa gave superior results on our task, with RoBERTa surpassing BERT and XLNet. Among the image modality classifiers, the pre-trained VGG-19 gave better results than the other pre-trained models.
Results and Analysis of Multimodal Fusion Classifiers
In this section, we present the results and analysis of the various fusion approaches for our proposed multimodal classifier. Based on the individual-modality results (Table 2), transformer-based RoBERTa is the best text encoder and VGG-19 is the best image encoder for our dataset; therefore, we performed the multimodal fusion experiments with these two encoders only. The results of the multimodal classifier with the different fusion techniques are reported in Table 2, where we also compare our proposed multimodal system with the individual and cross-modality models.
• All the proposed multimodal fusion models performed better than the other models by a considerable margin. Cross-modality pre-trained multimodal models are much better than individual-modality models, and they work best when the visual information in the memes is not already covered by the textual information.
• In the multimodal late fusion architecture, the image encoder provides only prediction-level features that do not capture complex information, such as semantic concepts of faces, animals, or trees. Because of this, concatenating the image encodings with the text encodings sometimes misleads the fusion model's outputs.
• The feature-concatenation and early fusion techniques did not meet our expectations; they only approximate the results of the individual image modality model.
• The decision-level late fusion and MFAS approaches efficiently fused the spatial information from the images with the semantic information from the text and performed better than all other multimodal systems, with the MFAS multimodal system outperforming the late fusion approach.
Error Analysis
For our task, we mostly rely on pre-trained models because our dataset is small. Transformer-based models performed very well and gave better results than we expected; however, in some cases the transformer models make incorrect predictions when the input text is short. Furthermore, the pre-trained CNN classification models mainly struggled to extract useful spatial image information in the following situations: (i) a few memes are composed of multiple images stitched together, and the pre-trained models stumble when gathering information from them; (ii) some users take screenshots of memes posted by other users and repost them on social media, and the lack of additional contextual information in such posts leads the systems to make false predictions; (iii) sometimes the entire meme is covered with text, making it difficult for the CNN models to recognize spatial features in the image.
Our analysis shows that combining encoder representations from multiple modalities can help identify propaganda techniques in memes, although in some cases noise in one modality leads to a complete misclassification. While running the experiments, we observed that the dataset used for our task has a severe class imbalance due to the heterogeneous frequency of the various propaganda techniques in real life: the "Bandwagon, Reductio ad Hitlerum", "Appeal to Authority", "Black and White Fallacy", "Whataboutism, Straw Men, Red Herring", and "Thought-Terminating Cliché" classes have significantly fewer samples than the other classes. To tackle this class imbalance problem, we explored different under-/over-sampling techniques such as SMOTE [Chawla et al., 2002], Tomek Links [Tomek, 1976], and Near Miss [Zhang and Mani, 2003] for the textual data, and tried position and colour augmentation techniques [Perez and Wang, 2017] for the image data. We also used the class-weight approach [Guo et al., 2008], which assigns different weights to the majority and minority classes, and experimented with focal loss [Lin et al., 2017], an enhanced version of cross-entropy loss that addresses class imbalance by assigning more weight to hard or easily misclassified samples and down-weighting easy samples.
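A minimal multi-label focal loss sketch in the spirit of Lin et al. [2017]; the gamma and alpha values are illustrative defaults, not the ones tuned for this task.

```python
import torch
import torch.nn.functional as F


def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    # Standard BCE per element, then down-weight easy examples by (1 - p_t)^gamma.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)            # probability of the true label
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()


loss = focal_loss(torch.randn(8, 22), torch.randint(0, 2, (8, 22)).float())
```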
Conclusion and Future Work
In this work, we first provided a systematic study of the problem of identifying propaganda techniques in Internet memes, and then tackled the problem with a multimodal fusion architecture. In this architecture, using the MFAS technique, we fused text features extracted with the RoBERTa model and image features extracted with a pre-trained VGG-19. We examined the problem with both single-modality and multimodal classifiers and observed that fusing features from the different modalities improves performance. Our multimodal fusion approach achieves a micro f1-score of 0.5698 on the test data, which is a considerable result. We hope these results will accelerate further research in this direction.
Figure 1 describes the structure of our proposed multimodal fusion system for propaganda technique identification in Internet memes, consisting of a RoBERTa text encoder, a pre-trained VGG-19 image encoder, and a multimodal fusion architecture search [Pérez-Rúa et al., 2019] module.
Figure 1: Multimodal Fusion Architecture
Figure 2: Fusion Techniques
Table 1: List of propaganda techniques and their count in the train, development, and test sets.
...these videos and the propaganda techniques and people's emotions. At the same time, they analyze how these videos influence people's emotions with the help of propaganda techniques.
Volkova et al. [2019] prepared a dataset of 50K Twitter posts consisting of memes annotated with six labels: propaganda, disinformation, hoaxes, clickbait, conspiracies, and satire. They developed a multimodal approach for this problem that considers textual and linguistic characteristics as well as visual features.
Table 2: Comparison of the results of the various modality classifiers
https://knowledge.wharton.upenn.edu/article/singer-weaponization-social-media/
https://propaganda.math.unipd.it/semeval2021task6/definitions22.html
https://huggingface.co/
A Appendix
A.1 Sample Memes
In Figure 3, we show a few example memes from the dataset used in this paper. Example (a) uses the Name Calling/Labeling propaganda technique by labeling the Russian vaccine as Smirnoff vodka. Example (b) applies the Exaggeration technique (hyperbolizing the simple statement that Trump recovered from corona), also contains a Name Calling technique (referring to corona as 'RONA'), and uses the Glittering Generalities technique. Example (c) expresses Doubt and creates confusion in the audience.
Emotion and techniques of propaganda in youtube videos. Abd Kadir, A Lokman, T Tsuchiya, 10.17485/ijst/2016/v9iS1/106841Indian Journal of Science and Technology. 1292016Abd Kadir, A. Lokman, and T. Tsuchiya. Emotion and techniques of propaganda in youtube videos. Indian Journal of Science and Technology, Vol (9):1-8, 12 2016. doi: 10.17485/ijst/2016/ v9iS1/106841.
Computational propaganda: political parties, politicians, and political manipulation on social media. C Asamuel, Philip N Woolley, Howard, Oxford University PressASamuel C Woolley and Philip N Howard. Computational propaganda: political parties, politicians, and political manipulation on social media. Oxford University Press, 2018.
Multimodal machine learning: A survey and taxonomy. T Baltrusaitis, C Ahuja, L.-P Morency, IEEE Transactions on Pattern Analysis and Machine Intelligence. 41T. Baltrusaitis, C. Ahuja, and L.-P. Morency. Multimodal machine learning: A survey and taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41:423-443, 2019.
Proppy: Organizing the news based on their propagandistic content. Information Processing & Management, 56. A Barrón-Cedeño, I Jaradat, G Martino, P Nakov, 10.1016/j.ipm.2019.03.005A. Barrón-Cedeño, I. Jaradat, G. Martino, and P. Nakov. Proppy: Organizing the news based on their propagandistic content. Information Processing & Management, 56, 05 2019. doi: 10.1016/j.ipm.2019.03.005.
Datastories at semeval-2017 task 4: Deep lstm with attention for message-level and topic-based sentiment analysis. C Baziotis, N Pelekis, C Doulkeridis, Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). the 11th International Workshop on Semantic Evaluation (SemEval-2017)Vancouver, CanadaAssociation for Computational LinguisticsC. Baziotis, N. Pelekis, and C. Doulkeridis. Datastories at semeval-2017 task 4: Deep lstm with attention for message-level and topic-based sentiment analysis. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 747-754, Vancouver, Canada, August 2017. Association for Computational Linguistics.
Enriching word vectors with subword information. P Bojanowski, E Grave, A Joulin, T Mikolov, arXiv:1607.04606arXiv preprintP. Bojanowski, E. Grave, A. Joulin, and T. Mikolov. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606, 2016.
Computational propaganda and political big data: Moving toward a more critical research agenda. G Bolsover, P Howard, 10.1089/big.2017.29024.cprBig Data. 5G. Bolsover and P. Howard. Computational propaganda and political big data: Moving toward a more critical research agenda. Big Data, 5:273-276, 12 2017a. doi: 10.1089/big.2017.29024.cpr.
Computational propaganda and political big data: Moving toward a more critical research agenda. G Bolsover, P N Howard, 5Big dataG. Bolsover and P. N. Howard. Computational propaganda and political big data: Moving toward a more critical research agenda. Big data, 5 4:273-276, 2017b.
subverting the jewtocracy": Online antisemitism detection using multimodal deep learning. M Chandra, D Pailla, H Bhatia, A Sanchawala, M Gupta, M Shrivastava, P Kumaraguru, 10.1145/3447535.3462502062021M. Chandra, D. Pailla, H. Bhatia, A. Sanchawala, M. Gupta, M. Shrivastava, and P. Kumaraguru. "subverting the jewtocracy": Online antisemitism detection using multimodal deep learning. pages 148-157, 06 2021. doi: 10.1145/3447535.3462502.
Smote: Synthetic minority over-sampling technique. N Chawla, K Bowyer, L Hall, W P Kegelmeyer, J. Artif. Intell. Res. 16N. Chawla, K. Bowyer, L. Hall, and W. P. Kegelmeyer. Smote: Synthetic minority over-sampling technique. J. Artif. Intell. Res., 16:321-357, 2002.
Uniter: Learning universal image-text representations. Y.-C Chen, L Li, L Yu, A E Kholy, F Ahmed, Z Gan, Y Cheng, J Liu, abs/1909.11740ArXiv. Y.-C. Chen, L. Li, L. Yu, A. E. Kholy, F. Ahmed, Z. Gan, Y. Cheng, and J. Liu. Uniter: Learning universal image-text representations. ArXiv, abs/1909.11740, 2019.
. F Chollet, F. Chollet et al. Keras, 2015. URL https://github.com/fchollet/keras.
The paradigm-shift of social spambots: Evidence, theories, and tools for the arms race. S Cresci, R Pietro, M Petrocchi, A Spognardi, M Tesconi, 10.1145/3041021.3055135S. Cresci, R. Pietro, M. Petrocchi, A. Spognardi, and M. Tesconi. The paradigm-shift of social spambots: Evidence, theories, and tools for the arms race. 04 2017. doi: 10.1145/3041021.3055135.
Detecting hate speech in multi-modal memes. ArXiv, abs. A Das, J S Wahi, S Li, A. Das, J. S. Wahi, and S. Li. Detecting hate speech in multi-modal memes. ArXiv, abs/2012.14891, 2020.
P Davison, 120-134. 01 2012. ISBN 0814764061The Language of Internet Memes. P. Davison. The Language of Internet Memes, pages 120-134. 01 2012. ISBN 0814764061.
The Selfish Gene. R Dawkins, Oxford University PressR. Dawkins. The Selfish Gene. Oxford University Press, 1976.
Bert: Pre-training of deep bidirectional transformers for language understanding. J Devlin, M.-W Chang, K Lee, K Toutanova, NAACL-HLT. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019.
D Dimitrov, B B Ali, S Shaar, F Alam, F Silvestri, H Firooz, P Nakov, G D S Martino, arXiv:2105.09284Semeval-2021 task 6: Detection of persuasion techniques in texts and images. arXiv preprintD. Dimitrov, B. B. Ali, S. Shaar, F. Alam, F. Silvestri, H. Firooz, P. Nakov, and G. D. S. Martino. Semeval-2021 task 6: Detection of persuasion techniques in texts and images. arXiv preprint arXiv:2105.09284, 2021.
Computational propaganda: If you make it trend, you make it true. R Diresta, https:/onlinelibrary.wiley.com/doi/abs/10.1111/yrev.13402The Yale Review. 1064R. DiRESTA. Computational propaganda: If you make it trend, you make it true. The Yale Review, 106(4):12-29, 2018. doi: https://doi.org/10.1111/yrev.13402. URL https://onlinelibrary. wiley.com/doi/abs/10.1111/yrev.13402.
The spread of disinformation on the web: An examination of memes on social networking. M Dupuis, A Williams, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation. M. Dupuis and A. Williams. The spread of disinformation on the web: An examination of memes on social networking. 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Inter- net of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), pages 1412-1418, 2019.
Image and encoded text fusion for multi-modal classification. I Gallo, A Calefati, S Nawaz, M K Janjua, Digital Image Computing: Techniques and Applications (DICTA). I. Gallo, A. Calefati, S. Nawaz, and M. K. Janjua. Image and encoded text fusion for multi-modal classification. 2018 Digital Image Computing: Techniques and Applications (DICTA), pages 1-7, 2018.
Multilingual multimodal digital deception detection and disinformation spread across social platforms. M Glenski, E Ayton, J Mendoza, S Volkova, abs/1909.05838ArXiv. M. Glenski, E. Ayton, J. Mendoza, and S. Volkova. Multilingual multimodal digital deception detection and disinformation spread across social platforms. ArXiv, abs/1909.05838, 2019.
Gundapusunil at SemEval-2020 task 8: Multimodal memotion analysis. S Gundapu, R Mamidi, Proceedings of the Fourteenth Workshop on Semantic Evaluation. the Fourteenth Workshop on Semantic EvaluationBarcelonaInternational Committee for Computational LinguisticsS. Gundapu and R. Mamidi. Gundapusunil at SemEval-2020 task 8: Multimodal memotion analysis. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1112-1119, Barcelona (online), Dec. 2020. International Committee for Computational Linguistics. URL https://www. aclweb.org/anthology/2020.semeval-1.147.
On the class imbalance problem. X Guo, Y Yin, C Dong, G Yang, G Zhou, 10.1109/ICNC.2008.871Fourth International Conference on Natural Computation. 4X. Guo, Y. Yin, C. Dong, G. Yang, and G. Zhou. On the class imbalance problem. In 2008 Fourth International Conference on Natural Computation, volume 4, pages 192-201, 2008. doi: 10.1109/ICNC.2008.871.
Argotario: Computational argumentation meets serious games. I Habernal, R Hannemann, C Pollak, C Klamm, P Pauli, I Gurevych, 07I. Habernal, R. Hannemann, C. Pollak, C. Klamm, P. Pauli, and I. Gurevych. Argotario: Computa- tional argumentation meets serious games. 07 2017.
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.
G Huang, Z Liu, K Q Weinberger, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). G. Huang, Z. Liu, and K. Q. Weinberger. Densely connected convolutional networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2261-2269, 2017.
The hateful memes challenge: Detecting hate speech in multimodal memes. D Kiela, H Firooz, A Mohan, V Goswami, A Singh, P Ringshia, D Testuggine, abs/2005.04790ArXiv. D. Kiela, H. Firooz, A. Mohan, V. Goswami, A. Singh, P. Ringshia, and D. Testuggine. The hateful memes challenge: Detecting hate speech in multimodal memes. ArXiv, abs/2005.04790, 2020.
Visualbert: A simple and performant baseline for vision and language. L H Li, M Yatskar, D Yin, C.-J Hsieh, K.-W Chang, abs/1908.03557ArXiv. L. H. Li, M. Yatskar, D. Yin, C.-J. Hsieh, and K.-W. Chang. Visualbert: A simple and performant baseline for vision and language. ArXiv, abs/1908.03557, 2019.
Focal loss for dense object detection. T.-Y Lin, P Goyal, R B Girshick, K He, P Dollár, IEEE International Conference on Computer Vision (ICCV). T.-Y. Lin, P. Goyal, R. B. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. 2017 IEEE International Conference on Computer Vision (ICCV), pages 2999-3007, 2017.
A multimodal framework for the detection of hateful memes. P Lippe, N Holla, S Chandra, S Rajamanickam, G Antoniou, E Shutova, H Yannakoudakis, 12P. Lippe, N. Holla, S. Chandra, S. Rajamanickam, G. Antoniou, E. Shutova, and H. Yannakoudakis. A multimodal framework for the detection of hateful memes, 12 2020.
Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy, M Lewis, L Zettlemoyer, V Stoyanov, Roberta, abs/1907.11692A robustly optimized bert pretraining approach. Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692, 2019.
Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. J Lu, D Batra, D Parikh, S Lee, NeurIPS. J. Lu, D. Batra, D. Parikh, and S. Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS, 2019.
Fine-grained analysis of propaganda in news articles. G D S Martino, S Yu, A Barrón-Cedeño, R Petrov, P Nakov, EMNLP/IJCNLP. G. D. S. Martino, S. Yu, A. Barrón-Cedeño, R. Petrov, and P. Nakov. Fine-grained analysis of propaganda in news articles. In EMNLP/IJCNLP, 2019.
A survey on computational propaganda detection. G D S Martino, S Cresci, A Barrón-Cedeño, S Yu, R D Pietro, P Nakov, IJCAI. G. D. S. Martino, S. Cresci, A. Barrón-Cedeño, S. Yu, R. D. Pietro, and P. Nakov. A survey on computational propaganda detection. In IJCAI, 2020.
Internet memes: Leaflet propaganda of the digital age. J T Nieubuurt, https:/www.frontiersin.org/article/10.3389/fcomm.2020.547065Frontiers in Communication. 5116J. T. Nieubuurt. Internet memes: Leaflet propaganda of the digital age. Frontiers in Commu- nication, 5:116, 2021. ISSN 2297-900X. doi: 10.3389/fcomm.2020.547065. URL https: //www.frontiersin.org/article/10.3389/fcomm.2020.547065.
Pytorch: An imperative style, high-performance deep learning library. A Paszke, S Gross, F Massa, A Lerer, J Bradbury, G Chanan, T Killeen, Z Lin, N Gimelshein, L Antiga, A Desmaison, A Kopf, E Yang, Z Devito, M Raison, A Tejani, S Chilamkurthy, B Steiner, L Fang, J Bai, S Chintala, Advances in Neural Information Processing Systems. H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. GarnettCurran Associates, Inc32A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/ 9015-pytorch-an-imperative-style-high-performance-deep-learning-library. pdf.
Scikit-learn: Machine learning in Python. F Pedregosa, G Varoquaux, A Gramfort, V Michel, B Thirion, O Grisel, M Blondel, P Prettenhofer, R Weiss, V Dubourg, J Vanderplas, A Passos, D Cournapeau, M Brucher, M Perrot, E Duchesnay, Journal of Machine Learning Research. 12F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten- hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
Glove: Global vectors for word representation. J Pennington, R Socher, C D Manning, Empirical Methods in Natural Language Processing (EMNLP). J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, 2014. URL http://www.aclweb.org/anthology/D14-1162.
The effectiveness of data augmentation in image classification using deep learning. L Perez, J Wang, abs/1712.04621ArXiv. L. Perez and J. Wang. The effectiveness of data augmentation in image classification using deep learning. ArXiv, abs/1712.04621, 2017.
Mfas: Multimodal fusion architecture search. J.-M Pérez-Rúa, V Vielzeuf, S Pateux, M Baccouche, F Jurie, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). J.-M. Pérez-Rúa, V. Vielzeuf, S. Pateux, M. Baccouche, and F. Jurie. Mfas: Multimodal fusion architecture search. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6959-6968, 2019.
Truth of varying shades: Analyzing language in fake news and political fact-checking. H Rashkin, E Choi, J Jang, S Volkova, Y Choi, 10.18653/v1/D17-1317012017H. Rashkin, E. Choi, J. Jang, S. Volkova, and Y. Choi. Truth of varying shades: Analyzing language in fake news and political fact-checking. pages 2931-2937, 01 2017. doi: 10.18653/v1/D17-1317.
Visual propaganda in the age of social media: An empirical analysis of twitter images during the 2012 israeli-hamas conflict. H Seo, 10.1080/15551393.2014.955501Visual Communication Quarterly. 21H. Seo. Visual propaganda in the age of social media: An empirical analysis of twitter images during the 2012 israeli-hamas conflict. Visual Communication Quarterly, 21:150-161, 07 2014. doi: 10.1080/15551393.2014.955501.
Very deep convolutional networks for large-scale image recognition. K Simonyan, A Zisserman, abs/1409.1556CoRRK. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2015.
Rethinking the inception architecture for computer vision. C Szegedy, V Vanhoucke, S Ioffe, J Shlens, Z Wojna, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2818-2826, 2016.
Inception-v4, inception-resnet and the impact of residual connections on learning. C Szegedy, S Ioffe, V Vanhoucke, A A , AAAI. C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In AAAI, 2017.
Two modifications of cnn. I Tomek, I. Tomek. Two modifications of cnn. 1976.
. A Vaswani, N M Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, L Kaiser, I Polosukhin, Attention is all you need. ArXiv, abs/1706.03762A. Vaswani, N. M. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. ArXiv, abs/1706.03762, 2017.
Challenges and frontiers in abusive content detection. B Vidgen, A Harris, D Nguyen, R Tromble, S Hale, H Margetts, doi: 10. 18653/v1/W19-3509Proceedings of the Third Workshop on Abusive Language Online. the Third Workshop on Abusive Language OnlineFlorence, ItalyAssociation for Computational LinguisticsB. Vidgen, A. Harris, D. Nguyen, R. Tromble, S. Hale, and H. Margetts. Challenges and frontiers in abusive content detection. In Proceedings of the Third Workshop on Abusive Language Online, pages 80-93, Florence, Italy, Aug. 2019. Association for Computational Linguistics. doi: 10. 18653/v1/W19-3509. URL https://www.aclweb.org/anthology/W19-3509.
Explaining multimodal deceptive news prediction models. S Volkova, E Ayton, D L Arendt, Z Huang, B Hutchinson, 13S. Volkova, E. Ayton, D. L. Arendt, Z. Huang, and B. Hutchinson. Explaining multimodal deceptive news prediction models. 13:659-662, Jul. 2019. URL https://ojs.aaai.org/index.php/ ICWSM/article/view/3266.
Exploring deep multimodal fusion of text and photo for hate speech classification. F Yang, X Peng, G Ghosh, R Shilon, H Ma, E Moore, G Predovic, 10.18653/v1/W19-3502Proceedings of the Third Workshop on Abusive Language Online. the Third Workshop on Abusive Language OnlineFlorence, ItalyAssociation for Computational LinguisticsF. Yang, X. Peng, G. Ghosh, R. Shilon, H. Ma, E. Moore, and G. Predovic. Exploring deep multimodal fusion of text and photo for hate speech classification. In Proceedings of the Third Workshop on Abusive Language Online, pages 11-18, Florence, Italy, Aug. 2019a. Association for Computational Linguistics. doi: 10.18653/v1/W19-3502. URL https://www.aclweb.org/ anthology/W19-3502.
Arming the public with artificial intelligence to counter social bots. K.-C Yang, O Varol, C Davis, E Ferrara, A Flammini, F Menczer, 10.1002/hbe2.115Human Behavior and Emerging Technologies. 1115K.-C. Yang, O. Varol, C. Davis, E. Ferrara, A. Flammini, and F. Menczer. Arming the public with artificial intelligence to counter social bots. Human Behavior and Emerging Technologies, 1:e115, 02 2019b. doi: 10.1002/hbe2.115.
Xlnet: Generalized autoregressive pretraining for language understanding. Z Yang, Z Dai, Y Yang, J Carbonell, R Salakhutdinov, Q V Le, NeurIPS. Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, and Q. V. Le. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS, 2019c.
KNN Approach to Unbalanced Data Distributions: A Case Study Involving Information Extraction. J Zhang, I Mani, Proceedings of the ICML'2003 Workshop on Learning from Imbalanced Datasets. the ICML'2003 Workshop on Learning from Imbalanced DatasetsJ. Zhang and I. Mani. KNN Approach to Unbalanced Data Distributions: A Case Study Involving Information Extraction. In Proceedings of the ICML'2003 Workshop on Learning from Imbalanced Datasets, 2003.
| [
"https://github.com/fchollet/keras."
] |
[
"WASA: A Web Application for Sequence Annotation",
"WASA: A Web Application for Sequence Annotation"
] | [
"Fahad Alghamdi fghamdi@gwu.edu \nDepartment of Computer Science\nThe George Washington University Washington\nDC\n",
"Mona Diab mtdiab@gwu.edu \nDepartment of Computer Science\nThe George Washington University Washington\nDC\n"
] | [
"Department of Computer Science\nThe George Washington University Washington\nDC",
"Department of Computer Science\nThe George Washington University Washington\nDC"
] | [] | Data annotation is an important and necessary task for all NLP applications. Designing and implementing a web-based application that enables many annotators to annotate and enter their input into one central database is not a trivial task. These kinds of web-based applications require a consistent and robust backup for the underlying database and support to enhance the efficiency and speed of the annotation. Also, they need to ensure that the annotations are stored with a minimal amount of redundancy in order to take advantage of the available resources(e.g, storage space). In this paper, we introduce WASA, a web-based annotation system for managing large-scale multilingual Code Switching (CS) data annotation. Although WASA has the ability to perform the annotation for any token sequence with arbitrary tag sets, we will focus on how WASA is used for CS annotation. The system supports concurrent annotation, handles multiple encodings, allows for several levels of management control, and enables quality control measures while seamlessly reporting annotation statistics from various perspectives and at different levels of granularity. Moreover, the system is integrated with a robust language specific date prepossessing tool to enhance the speed and efficiency of the annotation. We describe the annotation and the administration interfaces as well as the backend engine. | null | [
"https://www.aclweb.org/anthology/L18-1173.pdf"
] | 21,718,699 | 1909.13008 | 1827f81974aa99016b8b25791fe5847d9655cf1e |
WASA: A Web Application for Sequence Annotation
Fahad Alghamdi fghamdi@gwu.edu
Department of Computer Science
The George Washington University Washington
DC
Mona Diab mtdiab@gwu.edu
Department of Computer Science
The George Washington University Washington
DC
WASA: A Web Application for Sequence Annotation
Code Switching, Annotation, Web Application, Sociolinguistics
Data annotation is an important and necessary task for all NLP applications. Designing and implementing a web-based application that enables many annotators to annotate and enter their input into one central database is not a trivial task. These kinds of web-based applications require a consistent and robust backup for the underlying database and support to enhance the efficiency and speed of the annotation. Also, they need to ensure that the annotations are stored with a minimal amount of redundancy in order to take advantage of the available resources(e.g, storage space). In this paper, we introduce WASA, a web-based annotation system for managing large-scale multilingual Code Switching (CS) data annotation. Although WASA has the ability to perform the annotation for any token sequence with arbitrary tag sets, we will focus on how WASA is used for CS annotation. The system supports concurrent annotation, handles multiple encodings, allows for several levels of management control, and enables quality control measures while seamlessly reporting annotation statistics from various perspectives and at different levels of granularity. Moreover, the system is integrated with a robust language specific date prepossessing tool to enhance the speed and efficiency of the annotation. We describe the annotation and the administration interfaces as well as the backend engine.
Introduction
Code Switching (CS) is a phenomenon that occurs when multilingual speakers alternate between more than one language or dialect. This phenomenon can be observed at different linguistic levels of representation for different language pairs: phonological, morphological, lexical, syntactic, semantic, and discourse/pragmatic. CS presents serious challenges for language technologies, including parsing, Machine Translation (MT), Information Retrieval (IR), and others. A major barrier to research on CS has been the lack of large multilingual, multi-genre CS-annotated corpora. Creating such corpora involves managing many annotators working on multiple tasks at different times, consistent and robust backups of the underlying database, quality control, etc. In this paper, we present our effort in building an annotation system, WASA, that can manage and facilitate large-scale CS data annotation. WASA differs from other annotation systems in several respects. Our system has an option that can provide initial automatic tagging for specific tokens such as Latin words, URLs, punctuation, digits, diacritics, emoticons, and speech-effect tokens. This option increases the quality and the speed of annotation substantially. Moreover, the system is integrated with a language-specific data preprocessing tool, the Smart Preprocessing (Quasi) Language Independent Tool (SPLIT), to streamline raw data cleaning and preparation. The remainder of this paper is organized as follows: Section 2 provides an overview of related work. Section 3 describes the system architecture. Types of users, including permissions and user tasks, are introduced in Section 4. The data preprocessing and cleaning are discussed in Section 5. We provide an overview of the database design in Section 6. Inter-annotator agreement, current status, and our conclusion and future work are discussed in Sections 7, 8, and 9, respectively.
Related Works
Although many annotation tools, such as (Aziz et al., 2012), (Cunningham et al., 2009), (Kahan et al., 2002), MnM (Vargas-Vera et al., 2002), GATE (Cunningham et al., 2009; Aswani and Gaizauskas, 2009), and (Dickinson and Ledbetter, 2012), are effective in serving their intended purposes, none of them meets the CS annotation requirements perfectly. We need a tool that can help with sequence annotation in a way that reports the time annotators need to complete their tasks, manages multiple annotator teams, enables quality control measures and annotation statistics, and assigns initial tags to some tokens automatically (e.g., punctuation, URLs, emoticons, etc.). Our tool is most similar to the annotation tool for the COLABA project (Benajiba and Diab, 2010; Diab et al., 2010); we specifically emulate the annotator management component of the COLABA annotation tool. Although code switching annotation and manual diacritization of Standard Arabic text are completely different tasks, the MANDIAC tool (Obeid et al., 2016), which is used for diacritization annotation, has an annotator management component similar to WASA's. However, the technologies used in the two management components are different: for instance, WASA uses a PostgreSQL database to store content, while MANDIAC uses a JSON blob. Two other tools comparable to ours are WebANNO (Yimam et al., 2013) and SWAT (Samih et al., 2016). Both use the latest available technologies to perform a number of linguistic annotation types. The SWAT tool is a web-based interface for annotating tokens in a sequence with a predefined set of labels. Its main advantages are the simplicity of its use and installation, as it only requires a modern web browser and minimal server-side requirements. The WebANNO tool is also web-based and offers a wide range of linguistic annotation tasks, e.g., named entity recognition, dependency parsing, co-reference chain identification, and part-of-speech annotation. However, both SWAT and WebANNO lack some functionalities and features that could simplify and speed up the annotation task for our purposes. In the SWAT system, for example, there is no support for user roles; therefore, tasks such as managing the number of annotators, monitoring their progress, assigning tasks to them, and ensuring the quality of the submitted annotations are difficult to handle with only one user type. Moreover, neither system has an option to provide initial automatic tagging for named entities (NE), Latin words, URLs, punctuation, numbers, diacritics, emoticons, and speech-effect tokens; we noticed that tagging these tokens automatically increases the speed of annotation substantially. Finally, unlike both systems, our system can seamlessly integrate with a language-specific data preprocessing tool to streamline raw data cleaning and preparation.
System Architecture
WASA is a typical three-tier web-based application. The platform is divided into three tiers, each with a specific function. The first tier is the data tier, which stores all metadata in a PostgreSQL database along with both the annotated and raw data files; all of this data is stored on a file server. The second tier is the logic tier. It contains PHP scripts that interact with an Apache web server and is responsible for all functionalities provided by the system to the different types of users. All requests are sent by the web server to the PostgreSQL database server through a secured tunnel. The third and last tier is the presentation tier. It is browser independent, which enables access to the system from many different clients, and it provides an intuitive GUI tailored to each user type. This architecture allows multiple annotators to work on various tasks simultaneously, while enabling the admin user to manage and handle a single central database. The system can handle multiple encodings, allowing for multilingual processing. Figure 1 gives a high-level overview of the tool's architecture.
Types of Users
Three types of users have been considered in WASA design: Annotator, Lead-Annotator, and Super-User. Each one of these user types is given and provided with different kinds of permissions, functionalities, and privileges in order to fulfill their tasks.
Annotators
Annotators are provided the following functionalities: 1) access assigned tasks; 2) annotate the assigned tasks; 3) submit annotations; 4) check the time needed to submit one unit, e.g., a post or a tweet; 5) check the grade of the submitted work; 6) re-annotate rejected tasks (by rejected we mean that the annotator received a "No Pass" grade on the annotation task); and 7) save work and continue it in a later session.
Figure 1: System Architecture
Figure 2 shows an example of the annotation screen. The words of the posts or tweets that need to be annotated are displayed as clickable units. When a word is clicked, a pop-up screen appears to allow the annotator to choose the proper tag. To increase the speed of the annotation process, some of the words, like named entities and punctuation, have an initial tag assigned automatically as part of a preprocessing step; however, the annotator is allowed to change the initial tag if a word is annotated with a wrong tag. The interface uses color-coding to reflect useful information and status. For example, named entities are displayed in purple, while other pre-tagged categories such as Latin words, URLs, punctuation, digits, diacritics, emoticons, and sound effects are displayed in orange. Words already annotated are displayed in blue, while words that are yet to be annotated appear in black. Figure 3 shows an example of some of the assigned tasks with information about the tasks that have already been submitted (e.g., number of annotated words, speed of annotation, path of the raw file).
Lead Annotator
For each dialect/language, there is one lead annotator only. Each lead annotator has the following functions: 1-Annotator management, e.g., create, edit and delete annotator accounts; 2-Tasks management; 3-Monitor status and progress; 4-Review and grade annotators' work; and 5- Produce quality measures like inter-annotator agreement. The system enables lead annotators to reject submitted work that does not meet the assessment criteria and add comments and feedback for the annotators to re-annotate rejected work.
Super User
There is only one Superuser account in WASA for all dialects/languages. The Superuser functions include: 1-Database management and maintenance; 2-Lead annotators management, 3-Annotators management, 4-Monitor the overall performance of the system; and 5-Manage annotation data imports and exports.
Data Preprocessing and Input and Output Format
The system has the ability to integrate with language-specific data preprocessing scripts to streamline raw data cleaning and preparation. For example, for the cleaning process (step 1), the system integrates the Smart Preprocessing (Quasi) Language Independent Tool (SPLIT) (Al-Badrashiny et al., 2016) to handle encoding issues (i.e., changing the character encoding to UTF-8). Moreover, for the Dialectal Arabic (DA) and Modern Standard Arabic (MSA) language pair (step 2), the system integrates the Automatic Identification of Dialectal Arabic (AIDA2) tool (Al-Badrashiny et al., 2015) to provide initial automatic tagging for named entities (NE), Latin words, URLs, punctuation, numbers, diacritics, emoticons, and speech-effect tokens. Figure 2 illustrates an example of a commentary with some pre-annotated tokens: named entity tokens are colored purple, while punctuation and numbers are colored orange. Both the preprocessing and cleaning steps are performed offline, and the Super User is responsible for preparing the data for annotation. Figure 5 shows the cleaning and preprocessing steps. The output file is written in a simple XML format, as shown in Figure 4. The XML file includes all metadata related to the annotated file, such as the sentence id, task id, language, user id, word id, the actual word, the annotation tag, etc. The output XML is customizable: the superuser can choose what metadata to include in the XML output file.
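The sketch below shows how one annotated unit could be written in such a format using Python's ElementTree; the tag and attribute names are assumptions based on the metadata listed above, not WASA's exact schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical field names, chosen to mirror the metadata described above.
sentence = ET.Element("sentence", attrib={
    "task_id": "12", "sentence_id": "345", "language": "MSA-DA", "user_id": "7",
})
for i, (word, tag) in enumerate([("مرحبا", "lang1"), ("hello", "lang2")]):
    w = ET.SubElement(sentence, "word", attrib={"id": str(i), "tag": tag})
    w.text = word

ET.ElementTree(sentence).write("annotation.xml", encoding="utf-8", xml_declaration=True)
```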
Our system is able to handle different types of genres such as Twitter, commentaries, conversations, or discussion fo-rum data. Accordingly, WASA is quite robust as it is able to handle a variety of data genres and formats. For example, if the data comes from Twitter, then information like tweet id and user id needs to be preserved along with the annotation tags. If the genre of the data is discussion forums, information such as post order in the context of a conversation thread along with the names of the people who are involved in the conversation are maintained.
Database Design
WASA system uses a relational database to manage, handle and store all meta-data. The data stored is categorized as follow:
Profiling information
It saves information about all registered users of the system including their roles (i.e. annotator, lead annotator or superuser), login information as well as the dialect and languages for each one of them. Moreover, It contains information about different languages/dialects used in the project.
Annotation Information
This is the core part of WASA's database. It includes all meta-data related to the annotation tasks such as the number of tasks assigned to each annotator, actual annotations completed by each annotator, and temporarily saved annotations.
Assessment Information
This contains information about 1) Task-Annotator assignment: it includes the tasks assigned to each annotator and the number of tasks that have already been annotated and submitted, the number of assigned units (tweets, posts) per each task, genre type, percentage of overlapping units (tweets, posts) shared among annotators to ease the process of calculating inter-annotator agreement, etc.; 2) Annotator-Units assignment: It includes information about each unit (post, tweet) that is assigned to the annotators such as post/tweet-id, user-id, genre-id, task-id, path of the assigned file; Finally 3) Language-Unit assignment: It includes information about the language/dialect id for each unit.
Quality Control Measures
WASA has built-in functionalities that help manage the inter-annotator agreement (IAA) measures for the different tasks and report performance statistics. The lead annotator is able to specify the percentage of data annotation overlap between the annotators per task, and the system manages to distribute the data and calculate the IAA. Moreover, WASA generates the tag distribution, the number of annotated tokens, the expected time needed to finish each assigned task, and many other statistics crucial for quality management.
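As an illustration of how such an IAA measure can be computed over the units shared by two annotators, one common choice is Cohen's kappa, e.g. via scikit-learn; this is only a sketch, not WASA's internal implementation.

```python
from sklearn.metrics import cohen_kappa_score

# Token-level tags produced by two annotators on the same overlapping units (toy example).
annotator_a = ["lang1", "lang2", "ne", "lang1", "other"]
annotator_b = ["lang1", "lang2", "ne", "lang2", "other"]

print(cohen_kappa_score(annotator_a, annotator_b))
```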
Current Status
We have tested the tool for annotation on Arabic MSA and dialectal data, Chinese-English, Spanish-English, and Hindi-English. The IAA for our annotated Arabic data ranged between 92% and 97%. Moreover, a small portion of the previously released Code-Switching data was used to test the performance of WASA. We noticed that the annotators' speed increased substantially when we assigned initial tags to some tokens automatically (e.g., punctuation, URLs, emoticons, etc.). The average time for annotating a full tweet was ∼40 seconds without using the SPLIT tool (Al-Badrashiny et al., 2016), but after assigning the initial tags using the SPLIT tool, the average time for annotating a full tweet became ∼27 seconds. This saves much of the effort of annotating these tokens.
Conclusion
We gave a detailed overview of our annotation system WASA. We have shown that WASA allows multiple annotator teams to work on various tasks simultaneously. We have also seen that using the SPLIT tool to annotate some specific tokens automatically helps save the effort and time spent by annotators, and that the annotation quality of these tokens is very high. We will keep updating and modifying the current functionalities of the system based on feedback from the different user types. We also plan to add more functionality to help enhance the speed, quality, and efficiency of CS annotation.
Figure 2: An example of the annotation screen
Figure 3: An example of the annotator's "Check-Status" screen
Figure 4: A sample of an output file
Figure 5: Preprocessing and Cleaning Steps
Acknowledgements
We would like to thank Mahmoud Ghoneim for his invaluable suggestions and support in the development of WASA. We would also like to acknowledge the useful comments of the three anonymous reviewers, who helped make this publication better presented.
Bibliographical References
. M Al-Badrashiny, H Elfardy, M T Diab, Al-Badrashiny, M., Elfardy, H., and Diab, M. T. (2015).
Aida2: A hybrid approach for token and sentence level dialect identification in arabic. CoNLL. Aida2: A hybrid approach for token and sentence level dialect identification in arabic. In CoNLL, pages 42-51.
Split: Smart preprocessing (quasi) language independent tool. M Al-Badrashiny, A Pasha, M T Diab, N Habash, O Rambow, W Salloum, R Eskander, LREC. Al-Badrashiny, M., Pasha, A., Diab, M. T., Habash, N., Rambow, O., Salloum, W., and Eskander, R. (2016). Split: Smart preprocessing (quasi) language independent tool. In LREC.
Evolving a general framework for text alignment: Case studies with two south asian languages. N Aswani, R Gaizauskas, Proceedings of the International Conference on Machine Translation: Twenty-Five Years On. the International Conference on Machine Translation: Twenty-Five Years OnCranfield, Bedfordshire, UKAswani, N. and Gaizauskas, R. (2009). Evolving a gen- eral framework for text alignment: Case studies with two south asian languages. In Proceedings of the Interna- tional Conference on Machine Translation: Twenty-Five Years On, Cranfield, Bedfordshire, UK, November.
Pet: a tool for post-editing and assessing machine translation. W Aziz, S Castilho, L Specia, LREC. Aziz, W., Castilho, S., and Specia, L. (2012). Pet: a tool for post-editing and assessing machine translation. In LREC, pages 3982-3987.
A web application for dialectal arabic text annotation. Y Benajiba, M Diab, Proceedings of the lrec workshop for language resources (lrs) and human language technologies (hlt) for semitic languages: Status, updates, and prospects. the lrec workshop for language resources (lrs) and human language technologies (hlt) for semitic languages: Status, updates, and prospectsBenajiba, Y. and Diab, M. (2010). A web application for dialectal arabic text annotation. In Proceedings of the lrec workshop for language resources (lrs) and human language technologies (hlt) for semitic languages: Sta- tus, updates, and prospects.
Developing Language Processing Components with Gate Version. H Cunningham, D Maynard, K Bontcheva, V Tablan, C Ursu, M Dimitrov, M Dowman, N Aswani, I Roberts, Y Li, 5A User Guide). University of SheffieldCunningham, H., Maynard, D., Bontcheva, K., Tablan, V., Ursu, C., Dimitrov, M., Dowman, M., Aswani, N., Roberts, I., Li, Y., et al. (2009). Developing Language Processing Components with Gate Version 5:(A User Guide). University of Sheffield.
Colaba: Arabic dialect annotation and processing. M Diab, N Habash, O Rambow, M Altantawy, Y Benajiba, Lrec workshop on semitic language processing. Diab, M., Habash, N., Rambow, O., Altantawy, M., and Benajiba, Y. (2010). Colaba: Arabic dialect annotation and processing. In Lrec workshop on semitic language processing, pages 66-74.
Creating a large multi-layered representational repository of linguistic code switched arabic data. M Diab, M Ghoneim, A Hawwari, F Alghamdi, N Al-Marwani, M Badrashiny, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). the Tenth International Conference on Language Resources and Evaluation (LREC 2016)Paris, France, mayEuropean Language Resources Association (ELRADiab, M., Ghoneim, M., Hawwari, A., AlGhamdi, F., Al- Marwani, N., and Al-Badrashiny, M. (2016). Creating a large multi-layered representational repository of lin- guistic code switched arabic data. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France, may. Euro- pean Language Resources Association (ELRA).
Annotating errors in a hungarian learner corpus. M Dickinson, S Ledbetter, LREC. Dickinson, M. and Ledbetter, S. (2012). Annotating errors in a hungarian learner corpus. In LREC, pages 1659- 1664.
Annotea: an open rdf infrastructure for shared web annotations. J Kahan, M.-R Koivunen, E Prud'hommeaux, R R Swick, Computer Networks. 395Kahan, J., Koivunen, M.-R., Prud'Hommeaux, E., and Swick, R. R. (2002). Annotea: an open rdf infrastruc- ture for shared web annotations. Computer Networks, 39(5):589-608.
Mandiac: A web-based annotation system for manual arabic diacritization. O Obeid, H Bouamor, W Zaghouani, M Ghoneim, A Hawwari, S Alqahtani, M Diab, K Oflazer, The 2nd Workshop on Arabic Corpora and Processing Tools. 16Obeid, O., Bouamor, H., Zaghouani, W., Ghoneim, M., Hawwari, A., Alqahtani, S., Diab, M., and Oflazer, K. (2016). Mandiac: A web-based annotation system for manual arabic diacritization. In The 2nd Workshop on Arabic Corpora and Processing Tools 2016 Theme: So- cial Media, page 16.
Sawt: Sequence annotation web tool. Y Samih, W Maier, L Kallmeyer, EMNLP. 65Samih, Y., Maier, W., and Kallmeyer, L. (2016). Sawt: Sequence annotation web tool. EMNLP 2016, page 65.
Mnm: Ontology driven semi-automatic and automatic support for semantic markup. M Vargas-Vera, E Motta, J Domingue, M Lanzoni, A Stutt, F Ciravegna, International Conference on Knowledge Engineering and Knowledge Management. SpringerVargas-Vera, M., Motta, E., Domingue, J., Lanzoni, M., Stutt, A., and Ciravegna, F. (2002). Mnm: Ontology driven semi-automatic and automatic support for seman- tic markup. In International Conference on Knowledge Engineering and Knowledge Management, pages 379- 391. Springer.
Webanno: A flexible, web-based and visually supported system for distributed annotations. S M Yimam, I Gurevych, R Eckart De Castilho, C Biemann, Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations. the 51st Annual Meeting of the Association for Computational Linguistics: System DemonstrationsSofia, BulgariaAssociation for Computational LinguisticsYimam, S. M., Gurevych, I., Eckart de Castilho, R., and Biemann, C. (2013). Webanno: A flexible, web-based and visually supported system for distributed annota- tions. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 1-6, Sofia, Bulgaria, August. As- sociation for Computational Linguistics.
| [] |
[
"QEMind: Alibaba's Submission to the WMT21 Quality Estimation Shared Task",
"QEMind: Alibaba's Submission to the WMT21 Quality Estimation Shared Task",
"QEMind: Alibaba's Submission to the WMT21 Quality Estimation Shared Task",
"QEMind: Alibaba's Submission to the WMT21 Quality Estimation Shared Task"
] | [
"Jiayi Wang \nAlibaba Group\nHangzhouChina\n",
"Ke Wang \nAlibaba Group\nHangzhouChina\n",
"Boxing Chen boxing.cbx@alibaba-inc.com \nAlibaba Group\nHangzhouChina\n",
"Yu Zhao \nAlibaba Group\nHangzhouChina\n",
"Weihua Luo \nAlibaba Group\nHangzhouChina\n",
"Yuqi Zhang \nAlibaba Group\nHangzhouChina\n",
"Jiayi Wang \nAlibaba Group\nHangzhouChina\n",
"Ke Wang \nAlibaba Group\nHangzhouChina\n",
"Boxing Chen boxing.cbx@alibaba-inc.com \nAlibaba Group\nHangzhouChina\n",
"Yu Zhao \nAlibaba Group\nHangzhouChina\n",
"Weihua Luo \nAlibaba Group\nHangzhouChina\n",
"Yuqi Zhang \nAlibaba Group\nHangzhouChina\n"
] | [
"Alibaba Group\nHangzhouChina",
"Alibaba Group\nHangzhouChina",
"Alibaba Group\nHangzhouChina",
"Alibaba Group\nHangzhouChina",
"Alibaba Group\nHangzhouChina",
"Alibaba Group\nHangzhouChina",
"Alibaba Group\nHangzhouChina",
"Alibaba Group\nHangzhouChina",
"Alibaba Group\nHangzhouChina",
"Alibaba Group\nHangzhouChina",
"Alibaba Group\nHangzhouChina",
"Alibaba Group\nHangzhouChina"
] | [] | Quality Estimation, as a crucial step of quality control for machine translation, has been explored for years. The goal is to investigate automatic methods for estimating the quality of machine translation results without reference translations. In this year's WMT QE shared task, we utilize the large-scale XLM-Roberta pre-trained model and additionally propose several useful features to evaluate the uncertainty of the translations to build our QE system, named QEMind. The system has been applied to the sentence-level scoring task of Direct Assessment and the binary score prediction task of Critical Error Detection. In this paper, we present our submissions to the WMT 2021 QE shared task and an extensive set of experimental results have shown us that our multilingual systems outperform the best system in the Direct Assessment QE task of WMT 2020. | null | [
"https://arxiv.org/pdf/2112.14890v1.pdf"
] | 245,634,662 | 2112.14890 | 84a3b94ecf18346d37978425e7a4e0f48954c304 |
QEMind: Alibaba's Submission to the WMT21 Quality Estimation Shared Task
Jiayi Wang
Alibaba Group
HangzhouChina
Ke Wang
Alibaba Group
HangzhouChina
Boxing Chen boxing.cbx@alibaba-inc.com
Alibaba Group
HangzhouChina
Yu Zhao
Alibaba Group
HangzhouChina
Weihua Luo
Alibaba Group
HangzhouChina
Yuqi Zhang
Alibaba Group
HangzhouChina
QEMind: Alibaba's Submission to the WMT21 Quality Estimation Shared Task
Quality Estimation, as a crucial step of quality control for machine translation, has been explored for years. The goal is to investigate automatic methods for estimating the quality of machine translation results without reference translations. In this year's WMT QE shared task, we utilize the large-scale XLM-Roberta pre-trained model and additionally propose several useful features to evaluate the uncertainty of the translations to build our QE system, named QEMind. The system has been applied to the sentence-level scoring task of Direct Assessment and the binary score prediction task of Critical Error Detection. In this paper, we present our submissions to the WMT 2021 QE shared task and an extensive set of experimental results have shown us that our multilingual systems outperform the best system in the Direct Assessment QE task of WMT 2020.
Introduction
Quality estimation (QE) aims to predict the quality of a machine translation (MT) system's output without any access to ground-truth translation references or human intervention (Blatz et al., 2004; Specia et al., 2009, 2018). Automatic methods for QE are highly appreciated in MT applications when we want to efficiently obtain quality indications for a large number of machine translation outputs in a short time, or even at run-time. This paper describes Alibaba's submissions to the WMT 2021 Quality Estimation Shared Task. We developed a novel QE system, called QEMind, which has been applied to two tasks this year: sentence-level Direct Assessment (DA) and binary score prediction for Critical Error Detection (CED).
Common approaches in previous years relied heavily on human-crafted, rule-based feature engineering frameworks such as QuEst++ (Specia et al., 2015).
* indicates equal contribution.
† indicates corresponding author.
The extracted features are usually fed into traditional machine learning algorithms, such as a support vector regression for sentence-level scoring or a sequence-labeling model with conditional random fields for word-level labeling. With the development of neural networks for machine translation and other NLP tasks, a neural predictor-estimator framework for QE was proposed and achieved better results in the WMT 2017 and WMT 2018 QE shared tasks (Fan et al., 2019; Kim et al., 2017). This framework requires a pre-training procedure with a large amount of parallel corpora for the predictor model and stacks a downstream estimator model with additional layers for a supervised regression or classification task. Since 2019, state-of-the-art (SOTA) QE systems (Kepler et al., 2019; Ranasinghe et al., 2020) have hit record highs with transfer learning, by leveraging SOTA pre-trained NLP models such as mBERT (Pires et al., 2019) and XLM-Roberta (Conneau et al., 2019). Until then, mainly "black-box" QE methods had been used in the WMT QE shared tasks. Furthermore, with access to the NMT systems, some "glass-box" QE features have been explored and verified to bring improvements over "black-box" approaches (Moura et al., 2020). In addition, Fomicheva et al. (2020) showed that useful information extracted from the MT systems correlates well with human judgements of quality. Inspired by these works, we propose more useful features in this paper, some of which are derived from the NMT systems while others are created by utilizing the masked language model of XLM-Roberta. We develop our QE systems by incorporating all the features that can potentially evaluate the uncertainty of the machine translations into a supervised QE model based on transfer learning from XLM-Roberta. We evaluate our method on the Direct Assessment QE tasks of WMT 2020 and WMT 2021, and our experimental results demonstrate the effectiveness and versatility of the proposed features for quality estimation across different language pairs.
Task & Data Set
We participate in the sentence-level Direct Assessment task and the Critical Error Detection task of this year's QE shared task. (1) For the DA task, we merge the 7000 and 1000 labeled examples in the training and development data sets as our training set and treat the test20 data set as our development set for each of the seven language pairs. For the four zero-shot language pairs, we only have the blind test sets. (2) For the CED task, we observed that the distributions of the two classes, NOT and ERR, are extremely unbalanced for all four language pairs. Therefore, we simply up-sample the examples with ERR labels to get a relatively balanced training set, as sketched below. This data augmentation strategy has also been empirically verified to be effective.
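As an illustration of the up-sampling step, here is a minimal sketch (our own, not the official data pipeline); the `label_key` field name is an assumption about how examples are stored:

```python
import random

def upsample_minority(examples, label_key="label", seed=0):
    """Duplicate minority-class examples (e.g. ERR) until the classes are balanced."""
    rng = random.Random(seed)
    by_label = {}
    for ex in examples:
        by_label.setdefault(ex[label_key], []).append(ex)
    target = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        reps, rem = divmod(target, len(group))      # how many full copies plus a remainder
        balanced.extend(group * reps + rng.sample(group, rem))
    rng.shuffle(balanced)
    return balanced
```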
Methodology
In this section, we provide a complete view of our uncertainty feature enhanced approach, including:
(1) The overall framework of QEMind is carried out in Section 3.1: how uncertainty features are combined with a pre-trained multilingual language model to enhance transfer learning;
(2) Uncertainty features used in QEMind are described in Section 3.2: how uncertainty features are defined and extracted for translation quality estimation;
(3) Strategies we applied in the WMT QE shared task to further improve the system's performance, such as data augmentation and model ensemble, are explained in Section 3.3.
QEMind Framework
QEMind follows the general transfer learning procedure while allowing extra meta features to enhance the model. We concatenate the source text and the machine translation and feed them into the pre-trained XLM-Roberta model to obtain the output representation of the special [CLS] token. Afterwards, this representation is combined with the normalized uncertainty features described in Section 3.2, and the result is fed into a simple linear regression/classification layer to predict the continuous or binary quality score. The architecture of our feature-enhanced model is shown in Figure 1. This model is equivalent to TransQuest's (Ranasinghe et al., 2020) when no extra feature is used. Considering that the training set is small, we have not added extra parameters, such as the bottleneck adapter layers used in Moura et al. (2020), to fuse the uncertainty features with the output from XLM-Roberta.

Uncertainty Features

Fomicheva et al. (2020) proposed several "glass-box" features extracted from the NMT model. Estimating translation quality with these features achieves state-of-the-art results as an unsupervised approach. However, the performance of this approach is still far below that of supervised models built via transfer learning (Ranasinghe et al., 2020). Moura et al. (2020) combined limited "glass-box" features with the hidden state of a bottleneck adapter layer attached to the output of XLM-Roberta, and the results indicate that these features can bring slight but significant improvements to the transfer learning model. Wang et al. (2021) proposed more unsupervised "glass-box" and "black-box" QE features and further investigated the contribution of each one to the QE model's performance via a feature-enhanced supervised model. Inspired by their work, we explore uncertainty quantification in more depth in this section to obtain uncertainty features that enhance the transfer learning model. First, we extend the "glass-box" features of Fomicheva et al. (2020) to the Decoding Probability Features and the Monte Carlo Dropout Features; the Noised Data Features are then proposed in a spirit similar to the Monte Carlo Dropout Features.

Decoding Probability Features. For auto-regressive sequence generation models like Transformers (Vaswani et al., 2017), the decoding probability at each step can be extracted directly from the softmax layer in a "glass-box" setting:

$P_{\text{step}}^{(x,t,\theta)} = \log P(y_t \mid y_{<t}, x, \theta)$    (1)
where $x$ represents the input source text and $y$ is the output machine translation. $P_{\text{step}}$ is a probability sequence with the same length as the generated sequence $y$. Three statistical indicators of $P_{\text{step}}$ can be used to estimate the uncertainty of the output: the expectation, the standard deviation, and the combined ratio of the two:
$E(P_{\text{step}} \mid x, \theta) = \frac{1}{T}\sum_{t=1}^{T} P_{\text{step}}^{(x,t,\theta)}$    (2)

$\sigma(P_{\text{step}} \mid x, \theta) = \sqrt{E(P_{\text{step}}^{2} \mid x, \theta) - E^{2}(P_{\text{step}} \mid x, \theta)}$    (3)

$\text{Combo}(P_{\text{step}} \mid x, \theta) = \dfrac{E(P_{\text{step}} \mid x, \theta)}{\sigma(P_{\text{step}} \mid x, \theta)}$    (4)
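As a concrete illustration (our sketch, not the authors' code), the three statistics of Eqs. (2)-(4) can be computed from the per-token log-probabilities of one translation:

```python
import math

def step_prob_features(token_logprobs):
    """Expectation, standard deviation and combined ratio of the step log-probabilities."""
    T = len(token_logprobs)
    mean = sum(token_logprobs) / T
    var = sum(p * p for p in token_logprobs) / T - mean * mean
    std = math.sqrt(max(var, 0.0))          # guard against tiny negative values
    combo = mean / std if std > 0 else 0.0  # combined ratio E / sigma
    return mean, std, combo
```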
Intuitively, a larger expectation, a smaller deviation, and a larger combined ratio of $P_{\text{step}}$ indicate lower uncertainty and higher quality. $P_{\text{step}}$ is an extended version of the TP feature in Fomicheva et al. (2020), and the expectation of $P_{\text{step}}$ is the same as TP.

Monte Carlo Dropout Features. Monte Carlo (MC) dropout sampling, as exploited in Gal and Ghahramani (2016), is an efficient "glass-box" approach to estimate uncertainty. It enables random dropout on neural networks during inference, and the predictive probabilities obtained through different sampling paths are used as measures of uncertainty (Fomicheva et al., 2020). The output sequences $\hat{y}$ sampled across stochastic forward passes by MC dropout with different sampled model parameters $\hat{\theta}$ can differ as well. If $y$ is a high-quality output with low uncertainty, the Monte Carlo sampled outputs $\hat{y}$ should be close to $y$ and the variance of $\hat{y}$ should be low. Hence, two measurements based on the text similarity of the samples are carried out here:
$MC\text{-}Sim = \text{Sim}(y, \hat{y}_i)$    (5)

$MC\text{-}Sim\text{-}Inner = \frac{1}{N}\sum_{j=1}^{N} \text{Sim}(\hat{y}_i, \hat{y}_j)$    (6)
where $\hat{y}_i$ is the $i$-th sample of $\hat{y}$, and $1 \le i \le N$. For the similarity score function, as in Fomicheva et al. (2020), the Meteor metric (Denkowski and Lavie, 2014) is applied. Besides, as a sentence-level probability score, $E(P_{\text{step}})$ can also be calculated with different model parameters $\hat{\theta}$ obtained by MC dropout sampling:
$MC\text{-}P_{\text{step}} = E(P_{\text{step}} \mid x, \hat{\theta})$    (7)
The expectation, standard deviation, and combined ratio of MC-Sim, MC-Sim-Inner and MC-$P_{\text{step}}$ are calculated over all MC dropout samples and are used as "glass-box" uncertainty features. Among them, $E(MC\text{-}P_{\text{step}})$, $\sigma(MC\text{-}P_{\text{step}})$, $\text{Combo}(MC\text{-}P_{\text{step}})$, and $E(MC\text{-}Sim\text{-}Inner)$ are equivalent to D-TP, D-Var, D-Combo, and D-Lex-Sim in Fomicheva et al. (2020), respectively.

Noised Data Features. The Monte Carlo dropout approaches mentioned above can be regarded as a robustness test of the NMT model. Given their validity in Fomicheva et al. (2020), it is rational to believe that a similar approach with appropriate noise in the MT input may perform comparably. Therefore, we define the following uncertainty features, analogous to MC-Sim, MC-Sim-Inner and MC-$P_{\text{step}}$. The differences are: (1) the NMT model weights $\theta$ are fixed, without MC dropout sampling; (2) the model decodes translations $\tilde{y}$ from a noised input $\tilde{x}$.
$Noise\text{-}Sim = \text{Sim}(y, \tilde{y}_i)$    (8)

$Noise\text{-}Sim\text{-}Inner = \frac{1}{N}\sum_{j=1}^{N} \text{Sim}(\tilde{y}_i, \tilde{y}_j)$    (9)

$Noise\text{-}P_{\text{step}} = E(P_{\text{step}} \mid \tilde{x}, \theta)$    (10)
One crucial point in designing this type of features is how to generate the noised input $\tilde{x}$. One solution is a "black-box" way that takes advantage of the masking strategy of the pre-trained XLM-Roberta: we can mask some words in the source text and obtain a noised source text from the predictions of the pre-trained model at the masked positions. This simple approach only substitutes tokens of $x$ with the [mask] token, which limits the diversity of the noised sample inputs. To enrich the variety of $\tilde{x}$, we adjust the imitation learning algorithm in Wang et al. (2020) to a simplified version: we "post-edit" the input $x$ by randomly deleting tokens and inserting masks for several rounds to get $x_{\text{mask}}$; then the pre-trained XLM-R is used as a masked language model to predict the tokens at the masked positions of $x_{\text{mask}}$, yielding the post-edited $\tilde{x}$. Pseudo code of this "post-editing" procedure is provided in Algorithm 1.

Algorithm 1: Generate Noised Input with "Post-Editing"
Require: input $x = \{x_t \mid t = 1, 2, \ldots, T\}$, hyper-parameters $R$, $p_i$, $p_d$.
1: Initialize $x_{\text{mask}} = x$
2: for $r = 1, \ldots, R$ do
3:   $x_{\text{mask}}$ = randomly delete tokens from $x_{\text{mask}}$ with probability $p_d$
4:   $x_{\text{mask}}$ = randomly insert special <mask> tokens into $x_{\text{mask}}$ with probability $p_i$
5: end for
6: $\tilde{x} = MLM(x_{\text{mask}})$, where $MLM$ is a pre-trained masked language model
7: return $\tilde{x}$
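A hedged Python sketch of Algorithm 1 follows; `mlm_fill` stands for an assumed helper that replaces every `<mask>` token using a pre-trained masked language model (e.g. XLM-R), and the default hyper-parameter values are illustrative only:

```python
import random

def make_noised_input(tokens, mlm_fill, rounds=2, p_del=0.1, p_ins=0.1, seed=0):
    """Sketch of Algorithm 1: randomly delete tokens and insert <mask> tokens for
    several rounds, then let a masked language model fill in the masks."""
    rng = random.Random(seed)
    noised = list(tokens)
    for _ in range(rounds):
        # Randomly delete tokens with probability p_del.
        noised = [tok for tok in noised if rng.random() > p_del]
        # Randomly insert a <mask> token before each position with probability p_ins.
        with_masks = []
        for tok in noised:
            if rng.random() < p_ins:
                with_masks.append("<mask>")
            with_masks.append(tok)
        noised = with_masks
    # mlm_fill (assumed helper) fills every <mask> token and returns the edited tokens.
    return mlm_fill(noised)
```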
Strategies
Multilingual Training. Considering zero-shot language pairs in the DA task, we mix up all seven language pairs' training data to fine-tune the XLM-Roberta model and predict on the whole test set including zero-shot language pairs. We have tried two different ways of mixing up training data from different language pairs to fine-tune XLM-Roberta:
(1) source sentence + translation sentence; (2) English sentence + non-English sentence. Our experimental results demonstrate that multilingual models usually perform better than bilingual models trained on a single language pair, but there is no prominent difference in performance between the two multilingual strategies. We keep both multilingual models and bilingual models for model ensemble.
Data Augmentation. Two data augmentation strategies are applied for the CED task. First, considering the imbalance between positive and negative samples in the CED dataset, we up-sample the data with ERR labels in each language pair to obtain a balanced dataset. Secondly, inspired by examples provided by the organizer, we have also tried to replace the original machine translation with a back-translated sentence, in the hope that the gap between the source sentence and the back-translated sentence can provide insights for the detection of potential critical errors. The back translations come from the released ML50 multilingual translation model (Tang et al., 2020).
Model Ensemble. For the DA task, models trained with different multilingual strategies and different uncertainty features are ensembled by averaging predicted scores. While for the CED task, we average classification probability outputs from models trained with different data augmentation strategies and uncertainty features to obtain ensemble results. We apply a greedy ensemble strategy. First, all models are sorted by their performance on the development sets. Then, upon the best single model, we take one more model into the ensemble at each step until there is no more performance gain on the development sets or the maximum step is reached. We set the maximum step to avoid overfitting on the development sets.
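A simplified sketch of the greedy ensemble selection described above; the names are ours, and `score_fn` stands for the development-set metric (e.g. Pearson's r for DA or MCC for CED):

```python
def greedy_ensemble(model_preds, dev_labels, score_fn, max_models=5):
    """Greedy selection: start from the best single model and keep adding the next-best
    model while the averaged prediction still improves the development-set score."""
    ranked = sorted(model_preds,
                    key=lambda name: score_fn(model_preds[name], dev_labels),
                    reverse=True)
    chosen = [ranked[0]]
    best = score_fn(model_preds[ranked[0]], dev_labels)
    for name in ranked[1:max_models]:
        candidate = chosen + [name]
        averaged = [sum(model_preds[m][i] for m in candidate) / len(candidate)
                    for i in range(len(dev_labels))]
        score = score_fn(averaged, dev_labels)
        if score <= best:          # stop as soon as there is no further gain
            break
        chosen, best = candidate, score
    return chosen, best
```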
Experiments
Model Settings
We follow the model settings of TransQuest (Ranasinghe et al., 2020) to fine-tune our QE model based on the XLM-Roberta large model with a classification/regression head on a single P100 GPU. The training batch size is set to 8, and the training process takes about 2 hours to converge. For the DA task, the total number of parameters of QEMind with uncertainty features is 560,981,507; without uncertainty features, it is 560,941,571. For the CED task, the numbers of parameters with and without uncertainty features are 560,982,532 and 560,942,596, respectively.
Experiments of DA task
We conduct all experiments and evaluate our models on last year's test sets to optimize the model configuration for each language pair. In particular, the model that performs best on average across all seven language pairs is selected to generate our submissions for this year's task. The Pearson's correlations between our model's predictions and the human DA judgements (z-standardized mean DA scores) are shown in Table 1. TransQuest Single and TransQuest Ensemble are the best single and ensemble models of Ranasinghe et al. (2020), the winning system of last year's DA task. QEMind-Bi and QEMind-Multi are models without uncertainty features; the difference between them is whether the model is trained on bilingual data or on mixed multilingual data. QEMind-Multi + UNC is the complete QEMind model enhanced by the various uncertainty features described in Section 3.2. Finally, predictions from bilingual models, multilingual models, and uncertainty-feature-enhanced models are ensembled following Section 3.3, marked as QEMind Ensemble in the table.
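For reference, sentence-level DA systems are scored with Pearson's correlation; a tiny illustration with made-up numbers (the real inputs are the system scores and the z-standardized mean DA scores of a test set):

```python
from scipy.stats import pearsonr

# Illustrative values only.
predictions = [0.12, -0.50, 0.33, 0.80, -0.10]
da_scores = [0.20, -0.40, 0.10, 0.90, -0.30]

r, _ = pearsonr(predictions, da_scores)
print(f"Pearson's r = {r:.4f}")
```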
Results on the DA test sets of WMT 2020 show that: (1) multilingual strategies work well on this task, especially for high-resource language pairs;
(2) the uncertainty-feature-enhanced multilingual model achieves the highest performance among all single models, which verifies that these uncertainty features are useful for all language pairs and can be fused into multilingual models; (3) ensembling multiple models with different settings can further improve the performance of the QEMind systems.
We pick the best single and ensemble models for each language pair and produce predictions on the newly released blind test sets of WMT 2021, including the 4 zero-shot language pairs. Results of Pearson's correlations are shown in Table 2 and Table 3.
Experiments of CED task
We test different strategies and uncertainty features on the CED development sets. Brief results in terms of Matthews correlation (MCC) on the development sets are shown in Table 4. All models are trained on the up-sampled training data of each language pair. From these results, we observe that, compared to QEMind, which only applies up-sampling to the training data, the back-translation (QEMind + BK) and uncertainty feature (QEMind + UNC) strategies achieve comparable or better performance. The ensemble of all these models brings a significant improvement. Similar to the DA task, the best single and ensemble models are picked to generate our final submissions. Results on this year's test sets are listed in Table 5.
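For reference, the CED task is scored with the Matthews correlation coefficient; a tiny illustration with made-up binary labels (1 = ERR, 0 = NOT):

```python
from sklearn.metrics import matthews_corrcoef

# Illustrative labels only.
gold = [0, 0, 1, 0, 1, 1, 0, 0]
pred = [0, 0, 1, 0, 0, 1, 0, 1]

print(f"MCC = {matthews_corrcoef(gold, pred):.4f}")
```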
Conclusion
This paper introduces our machine translation quality estimation model, QEMind, for the sentence-level Direct Assessment and Critical Error Detection tasks of WMT 2021. We propose novel features to estimate the uncertainty of machine translations and incorporate them into transfer learning from the large-scale pre-trained model XLM-Roberta. Besides, three strategies are used to further improve the QE system's performance: multilingual training, data augmentation, and model ensemble. Our system achieved the first rank in average Pearson's correlation across all language pairs, including the zero-shot ones, in the multilingual DA task of WMT 2021.
Figure 1: Structure of the uncertainty quantification feature-enhanced model.
Table 1: Pearson's correlations between model predictions and human DA judgements on the WMT 2020 QE test sets.

Table 2: Pearson's correlation results of the 2021 DA task on non-zero-shot language pairs.

                    High-Resource    Mid-Resource              Low-Resource
Model               En-De   En-Zh    Et-En   Ro-En   Ru-En     Si-En   Ne-En
Official Baseline   0.4025  0.5248   0.6601  0.8175  0.6766    0.5127  0.7376
QEMind Single       0.5281  0.5635   0.7909  0.8954  0.7893    0.5769  0.8406
QEMind Ensemble     0.5666  0.6025   0.8117  0.9082  0.8060    0.5956  0.8667

Table 3: Pearson's correlation results of the 2021 DA task on zero-shot language pairs.

Model               En-Ja   En-Cs   Km-En   Ps-En
Official Baseline   0.2301  0.3518  0.5623  0.4760
QEMind Single       0.3354  0.5456  0.6509  0.6159
QEMind Ensemble     0.3589  0.5816  0.6787  0.6474

Table 4: Matthews correlation results of the WMT 2021 CED task on the development sets.

Model               En-Cs   En-De   En-Ja   En-Zh
QEMind              0.3915  0.4629  0.2559  0.2629
QEMind + BK         0.4257  0.4914  0.2471  0.2800
QEMind + UNC        0.4111  0.4859  0.2606  0.2897
QEMind Ensemble     0.4864  0.5257  0.3325  0.3587

Table 5: Matthews correlation results of the WMT 2021 CED task on the test sets.

Model               En-Cs   En-De   En-Ja   En-Zh
Official Baseline   0.3875  0.3974  0.2139  0.1873
QEMind-Single       0.4129  0.4257  0.2139  0.2356
QEMind Ensemble     0.4539  0.4797  0.2601  0.2777
Acknowledgements
This work is supported by the National Key R&D Program of China (2018YFB1403202).
Confidence estimation for machine translation. John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Coling 2004: Proceedings of the 20th international conference on computational linguistics. Alberto Sanchis, and Nicola UeffingJohn Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto San- chis, and Nicola Ueffing. 2004. Confidence esti- mation for machine translation. In Coling 2004: Proceedings of the 20th international conference on computational linguistics, pages 315-321.
Unsupervised cross-lingual representation learning at scale. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov, arXiv:1911.02116arXiv preprintAlexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
Meteor universal: Language specific translation evaluation for any target language. Michael Denkowski, Alon Lavie, Proceedings of the ninth workshop on statistical machine translation. the ninth workshop on statistical machine translationMichael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation, pages 376-380.
bilingual expert" can find translation errors. Kai Fan, Jiayi Wang, Bo Li, Fengming Zhou, Boxing Chen, Luo Si, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence33Kai Fan, Jiayi Wang, Bo Li, Fengming Zhou, Boxing Chen, and Luo Si. 2019. "bilingual expert" can find translation errors. In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 33, pages 6367-6374.
Unsupervised quality estimation for neural machine translation. Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Francisco Guzmán, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, Lucia Specia, Transactions of the Association for Computational Linguistics. 8Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Francisco Guzmán, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020. Unsupervised quality estimation for neural machine translation. Transactions of the As- sociation for Computational Linguistics, 8:539-555.
Dropout as a bayesian approximation: Representing model uncertainty in deep learning. Yarin Gal, Zoubin Ghahramani, PMLRinternational conference on machine learning. Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncer- tainty in deep learning. In international conference on machine learning, pages 1050-1059. PMLR.
Unbabel's participation in the wmt19 translation quality estimation shared task. Fabio Kepler, Jonay Trénous, Marcos Treviso, Miguel Vera, António Góis, Amin Farajian, V António, André Ft Lopes, Martins, arXiv:1907.10352arXiv preprintFabio Kepler, Jonay Trénous, Marcos Treviso, Miguel Vera, António Góis, M Amin Farajian, António V Lopes, and André FT Martins. 2019. Unbabel's par- ticipation in the wmt19 translation quality estima- tion shared task. arXiv preprint arXiv:1907.10352.
Predictorestimator: Neural quality estimation based on target word prediction for machine translation. Hyun Kim, Hun-Young Jung, Hongseok Kwon, Jong-Hyeok Lee, Seung-Hoon Na, ACM Transactions on Asian and Low-Resource Language Information Processing. 171TALLIPHyun Kim, Hun-Young Jung, Hongseok Kwon, Jong- Hyeok Lee, and Seung-Hoon Na. 2017. Predictor- estimator: Neural quality estimation based on tar- get word prediction for machine translation. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 17(1):1-22.
Ist-unbabel participation in the wmt20 quality estimation shared task. Joao Moura, Miguel Vera, Fabio Daan Van Stigt, André Ft Kepler, Martins, Proceedings of the Fifth Conference on Machine Translation. the Fifth Conference on Machine TranslationJoao Moura, Miguel Vera, Daan van Stigt, Fabio Ke- pler, and André FT Martins. 2020. Ist-unbabel par- ticipation in the wmt20 quality estimation shared task. In Proceedings of the Fifth Conference on Ma- chine Translation, pages 1029-1036.
How multilingual is multilingual bert?. Telmo Pires, Eva Schlinger, Dan Garrette, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsTelmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual bert? In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001.
Transquest at wmt2020: Sentencelevel direct assessment. Tharindu Ranasinghe, Constantin Orǎsan, Ruslan Mitkov, Proceedings of the Fifth Conference on Machine Translation. the Fifth Conference on Machine TranslationTharindu Ranasinghe, Constantin Orǎsan, and Ruslan Mitkov. 2020. Transquest at wmt2020: Sentence- level direct assessment. In Proceedings of the Fifth Conference on Machine Translation, pages 1049- 1055.
Lucia Specia, Frédéric Blain, Varvara Logacheva, Ramón Astudillo, André Martins, Findings of the wmt 2018 shared task on quality estimation. Association for Computational LinguisticsLucia Specia, Frédéric Blain, Varvara Logacheva, Ramón Astudillo, and André Martins. 2018. Find- ings of the wmt 2018 shared task on quality estima- tion. Association for Computational Linguistics.
Multi-level translation quality prediction with quest++. Lucia Specia, Gustavo Paetzold, Carolina Scarton, Proceedings of ACL-IJCNLP 2015 System Demonstrations. ACL-IJCNLP 2015 System DemonstrationsLucia Specia, Gustavo Paetzold, and Carolina Scarton. 2015. Multi-level translation quality prediction with quest++. In Proceedings of ACL-IJCNLP 2015 Sys- tem Demonstrations, pages 115-120.
Estimating the sentence-level quality of machine translation systems. Lucia Specia, Marco Turchi, Nicola Cancedda, Marc Dymetman, Nello Cristianini, 13th Conference of the European Association for Machine Translation. Lucia Specia, Marco Turchi, Nicola Cancedda, Marc Dymetman, and Nello Cristianini. 2009. Estimating the sentence-level quality of machine translation sys- tems. In 13th Conference of the European Associa- tion for Machine Translation, pages 28-37.
Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Na- man Goyal, Vishrav Chaudhary, Jiatao Gu, and An- gela Fan. 2020. Multilingual translation with exten- sible multilingual pretraining and finetuning.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, Illia Polosukhin, NIPS. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
Beyond glassbox features: Uncertainty quantification enhanced quality estimation for neural machine translation. Ke Wang, Yangbin Shi, Jiayi Wang, Yuqi Zhang, Yu Zhao, Xiaolin Zheng, Ke Wang, Yangbin Shi, Jiayi Wang, Yuqi Zhang, Yu Zhao, and Xiaolin Zheng. 2021. Beyond glass- box features: Uncertainty quantification enhanced quality estimation for neural machine translation.
Computer assisted translation with neural quality estimation and auotmatic postediting. Ke Wang, Jiayi Wang, Niyu Ge, Yangbin Shi, Yu Zhao, Kai Fan, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings. the 2020 Conference on Empirical Methods in Natural Language Processing: FindingsKe Wang, Jiayi Wang, Niyu Ge, Yangbin Shi, Yu Zhao, and Kai Fan. 2020. Computer assisted translation with neural quality estimation and auotmatic post- editing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2175-2186.
| [] |
[
"Layer Trajectory LSTM",
"Layer Trajectory LSTM"
] | [
"Jinyu Li jinyli@microsoft.com \nMicrosoft AI and Research\n\n",
"Changliang Liu \nMicrosoft AI and Research\n\n",
"Yifan Gong ygong@microsoft.com \nMicrosoft AI and Research\n\n"
] | [
"Microsoft AI and Research\n",
"Microsoft AI and Research\n",
"Microsoft AI and Research\n"
] | [] | It is popular to stack LSTM layers to get better modeling power, especially when large amount of training data is available. However, an LSTM-RNN with too many vanilla LSTM layers is very hard to train and there still exists the gradient vanishing issue if the network goes too deep. This issue can be partially solved by adding skip connections between layers, such as residual LSTM. In this paper, we propose a layer trajectory LSTM (ltLSTM) which builds a layer-LSTM using all the layer outputs from a standard multi-layer time-LSTM. This layer-LSTM scans the outputs from time-LSTMs, and uses the summarized layer trajectory information for final senone classification. The forward-propagation of time-LSTM and layer-LSTM can be handled in two separate threads in parallel so that the network computation time is the same as the standard time-LSTM. With a layer-LSTM running through layers, a gated path is provided from the output layer to the bottom layer, alleviating the gradient vanishing issue. Trained with 30 thousand hours of EN-US Microsoft internal data, the proposed ltLSTM performed significantly better than the standard multi-layer LSTM and residual LSTM, with up to 9.0% relative word error rate reduction across different tasks. | 10.21437/interspeech.2018-1485 | [
"https://arxiv.org/pdf/1808.09522v1.pdf"
] | 52,122,400 | 1808.09522 | bbd42675e9829c0a235fd412ef829557e7a29f98 |
Layer Trajectory LSTM
Jinyu Li jinyli@microsoft.com
Microsoft AI and Research
Changliang Liu
Microsoft AI and Research
Yifan Gong ygong@microsoft.com
Microsoft AI and Research
Layer Trajectory LSTM
Index Terms: speech recognition, LSTM, layer trajectory, factorized gate
It is popular to stack LSTM layers to get better modeling power, especially when large amount of training data is available. However, an LSTM-RNN with too many vanilla LSTM layers is very hard to train and there still exists the gradient vanishing issue if the network goes too deep. This issue can be partially solved by adding skip connections between layers, such as residual LSTM. In this paper, we propose a layer trajectory LSTM (ltLSTM) which builds a layer-LSTM using all the layer outputs from a standard multi-layer time-LSTM. This layer-LSTM scans the outputs from time-LSTMs, and uses the summarized layer trajectory information for final senone classification. The forward-propagation of time-LSTM and layer-LSTM can be handled in two separate threads in parallel so that the network computation time is the same as the standard time-LSTM. With a layer-LSTM running through layers, a gated path is provided from the output layer to the bottom layer, alleviating the gradient vanishing issue. Trained with 30 thousand hours of EN-US Microsoft internal data, the proposed ltLSTM performed significantly better than the standard multi-layer LSTM and residual LSTM, with up to 9.0% relative word error rate reduction across different tasks.
Introduction
Recently, significant progress has been made in automatic speech recognition (ASR) when switching from the deep neural networks (DNNs) [1] to recurrent neural networks (RNNs) with long short-term memory (LSTM) units [2], which solve the gradient vanishing or exploding issues in standard RNNs by using input, output and forget gates to achieve a network that can maintain state and propagate gradients in a stable fashion over long spans of time. These LSTM-RNNs [3,4,5,6,7] and their variants such as two-dimensional LSTM-RNNs [8,9,10] have been shown to outperform DNNs on a variety of ASR tasks.
It is popular to stack multiple LSTM layers to get better modeling power [4], especially when large amount of training data is available. However, an LSTM-RNN with too many vanilla LSTM layers is very hard to train and there still exists the gradient vanishing issue if the network goes too deep [11,12]. This issue can be partially solved by adding skip connections or gating functions between layers.
Residual LSTM [13,14] uses shortcut connections between LSTM layers, and hence provides a way to alleviate the gradient vanishing problem. In the highway LSTM [11], memory cells of adjacent layers are connected by gated direct links which provide a path for information to flow between layers more directly without decay. Therefore, it alleviates the gradient vanishing issue and enables the training of much deeper LSTM-RNN networks. In [15], highway LSTM was investigated with large scale of training data, but only very limited gain was obtained over the standard multi-layer LSTM. Grid LSTM [16] is a more general LSTM which arranges the LSTM memory cells into a multidimensional grid along both time and layer axis. It was extended in [12,17] as prioritized grid LSTM which was shown to outperform highway LSTM on several ASR tasks.
All the aforementioned models work in a layer-by-layer and step-by-step fashion. The output of a LSTM unit (either the standard time LSTM or grid LSTM) is used as the input of the LSTM at the same time step in the next layer and the recurrent input of the LSTM at the next time step in the same layer. The output of the highest layer LSTM is used for final senone (tied triphone states) classification. However, it may not be optimal that the LSTM outputs serve the purpose of both recurrence along time axis (for temporal modeling) and senone classification along the layer axis (for target discrimination).
In this paper, we decouple the purposes of time recurrence and senone classification by proposing a layer trajectory LSTM (ltLSTM) which builds a layer-LSTM using the outputs from all the layers of a standard multi-layer time-LSTM. The time-LSTM is used for temporal modeling with time recurrence, while the layer-LSTM scans the outputs from multiple time-LSTM layers and uses the summarized layer trajectory information for final senone classification. Hence, the forward-propagation of the time-LSTM at the next frame is independent of the calculation of the layer-LSTM at the current frame; therefore the evaluation of the time-LSTM and the layer-LSTM can be handled in two separate threads in parallel, and the network computational time can be kept the same as that of the standard time-LSTM. With a layer-LSTM running through layers, a gated path is provided from the output layer to the bottom layer, reducing the gradient vanishing issue. We evaluate the proposed method by training various models with 30 thousand (k) hours of EN-US data pooled from Microsoft Cortana, Conversation, and xBox data, with mixed close-talk and far-field utterances. The proposed ltLSTM is significantly better than the standard multi-layer LSTM and residual LSTM.
The rest of the paper is organized as follows. In Section 2, we explore different LSTM structures: standard multi-layer LSTM, Residual LSTM, and the proposed ltLSTM. We also propose a new way to reduce the computational cost of LSTM by factorizing the gates calculation. We evaluate the proposed models in Section 3, and conclude our study in Section 4.
Exploring LSTM structures
In this section, we first introduce the standard multi-layer LSTM and residual LSTM (ResLSTM). Then, we describe our proposed layer trajectory LSTM. Finally, a factorized gate LSTM is proposed to reduce the computational cost of LSTM units.
LSTM
The standard LSTM is a time-LSTM which does temporal modeling via time recurrence by taking the output of the time-LSTM at the previous time step as the recurrent input of the time-LSTM at the current time step. To increase modeling power, multiple layers of LSTM units are stacked together to form a multi-layer LSTM, which is shown in Figure 1. At time step t, the vector formulas of the computation of the l-th layer LSTM units can be described as:
$i^l_t = \sigma(W^l_{ix} x^l_t + W^l_{ih} h^l_{t-1} + p^l_i \odot c^l_{t-1} + b^l_i)$    (1)

$f^l_t = \sigma(W^l_{fx} x^l_t + W^l_{fh} h^l_{t-1} + p^l_f \odot c^l_{t-1} + b^l_f)$    (2)

$c^l_t = f^l_t \odot c^l_{t-1} + i^l_t \odot \phi(W^l_{cx} x^l_t + W^l_{ch} h^l_{t-1} + b^l_c)$    (3)

$o^l_t = \sigma(W^l_{ox} x^l_t + W^l_{oh} h^l_{t-1} + p^l_o \odot c^l_t + b^l_o)$    (4)

$h^l_t = o^l_t \odot \phi(c^l_t)$    (5)
where $x^l_t$ is the input vector for the $l$-th layer, with

$x^l_t = \begin{cases} h^{l-1}_t, & \text{if } l > 1 \\ s_t, & \text{if } l = 1 \end{cases}$    (6)
$s_t$ is the speech spectrum input at time step $t$. The vectors $i^l_t$, $o^l_t$, $f^l_t$, $c^l_t$ are the activations of the input, output, forget gates, and memory cells, respectively. $h^l_t$ is the output of the time-LSTM. $W^l_{.x}$ and $W^l_{.h}$ are the weight matrices for the inputs $x^l_t$ and the recurrent inputs $h^l_{t-1}$, respectively. The $b^l_.$ are bias vectors. $p^l_i$, $p^l_o$, $p^l_f$ are parameter vectors associated with peephole connections. The functions $\sigma$ and $\phi$ are the logistic sigmoid and hyperbolic tangent nonlinearities, respectively. The operation $\odot$ represents element-wise multiplication of vectors.
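To make the recurrence concrete, here is a minimal NumPy sketch of one step of Eqs. (1)-(5); the parameter dictionary keys are our naming, not the paper's:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def time_lstm_step(x, h_prev, c_prev, p):
    """One step of the l-th time-LSTM layer, following Eqs. (1)-(5).
    p is a dict with weight matrices W_*, peephole vectors p_*, and biases b_*."""
    i = sigmoid(p["W_ix"] @ x + p["W_ih"] @ h_prev + p["p_i"] * c_prev + p["b_i"])
    f = sigmoid(p["W_fx"] @ x + p["W_fh"] @ h_prev + p["p_f"] * c_prev + p["b_f"])
    c = f * c_prev + i * np.tanh(p["W_cx"] @ x + p["W_ch"] @ h_prev + p["b_c"])
    o = sigmoid(p["W_ox"] @ x + p["W_oh"] @ h_prev + p["p_o"] * c + p["b_o"])
    h = o * np.tanh(c)
    return h, c
```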
From Figure 1, we can see that the output of a time-LSTM is used as the input of the time-LSTM at the same time step in the next layer and as the recurrent input of the time-LSTM at the next time step in the same layer. The last hidden layer's output is used to predict senone labels for senone classification. Therefore, the same output is used both for temporal modeling along the time axis and for target discrimination along the layer axis. However, these two purposes are indeed very different. Hence, the standard time-LSTM modeling may not be optimal.
Residual LSTM
Similar to the residual CNN [18], which has recently achieved great success in image classification, residual LSTM (ResLSTM) is very straightforward: a direct shortcut path is added across layers by changing Eq. (6) to Eq. (7), so that the gradient vanishing issue can be partially solved.

$x^l_t = \begin{cases} x^{l-1}_t + h^{l-1}_t, & \text{if } l > 1 \\ s_t, & \text{if } l = 1 \end{cases}$    (7)
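A small sketch of how the residual input of Eq. (7) threads through a stack of LSTM layers; it assumes the input features have already been projected to the same dimensionality as the layer outputs so the additions are well defined:

```python
def reslstm_forward(s_t, states, layers):
    """One frame through a stack of LSTM layers with residual input connections (Eq. 7).
    `layers` is a list of callables (x, h_prev, c_prev) -> (h, c); `states` holds the
    previous-frame (h, c) of each layer."""
    x = s_t
    new_states = []
    h = None
    for lstm, (h_prev, c_prev) in zip(layers, states):
        h, c = lstm(x, h_prev, c_prev)
        new_states.append((h, c))
        x = x + h          # Eq. (7): input to the next layer = this layer's input + output
    return h, new_states   # the top layer's output feeds the senone classifier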
We will use ResLSTM as a baseline model with skip connection in Section 3. Although ResLSTM can partially solve the gradient vanishing issue, it still has the same challenges as the standard time-LSTM -the output vector works for two very different purposes: temporal modeling and senone classification.
Layer trajectory LSTM
As discussed above, it may not be optimal that the output of time-LSTM serves both the purposes of temporal modeling and senone classification. In this study, we decouple these two purposes by proposing a layer trajectory LSTM (ltLSTM) which builds a layer-LSTM using the outputs from all the time-LSTM layers, shown in Figure 2. The weights are not shared between layers because sharing doesn't bring any computational benefit. The time-LSTM is used for the purpose of temporal modeling via time recurrence, while the layer-LSTM scans the outputs from multiple time-LSTM layers and uses the summarized layer trajectory information for final senone classification. With a layer-LSTM running through layers, a gated path is provided from the output layer to the bottom layer, reducing the gradient vanishing issue.
In ltLSTM, the formulation of time-LSTM is still the standard LSTM formulation, with Eqs. (1) - (5). As shown in Figure 2, there is no time recurrence between layer-LSTMs across different time steps. Hence, the formulation of layer-LSTM only has the recurrence across layers as:
$j^l_t = \sigma(U^l_{jh} h^l_t + U^l_{jg} g^{l-1}_t + q^l_j \odot m^{l-1}_t + d^l_j)$    (8)

$e^l_t = \sigma(U^l_{eh} h^l_t + U^l_{eg} g^{l-1}_t + q^l_e \odot m^{l-1}_t + d^l_e)$    (9)

$m^l_t = e^l_t \odot m^{l-1}_t + j^l_t \odot \phi(U^l_{sh} h^l_t + U^l_{sg} g^{l-1}_t + d^l_s)$    (10)

$v^l_t = \sigma(U^l_{vh} h^l_t + U^l_{vg} g^{l-1}_t + q^l_v \odot m^l_t + d^l_v)$    (11)

$g^l_t = v^l_t \odot \phi(m^l_t)$    (12)
The vectors $j^l_t$, $v^l_t$, $e^l_t$, $m^l_t$ are the activations of the input, output, forget gates, and memory cell of the layer-LSTM, respectively. $g^l_t$ is the output of the layer-LSTM. $U^l_{.h}$ and $U^l_{.g}$ are the weight matrices for the inputs $h^l_t$ and the recurrent inputs $g^{l-1}_t$, respectively. The $d^l_.$ are bias vectors, and $q^l_j$, $q^l_v$, $q^l_e$ are parameter vectors associated with peephole connections. Comparing Eqs. (1)-(5) with Eqs. (8)-(12), the biggest difference is that the recurrence now happens across layers via $g^{l-1}_t$ in the layer-LSTM, compared to the time recurrence via $h^l_{t-1}$ in the time-LSTM. The layer-LSTM uses the output of the time-LSTM at the current layer, $h^l_t$, as its input, compared to $x^l_t$ in the time-LSTM.

It is a common practice to deploy a complicated model by reducing parallel computational time, e.g., [19]. Because the forward-propagation of the time-LSTM at the next time step is independent of the calculation of the layer-LSTM at the current time step, the forward-propagation of the time-LSTM and the layer-LSTM can be handled in two separate threads in parallel, and the network computational time is the same as that of the standard time-LSTM, which operates in a layer-by-layer and frame-by-frame fashion. Another advantage of decoupling the time and layer operations in ltLSTM is that the layer-LSTM can be evaluated with batching [20], which was proposed to improve the runtime of feed-forward DNNs by evaluating the network scores of several time frames at the same time. However, batching cannot be applied to the standard time-LSTM because the input of the current frame comes from the output of the previous frame. Since there is no time recurrence between layer-LSTMs across different frames, batching can be applied to evaluate the layer-LSTM once the time-LSTM vectors have been calculated for multiple frames.
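A simplified sketch of one ltLSTM frame under these assumptions; `time_layers` and `layer_lstms` are callables implementing Eqs. (1)-(5) and (8)-(12), and `g0`, `m0` are the initial layer recurrences, which we assume to be zero vectors:

```python
def ltlstm_step(s_t, time_states, time_layers, layer_lstms, g0, m0):
    """One frame of ltLSTM: time-LSTMs handle temporal recurrence, then a layer-LSTM
    scans the per-layer outputs h^1_t ... h^L_t for senone classification."""
    x = s_t
    h_list, new_time_states = [], []
    for lstm, (h_prev, c_prev) in zip(time_layers, time_states):
        h, c = lstm(x, h_prev, c_prev)
        h_list.append(h)
        new_time_states.append((h, c))
        x = h                          # standard stacking between time-LSTM layers
    # The layer-LSTM recurrence runs across layers, not across time, so this loop is
    # independent of the next frame's time-LSTMs and can run in a separate thread.
    g, m = g0, m0                      # initial layer recurrences (zero vectors, our assumption)
    for layer_lstm, h in zip(layer_lstms, h_list):
        g, m = layer_lstm(h, g, m)     # g^{l-1}_t and m^{l-1}_t feed layer l
    return g, new_time_states          # the top layer's g feeds the senone classifier
```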
Comparison with grid LSTMs
Grid LSTM (gLSTM) [16] and prioritized grid LSTM (pgLSTM) [12,17] can be considered multidimensional LSTMs which arrange the LSTM memory cells along both the time and layer axes. They modify the LSTM units with a multidimensional formulation and still process the speech input in a step-by-step and layer-by-layer fashion, so the operations along the time and layer dimensions are mixed together. In contrast, ltLSTM decouples the jobs of temporal modeling with the time-LSTM and target classification with the layer-LSTM. The evaluation of the time-LSTM does not rely on the value of the layer-LSTM at the previous time step or lower layer. Hence, ltLSTM enjoys a clear computational advantage, as discussed in the previous section. Because of the function decoupling, it is not necessary to use LSTM units for modeling the layer dependency. We can simply replace layer-LSTM units with layer-DNN units or any other units, which gives more modeling flexibility [21].
Factorized gate LSTM
The computational cost of LSTM is always a concern. There are lots of attempts [22] to reduce the computational cost, such as getting low-rank matrices with singular value decomposition (SVD) [23,24], model compression via teacher-student (T/S) learning [25] or knowledge distillation [26], scalar quantization [20], and vector quantization [27] etc. The computational cost can also be reduced by exploring different model structures [7,28] or using lower frame rate strategies [7,29].
In this section, we focus on reducing the size of the weight matrices used to calculate the input, output, and forget gates in the LSTM unit. Those matrices are usually of large size, resulting in the major computational cost in network evaluation. For example, typical LSTM-RNN systems usually have around 1024 memory cells [4,30] in the LSTM unit, which means the dimension of the gate vectors is 1024. Usually a linear projection layer is applied to the LSTM output vector to reduce its dimension, for example to 512. Hence the two weight matrices, $W^l_{ix}$ and $W^l_{ih}$, used to calculate the input gate vector in Eq. (1) are of dimension 1024x512.
In Eq. (15), we factorize the input gate vector calculation as the square root of the outer product of two vectors, $\acute{i}^l_t$ and $\{\grave{i}^l_t\}^T$, which are calculated by Eqs. (13) and (14). vec(·) is the operation that squashes a k×k matrix into an m-dimensional vector, where m = k×k. Peephole connections are not used in Eqs. (13) and (14) because of the dimension mismatch between the state vector (m dimensions) and the factorized gate vectors (k dimensions).
$\acute{i}^l_t = \sigma(\acute{W}^l_{ix} x^l_t + \acute{W}^l_{ih} h^l_{t-1} + \acute{b}^l_i)$    (13)

$\grave{i}^l_t = \sigma(\grave{W}^l_{ix} x^l_t + \grave{W}^l_{ih} h^l_{t-1} + \grave{b}^l_i)$    (14)

$i^l_t = \text{vec}\big(\sqrt{\acute{i}^l_t \, \{\grave{i}^l_t\}^{T}}\big)$    (15)
For the above example, instead of having two 1024x512 matrices for the input gate calculation in Eq. (1), only four 32x512 matrices (1024 = 32x32) are involved in Eqs. (13) and (14). The computational cost of the input gate vector calculation is thus reduced to only 1/16 of the original.
Similar formulations can be applied to the forget and output gates of the time-LSTM in Eqs. (2) and (4), and the input, forget, and output gates of layer-LSTM in Eqs. (8), (9) and (11).
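A minimal NumPy sketch of the factorized input gate of Eqs. (13)-(15); the parameter dictionary keys (Wa_*, Wg_*, ba_*, bg_*) are our naming for the two factorized weight sets:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def factorized_input_gate(x, h_prev, p):
    """Factorized input gate: two small k-dim gate vectors whose outer product, taken
    element-wise square root, is flattened into the full k*k-dim gate vector."""
    i_acute = sigmoid(p["Wa_ix"] @ x + p["Wa_ih"] @ h_prev + p["ba_i"])  # Eq. (13), k-dim
    i_grave = sigmoid(p["Wg_ix"] @ x + p["Wg_ih"] @ h_prev + p["bg_i"])  # Eq. (14), k-dim
    outer = np.outer(i_acute, i_grave)          # k x k matrix
    return np.sqrt(outer).reshape(-1)           # Eq. (15): vec(sqrt(outer)) -> k*k dims
```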
Experiments
We compare the standard multi-layer LSTM, ResLSTM, and the proposed ltLSTM in this section. All these models use standard LSTM units as basic building blocks, different from grid LSTM, which modifies the LSTM units with a multi-dimensional formulation. All models were trained with 30 thousand (k) hours of anonymized and transcribed Microsoft production data, including Cortana [30], xBox [31], and Conversation data, which is a mixture of close-talk and far-field utterances from a variety of devices. The first model was built as a 4-layer LSTM-RNN with a projection layer, following our usual recipe [30]. Each LSTM layer has 1024 hidden units, and the output size of each LSTM layer is reduced to 512 using a linear projection layer. The output layer has 9404 nodes, modeling the senone labels. The target senone label is delayed by 5 frames as in [4]. The input feature is an 80-dimension log Mel filter bank. We applied frame skipping [7] to reduce the runtime cost. Note that in this study, we only compare the baseline full-rank cross-entropy models. If we want to deploy models into production, we would further apply SVD training [23] and sequence discriminative training using the maximum mutual information (MMI) criterion with F-smoothing [32], as in the systems described in [30]. The LM is a 5-gram with around 100 million (M) n-grams. We evaluate all models with Cortana and Conversation test sets. Both sets contain mixed close-talk and far-field utterances, with 439k and 111k words, respectively. As shown in Table 1, the 4-layer LSTM model obtained 10.37% and 19.41% WER on these two test sets, respectively.
Then, we simply increased the number of LSTM layers to 6 and 10. Increasing from 4 layers to 6 layers, the multi-layer LSTM improved across all tasks, with 9.85% and 19.20% WERs on the Cortana and Conversation test sets, respectively. However, when increasing to 10 layers, the multi-layer LSTM degraded substantially, consistent with observations in the literature [11,12].
The 6-layer ResLSTM obtained WERs very similar to those of the 6-layer LSTM, with an improvement on the Conversation test set but a slight degradation on the Cortana test set. However, different from the behavior of the 10-layer LSTM, consistent improvement was obtained with the 10-layer ResLSTM, which got 9.68% and 18.15% WERs on the Cortana and Conversation test sets, respectively. This clearly demonstrates the effectiveness of skip connections for reducing the gradient vanishing issue.
Finally, the 6-layer ltLSTM achieved significant improvements over all models, obtaining 9.28% and 17.47% WERs on the Cortana and Conversation test sets, respectively. This represents 5.8% and 9.0% relative WER reduction from the 6-layer LSTM, or 4.1% and 3.7% relative WER reduction from the 10-layer ResLSTM, on the Cortana and Conversation test sets, respectively.
In Table 2, we examine the total and parallel computational costs of all LSTM, ResLSTM, and ltLSTM models. Both LSTM and ResLSTM operate in a frame-by-frame and layerby-layer fashion, therefore the total and parallel computational costs are same. As described in Section 2.3, the layer-LSTM and time-LSTM inside ltLSTM can be evaluated in parallel as there is no time recurrence between layer-LSTMs at different time steps. As a result, the parallel computational cost is about 31 M per frame, which is the same as that of the 6-layer LSTM. We applied factorized gate LSTM described in Section 2.5 to the 6-layer ltLSTM and evaluated the method in Table 3. We factorized input gates in both time and layer LSTMs by reducing the calculation of a 1024-dimension gate vector into the calculation of two 32-dimension gate vectors as in Eqs. (13), (14), and (15). We also applied similar operation to factorize output and forget gates in both time and layer LSTMs. All the factorized gate operation increased WER. Clearly, no magic happens even with the factorization in Eq.(15) because two 32-dimension gate vectors carry much less information than what a 1024-dimension gate vector can carry. The impact of factorizing forget gate is the smallest, with relative 2.3% and 3.7% WER increase from the full version of ltLSTM without any factorization, although it is still better than all the LSTM and ResLSTM models. Factorizing input gates has the biggest degradation. Given the loss, we didn't evaluate the setup which factorizes all the gates together. With single gate factorization, the parallel computational cost is reduced to 25 M operation per frame which is even lower than that of the 6-layer LSTM or ResLSTM while the WER of factorized gate ltLSTM is clearly better than that of the 6-layer LSTM or ResLSTM.
Conclusions and Future Works
In this paper, we proposed a novel model called ltLSTM which scans the outputs of a multi-layer time-LSTM with a layer-LSTM to learn layer trajectory information for classification. This model decouples the tasks of temporal modeling and target classification by using the time-LSTM and the layer-LSTM, respectively. It brings benefits in both accuracy and runtime. Trained with 30k hours of speech data, the 6-layer ltLSTM improves on the baseline 6-layer LSTM with relative 5.8% and 9.0% WER reductions on the Cortana and Conversation test sets, respectively, and reduces the WERs of the 10-layer ResLSTM by 4.1% and 3.7% relative. With parallel computation, the model evaluation time of the 6-layer ltLSTM is kept the same as that of the 6-layer LSTM. Furthermore, we proposed to factorize the gates inside the LSTM units to reduce the runtime cost. Applied to the 6-layer ltLSTM, the model has a smaller parallel computational cost and better accuracy than the 6-layer LSTM or ResLSTM.
Recently, we blended an attention mechanism [33] into CTC modeling and achieved very good accuracy improvements [34,35]. We are now using a similar idea to further improve ltLSTM. As noted in Section 2.4, it is not necessary to use LSTM units for modeling the layer dependency. We are working on a generalized ltLSTM which can employ any units for modeling the layer dependency. All these works will be reported in [21].
Figure 1: Flowchart of the multi-layer time-LSTM (T-LSTM). The output of a T-LSTM is used as the input of the T-LSTM at the same time step in the next layer and as the recurrent input of the T-LSTM at the next time step in the same layer.
Figure 2: Flowchart of the layer trajectory LSTM (ltLSTM). The layer-LSTM (L-LSTM) is used to scan the outputs of the time-LSTM (T-LSTM) across all layers at the current time step to get summarized layer trajectory information for senone classification. There is no time recurrence for the L-LSTM; time recurrence only exists between T-LSTMs at different time steps.
Table 1: WERs of LSTM, ResLSTM, and ltLSTM models on the Cortana and Conversation test sets. Both test sets are mixed with close-talk and far-field utterances.

Model               Cortana   Conversation
4-layer LSTM        10.37     19.41
6-layer LSTM         9.85     19.20
10-layer LSTM       10.58     19.92
6-layer ResLSTM      9.99     18.85
10-layer ResLSTM     9.68     18.15
6-layer ltLSTM       9.28     17.47
Table 2: Total and parallel per-thread computational costs of LSTM, ResLSTM, and ltLSTM models in terms of million (M) operations per frame.

Model               Total (M)   Parallel per thread (M)
4-layer LSTM        22          22
6-layer LSTM        31          31
10-layer LSTM       49          49
6-layer ResLSTM     31          31
10-layer ResLSTM    49          49
6-layer ltLSTM      57          31
Table 3: WERs of the 6-layer ltLSTM and its factorized gate versions on the Cortana and Conversation test sets. Both test sets are mixed with close-talk and far-field utterances.

Model               Cortana   Conversation
full                9.28      17.47
factorized input    9.62      18.31
factorized output   9.50      18.11
factorized forget   9.57      17.97
Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. G Hinton, L Deng, D Yu, G E Dahl, A Mohamed, N Jaitly, A Senior, V Vanhoucke, P Nguyen, T N Sainath, IEEE Signal Processing Magazine. 296G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath et al., "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82-97, 2012.
Long short-term memory. S Hochreiter, J Schmidhuber, Neural computation. 98S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural computation, vol. 9, no. 8, pp. 1735-1780, 1997.
Speech recognition with deep recurrent neural networks. A Graves, A Mohamed, G Hinton, ICASSP. A. Graves, A. Mohamed, and G. Hinton, "Speech recognition with deep recurrent neural networks," in ICASSP, 2013, pp. 6645- 6649.
Long short-term memory recurrent neural network architectures for large scale acoustic modeling. H Sak, A Senior, F Beaufays, INTERSPEECH. H. Sak, A. Senior, and F. Beaufays, "Long short-term memory re- current neural network architectures for large scale acoustic mod- eling." in INTERSPEECH, 2014, pp. 338-342.
Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition. X Li, X Wu, Acoustics, Speech and Signal Processing (ICASSP), IEEE International Conference on. IEEEX. Li and X. Wu, "Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recog- nition," in Acoustics, Speech and Signal Processing (ICASSP), IEEE International Conference on. IEEE, 2015, pp. 4520-4524.
On speaker adaptation of long short-term memory recurrent neural networks. Y Miao, F Metze, INTERSPEECH. Y. Miao and F. Metze, "On speaker adaptation of long short-term memory recurrent neural networks." in INTERSPEECH, 2015, pp. 1101-1105.
Simplifying long short-term memory acoustic models for fast training and decoding. Y Miao, J Li, Y Wang, S Zhang, Y Gong, ICASSP. Y. Miao, J. Li, Y. Wang, S. Zhang, and Y. Gong, "Simplifying long short-term memory acoustic models for fast training and de- coding," in ICASSP, 2016.
LSTM time and frequency recurrence for automatic speech recognition. J Li, A Mohamed, G Zweig, Y Gong, ASRUJ. Li, A. Mohamed, G. Zweig, and Y. Gong, "LSTM time and frequency recurrence for automatic speech recognition," in ASRU, 2015.
Exploring multidimensional LSTMs for large vocabulary ASR. ICASSP. --, "Exploring multidimensional LSTMs for large vocabulary ASR," in ICASSP, 2016.
Modeling time-frequency patterns with LSTM vs. convolutional architectures for LVCSR tasks. T N Sainath, B Li, IN-TERSPEECH. T. N. Sainath and B. Li, "Modeling time-frequency patterns with LSTM vs. convolutional architectures for LVCSR tasks," in IN- TERSPEECH, 2016.
Highway long short-term memory rnns for distant speech recognition. Y Zhang, G Chen, D Yu, K Yao, S Khudanpur, J Glass, ICASSP. Y. Zhang, G. Chen, D. Yu, K. Yao, S. Khudanpur, and J. Glass, "Highway long short-term memory rnns for distant speech recog- nition," ICASSP, 2016.
A prioritized grid long shortterm memory RNN for speech recognition. W.-N Hsu, Y Zhang, J Glass, Spoken Language Technology Workshop (SLT). IEEEW.-N. Hsu, Y. Zhang, and J. Glass, "A prioritized grid long short- term memory RNN for speech recognition," in Spoken Language Technology Workshop (SLT), 2016 IEEE. IEEE, 2016, pp. 467- 473.
Multidimensional residual learning based on recurrent neural networks for acoustic modeling. Y Zhao, S Xu, B Xu, IN-TERSPEECH. Y. Zhao, S. Xu, and B. Xu, "Multidimensional residual learning based on recurrent neural networks for acoustic modeling," in IN- TERSPEECH, 2016, pp. 3419-3423.
Residual LSTM: Design of a deep recurrent architecture for distant speech recognition. J Kim, M El-Khamy, J Lee, arXiv:1701.03360arXiv preprintJ. Kim, M. El-Khamy, and J. Lee, "Residual LSTM: Design of a deep recurrent architecture for distant speech recognition," arXiv preprint arXiv:1701.03360, 2017.
Highway-LSTM and recurrent highway networks for speech recognition. G Pundak, T N Sainath, Proc. of INTER-SPEECH. of INTER-SPEECHG. Pundak and T. N. Sainath, "Highway-LSTM and recurrent highway networks for speech recognition," in Proc. of INTER- SPEECH, 2017.
Grid long shortterm memory. N Kalchbrenner, I Danihelka, A Graves, arXiv:1507.01526arXiv preprintN. Kalchbrenner, I. Danihelka, and A. Graves, "Grid long short- term memory," arXiv preprint arXiv:1507.01526, 2015.
Automatic speech recognition of Arabic multi-genre broadcast media. M Najafian, W.-N Hsu, A Ali, J Glass, Automatic Speech Recognition and Understanding Workshop. ASRU2017M. Najafian, W.-N. Hsu, A. Ali, and J. Glass, "Automatic speech recognition of Arabic multi-genre broadcast media," in Automatic Speech Recognition and Understanding Workshop (ASRU), 2017
IEEE, 2017, pp. 353-359.
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, arXiv:1512.03385arXiv preprintK. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," arXiv preprint arXiv:1512.03385, 2015.
Reducing the computational complexity of two-dimensional LSTMs. B Li, T N Sainath, Proc. Interspeech. InterspeechB. Li and T. N. Sainath, "Reducing the computational complexity of two-dimensional LSTMs," in Proc. Interspeech, 2017.
Improving the speed of neural networks on CPUs. V Vanhoucke, A Senior, M Z Mao, Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop. Deep Learning and Unsupervised Feature Learning NIPS WorkshopV. Vanhoucke, A. Senior, and M. Z. Mao, "Improving the speed of neural networks on CPUs," in Proc. Deep Learning and Unsu- pervised Feature Learning NIPS Workshop, 2011.
Exploring layer trajectory LSTM with depth processing units and attention. J Li, L Lu, C Liu, Y Gong, submitted to SLTJ. Li, L. Lu, C. Liu, and Y. Gong, "Exploring layer trajectory LSTM with depth processing units and attention," in submitted to SLT, 2018.
Recent Progresses in Deep Learning Based Acoustic Models. D Yu, J Li, IEEE/CAA J. of Autom. Sinica. 43D. Yu and J. Li, "Recent Progresses in Deep Learning Based Acoustic Models," IEEE/CAA J. of Autom. Sinica., vol. 4, no. 3, pp. 399-412, Jul. 2017.
Restructuring of deep neural network acoustic models with singular value decomposition. J Xue, J Li, Y Gong, INTER-SPEECH. J. Xue, J. Li, and Y. Gong, "Restructuring of deep neural network acoustic models with singular value decomposition," in INTER- SPEECH, 2013, pp. 2365-2369.
On the compression of recurrent neural networks with an application to LVCSR acoustic modeling for embedded speech recognition. R Prabhavalkar, O Alsharif, A Bruguier, L Mcgraw, Acoustics, Speech and Signal Processing (ICASSP), IEEE International Conference on. IEEER. Prabhavalkar, O. Alsharif, A. Bruguier, and L. McGraw, "On the compression of recurrent neural networks with an application to LVCSR acoustic modeling for embedded speech recognition," in Acoustics, Speech and Signal Processing (ICASSP), IEEE In- ternational Conference on. IEEE, 2016, pp. 5970-5974.
Learning small-size DNN with output-distribution-based criteria. J Li, R Zhao, J.-T Huang, Y Gong, INTERSPEECH. J. Li, R. Zhao, J.-T. Huang, and Y. Gong, "Learning small-size DNN with output-distribution-based criteria." in INTERSPEECH, 2014, pp. 1910-1914.
Distilling the knowledge in a neural network. G Hinton, O Vinyals, J Dean, arXiv:1503.02531arXiv preprintG. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," arXiv preprint arXiv:1503.02531, 2015.
Small-footprint high-performance deep neural network-based speech recognition using split-vq. Y Wang, J Li, Y Gong, Acoustics, Speech and Signal Processing (ICASSP), IEEE International Conference on. IEEEY. Wang, J. Li, and Y. Gong, "Small-footprint high-performance deep neural network-based speech recognition using split-vq," in Acoustics, Speech and Signal Processing (ICASSP), IEEE Inter- national Conference on. IEEE, 2015, pp. 4984-4988.
LSTM: A search space odyssey. K Greff, R K Srivastava, J Koutník, B R Steunebrink, J Schmidhuber, IEEE transactions on neural networks and learning systems. 28K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber, "LSTM: A search space odyssey," IEEE trans- actions on neural networks and learning systems, vol. 28, no. 10, pp. 2222-2232, 2017.
Lower frame rate neural network acoustic models. G Pundak, T N Sainath, INTERSPEECH. G. Pundak and T. N. Sainath, "Lower frame rate neural network acoustic models," in INTERSPEECH, 2016, pp. 22-26.
Developing far-field speaker system via teacher-student learning. J Li, R Zhao, Z Chen, Proc. ICASSP. ICASSPJ. Li, R. Zhao, Z. Chen et al., "Developing far-field speaker sys- tem via teacher-student learning," in Proc. ICASSP, 2018.
SVD-based universal DNN modeling for multiple scenarios. C Liu, J Li, Y Gong, Proc. of INTERSPEECH. of INTERSPEECHC. Liu, J. Li, and Y. Gong, "SVD-based universal DNN modeling for multiple scenarios," in Proc. of INTERSPEECH, 2015.
Error back propagation for sequence training of context-dependent deep networks for conversational speech transcription. H Su, G Li, D Yu, F Seide, Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEEH. Su, G. Li, D. Yu, and F. Seide, "Error back propagation for sequence training of context-dependent deep networks for con- versational speech transcription," in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE, 2013, pp. 6664-6668.
Neural machine translation by jointly learning to align and translate. D Bahdanau, K Cho, Y Bengio, arXiv:1409.0473arXiv preprintD. Bahdanau, K. Cho, and Y. Bengio, "Neural machine trans- lation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.
Advancing connectionist temporal classification with attention modeling. A Das, J Li, R Zhao, Y Gong, Proc. ICASSP. ICASSPA. Das, J. Li, R. Zhao, and Y. Gong, "Advancing connection- ist temporal classification with attention modeling," in Proc. ICASSP, 2018.
Advancing Acousticto-Word CTC Model. J Li, G Ye, A Das, R Zhao, Y Gong, Proc. ICASSP. ICASSPJ. Li, G. Ye, A. Das, R. Zhao, and Y. Gong, "Advancing Acoustic- to-Word CTC Model," in Proc. ICASSP, 2018.
| [] |
[
"Semi-Supervised Translation with MMD Networks",
"Semi-Supervised Translation with MMD Networks"
] | [
"Mark Hamilton "
] | [] | [] | This work aims to improve semi-supervised learning in a neural network architecture by introducing a hybrid supervised and unsupervised cost function. The unsupervised component is trained using a differentiable estimator of the Maximum Mean Discrepancy (MMD) distance between the network output and the target dataset. We introduce the notion of an n-channel network and several methods to improve performance of these nets based on supervised preinitialization, and multi-scale kernels. This work investigates the effectiveness of these methods on language translation where very few quality translations are known a priori. We also present a thorough investigation of the hyper-parameter space of this method on both synthetic data. | null | [
"https://arxiv.org/pdf/1810.11906v1.pdf"
] | 53,098,325 | 1810.11906 | 64e9d39bd61666c82ed597a10bac080e6ac220c8 |
Semi-Supervised Translation with MMD Networks
Mark Hamilton
Semi-Supervised Translation with MMD Networks
This work aims to improve semi-supervised learning in a neural network architecture by introducing a hybrid supervised and unsupervised cost function. The unsupervised component is trained using a differentiable estimator of the Maximum Mean Discrepancy (MMD) distance between the network output and the target dataset. We introduce the notion of an n-channel network and several methods to improve performance of these nets based on supervised preinitialization, and multi-scale kernels. This work investigates the effectiveness of these methods on language translation where very few quality translations are known a priori. We also present a thorough investigation of the hyper-parameter space of this method on both synthetic data.
Introduction
Often in data analysis, one has a small set of quality labeled data and a large pool of unlabeled data. It is the task of semi-supervised learning to make as much use of this unlabeled data as possible. In the low-data regime, the aim is to create models that perform well after seeing only a handful of labeled examples. This is often the case with machine translation and dictionary completion, as it can be difficult to construct a large number of labeled instances or a sufficiently large parallel corpus. However, this domain offers a huge number of monolingual corpora to make high quality language embeddings (Tiedemann, 2012; Al-Rfou et al., 2013). The methods presented in this paper are designed to take into consideration both labeled and unlabeled information when training a neural network. The supervised component uses the standard alignment-based loss functions, and the unsupervised component attempts to match the distribution of the network's output to the target data's distribution by minimizing the Maximum Mean Discrepancy (MMD) "distance" between the two distributions. This has the effect of placing a prior on translation methods that preserve the distributional structure of the two datasets. This limits the model space and increases the quality of the mapping, allowing one to use less labeled data.
Related methods, such as auto-encoder pre-initialization (Erhan et al., 2010), first learn the structure of the input and then learn a mapping. In this setup, unsupervised knowledge enters through learning good features to describe the dataset. The MMD method of unsupervised training directly learns a mapping between the two spaces that aligns all of the moments of the mapped data and the target data. This method can be used to improve any semi-supervised mapping problem, such as mappings between languages (Dinu et al., 2014), image labeling, fMRI analysis (Mitchell et al., 2008), and any other domain where transformations need to be learned between data. This investigation aims to study these methods in the low-data regime, with the eventual goal of studying dying or lost languages, where very few supervised training examples exist.
Background
Maximum Mean Discrepancy
The Maximum Mean Discrepancy (MMD) put forth by (Gretton et al., 2012a) is a measure of distance between two distributions p, q. More formally, letting x, y be variables defined on a topological space $\mathcal{X}$ with Borel measures p, q, and $\mathcal{F}$ be a class of functions from $\mathcal{X} \to \mathbb{R}$, the MMD semi-metric is defined as:

$$\mathrm{MMD}_{\mathcal{F}}(p, q) = \sup_{f \in \mathcal{F}} \left( \mathbb{E}_{x \sim p} f(x) - \mathbb{E}_{y \sim q} f(y) \right) \tag{1}$$
where $\mathbb{E}$ is the first raw moment, defined as:

$$\mathbb{E}_{x \sim p} f(x) = \int_{\mathcal{X}} f(x)\, dp \tag{2}$$
Intuitively, the MMD is a measure of distance which uses a class of functions as a collection of "trials" to put the two distributions through. The distributions pass a trial if the function evaluated on both distributions has the same expectation or mean. Two distributions fail a trial if they yield different means; the size of the difference measures how much the distributions fail that trial. Identical distributions should yield the same images when put through each function in $\mathcal{F}$, so the means (first moments) of the images should also be identical. Conversely, if the function class is "large enough" this method can distinguish between any two probability distributions that differ, making the MMD a semi-metric on the space of probability distributions. A unit ball in a Reproducing Kernel Hilbert Space (RKHS) is sufficient to discern any two distributions provided the kernel, k, is universal (Cortes et al., 2008). If $\mathcal{F}$ is equal to a unit ball in kernel space, Gretton et al. (2012a) showed that the following is an unbiased estimator of the MMD:
$$\mathrm{MMD}^2_u(X, Y) = \frac{1}{m(m-1)} \sum_{i=1}^{m} \sum_{j \neq i}^{m} k(x_i, x_j) + \frac{1}{n(n-1)} \sum_{i=1}^{n} \sum_{j \neq i}^{n} k(y_i, y_j) - \frac{2}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} k(x_i, y_j) \tag{3}$$
If the kernel function is differentiable, this implies that the estimator of the MMD is differentiable, allowing one to use it as a loss function that can be optimized with gradient descent.
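To make Equation (3) concrete, the following is a minimal NumPy sketch of the unbiased estimator with a single Gaussian kernel; it is an illustration only (the kernel choice and the function names are ours), not the Theano implementation used in this work.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel matrix: k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2))
    sq_dists = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd2_unbiased(X, Y, sigma=1.0):
    # Unbiased estimator of MMD^2 between samples X (m x d) and Y (n x d), Equation (3)
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    # Within-sample sums exclude the diagonal (j != i)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    term_xy = 2 * Kxy.mean()
    return term_x + term_y - term_xy
```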
MMD Networks
The differentiability of the MMD estimator allows it to be used as a loss function in a feed-forward network. Li et al. (2015) showed that by using the MMD distance as a loss function in a neural net N, one can learn a transformation that maps a distribution of points $X = (x_i)_1^n$ in $\mathbb{R}^d$ to another distribution $Y = (y_i)_1^m$ in $\mathbb{R}^e$ while approximately minimizing the MMD distance between the image of X, N(X), and Y.
$$\ell_{\mathrm{MMD}}(X, Y, N) = \mathrm{MMD}^2_u(N(X), Y) \tag{4}$$
This loss function allows the net to learn transformations of probability distributions in a completely unsupervised manner. Furthermore, the MMD-net can also be used to create generative models, i.e., mappings from a simple distribution to a target distribution (Li et al., 2015), where "simple" usually means easy to sample from, or a maximum-entropy distribution. Often, a multivariate uniform or Gaussian source distribution is used in these generative models. This loss function can be optimized via mini-batch stochastic gradient descent, though the samples from X and Y need not be paired in any way. To avoid over-fitting, the mini-batches for X and Y should be sampled independently, which this paper refers to as "unpaired" mini-batching.
Methods
n-Channel Networks
This work introduces a generalization of a feed-forward net, called an n-channel net. This architecture allows an unsupervised loss term that requires unpaired mini-batching to be mixed with the paired mini-batching scheme of a standard feed-forward network.
An n-channel net is a collection of n networks with tied weights that operate on n separate datasets $(X_i, Y_i)_1^n$. More formally, an n-channel net is a mapping:

$$N^n : (\mathbb{R}^d)^n \to (\mathbb{R}^e)^n \tag{5}$$

defined as:

$$N^n\big((X_i)_1^n\big) \equiv \big(N(X_i)\big)_1^n \tag{6}$$
where $N : \mathbb{R}^d \to \mathbb{R}^e$ is a feed-forward network. Each channel of the network can have its own loss function and be fed with a separate data source. Most importantly, these separate data sources can be trained in a paired or unpaired manner.
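As a rough sketch of the weight tying in Equation (6), the snippet below applies one shared one-hidden-layer network N to each of the n channels; the hidden size, the ReLU nonlinearity, and the class name are assumptions made only for illustration.

```python
import numpy as np

class NChannelNet:
    # One feed-forward net N whose weights are shared (tied) across all n channels.
    def __init__(self, d, e, hidden=128, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(d, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(hidden, e))
        self.b2 = np.zeros(e)

    def forward_single(self, X):
        # N : R^d -> R^e applied row-wise to a batch X
        h = np.maximum(0, X @ self.W1 + self.b1)
        return h @ self.W2 + self.b2

    def forward(self, channels):
        # N^n((X_1, ..., X_n)) = (N(X_1), ..., N(X_n)), Equation (6)
        return [self.forward_single(X) for X in channels]
```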
A Semi-Supervised MMD-Net
In many applications where one is interested in estimating a transformation between data spaces, one has a small labeled dataset (X, Y) and large, unlabeled datasets (S, T). Throughout the literature, MMD networks have only been applied to the case of unpaired data (Li et al., 2015). We expand on this work by augmenting the completely unsupervised MMD distance with a semi-supervised alignment term. More formally, if one has a collection of k paired vectors $(x_i, y_i)_1^k$ with $x_i \in X$ and $y_i \in Y$ that should be aligned through the transformation N, one can use the standard loss function:
$$\ell_{\mathrm{alignment}}(X, Y, N) = \sum_{i=1}^{k} \left\| N(x_i) - y_i \right\| \tag{7}$$
where $\|\cdot\|$ is any differentiable norm in $\mathbb{R}^d$; this work uses the standard $\ell_2$ vector norm. This is the standard norm used in regression, where the goal of the network is to minimize the distance between the network output $N(x_i)$ and the observed responses $y_i$.
Using a hyperparameter, we can blend the cost functions of the supervised alignment loss and the unsupervised MMD loss. The full cost function for the MMD network then becomes:

$$\ell(X, Y, S, T, N) = \alpha_{\mathrm{pair}}\, \ell_{\mathrm{alignment}}(X, Y, N) + (1 - \alpha_{\mathrm{pair}})\, \ell_{\mathrm{MMD}}(S, T, N) \tag{8}$$
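A hedged sketch of Equation (8), reusing the mmd2_unbiased and NChannelNet sketches above; the function name and the default kernel scale are our own choices.

```python
import numpy as np

def semi_supervised_loss(net, X_pair, Y_pair, S_unpaired, T_unpaired,
                         alpha_pair=0.5, sigma=1.0):
    # Supervised alignment term (Equation 7): sum of l2 distances on the paired batch
    alignment = np.sum(np.linalg.norm(net.forward_single(X_pair) - Y_pair, axis=1))
    # Unsupervised term (Equation 4): MMD^2 between mapped unpaired source and target batches
    mmd = mmd2_unbiased(net.forward_single(S_unpaired), T_unpaired, sigma)
    # Blended cost (Equation 8)
    return alpha_pair * alignment + (1 - alpha_pair) * mmd
```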
Supervised Pre-Initialization
The MMD term of the cost function scales as $O(M^2)$, where M is the size of the mini-batch. This significantly increases training time for large batch sizes, slowing convergence in wall-clock time. To mitigate this effect, we first train the network until convergence with only the supervised term of the cost function. Once converged, we then switch to the semi-supervised cost function.
This also helps the network avoid local minima, as it already starts close to the optimal solution. Because the MMD cost function is inherently unpaired, it is susceptible to getting stuck in local minima when there are multiple ways to map the mass of one probability distribution into another distribution. We say that a mapping between the supports, $f : \mathcal{X} \to \mathcal{Y}$, is an MMD-mode from distribution p to q if $f(p) \sim q$.
Here f(p) is the distribution formed by sampling from p and then applying f. These modes coincide with critical points of the $\mathrm{MMD}^2_u$ cost function and are therefore tough to escape with gradient descent methods. As the class of functions represented by the network increases, more distinct MMD-modes arise. This increases the number of critical points, though these probably tend to be saddle points rather than local minima as the dimensionality of the function space increases (Dauphin et al., 2014). One can escape these local minima by increasing $\alpha_{\mathrm{pair}}$ to the point where the signal from the supervised term overcomes the signal from the unsupervised cost function. However, if the network is within the pull of the correct minimum, it is often better to rely on the robust unsupervised signal than on the noisy supervised signal, which requires a small $\alpha_{\mathrm{pair}}$. We found that supervised pre-training helped guide the network parameters to within the basin of attraction of the correct unsupervised minimum. From there the unsupervised signal was much more reliable and led to better results on synthetic and language datasets. Furthermore, on all datasets the supervised warm-start greatly reduced fitting time, as convergence of the expensive MMD cost function needed fewer optimization steps. Future work could involve annealing the supervised term to a small number, though this would eliminate the aforementioned computational speedup.
To demonstrate the effect of pre-initialization, we show the unbiased MMD estimator for a simple synthetic experiment. We generate two datasets of two-dimensional points. The first, shown in Figure 1 (left), is sampled from a uniform distribution on the unit-square support centered at (0, 0). To generate a simple target, shown in Figure 1 (middle), we rotate the source cloud of points by an angle $\theta^* = 255°$ and add a small Gaussian noise term. Figure 1 (right) shows that the MMD loss, as a function of the angle of the rotation transformation, has several modes caused by the symmetries of the square. To simulate a very noisy MSE, we use the MSE of one randomly sampled point and its respective pair. The noisy MSE loss function has two local minima, and its global minimum $\hat{\theta}$ is within the correct basin of attraction of the unsupervised cost function. This basin of attraction of the unsupervised cost has a minimum that is indistinguishable from the correct value of theta and much more accurate than the supervised loss term.
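The following sketch, reusing mmd2_unbiased from above, reproduces the spirit of this experiment; the sample size, noise scale, and kernel scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Source: 500 points sampled uniformly from the unit square centred at the origin
X = rng.uniform(-0.5, 0.5, size=(500, 2))

def rotate_cw(points, theta_deg):
    # Rotate row vectors clockwise by theta_deg degrees
    t = np.deg2rad(theta_deg)
    R = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
    return points @ R.T

# Target: source rotated clockwise by 255 degrees plus a small Gaussian noise term
Y = rotate_cw(X, 255) + rng.normal(scale=0.1, size=X.shape)

# Scan candidate rotation angles; the resulting MMD curve has local minima at the
# symmetries of the square, while only one of them matches the true rotation.
thetas = np.linspace(0, 360, 73)
mmd_curve = [mmd2_unbiased(rotate_cw(X, th), Y, sigma=0.3) for th in thetas]
```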
Choice of Kernel
The $\mathrm{MMD}_{\mathcal{F}}$ is able to differentiate between any two distributions if the function class $\mathcal{F}$ is a unit ball in the reproducing kernel Hilbert space (RKHS) of a universal kernel (Cortes et al., 2008). One of the simplest and most commonly used universal kernels is the Gaussian or radial basis function kernel, which excels at representing smooth functions.
$$k_\sigma(x, y) = \exp\left( -\frac{\| x - y \|^2}{2\sigma^2} \right) \tag{9}$$
The parameter σ controls the width of the Gaussian, and needs to be set properly for good performance. If σ is too low, each point's local neighborhood will be effectively empty, and the gradients will vanish. If it is too high, every point will be in each point's local neighborhood and the kernel will not have enough resolution to see the details of the distribution. In this scenario, the gradients vanish. We found that σ was one of the most important hyperparameters for the success of the method. In both our synthetic data and natural language examples, we found that the method performed well in a small window of kernel scale settings.
To improve the robustness of this method, this investigation used the following multi-scale Gaussian kernel:

$$k(x, y) = \sum_{i=0}^{n} c_i\, k_{\sigma_i}(x, y), \quad \text{where } c_i = 1,\; \sigma_i = s \cdot 10^{\,w(i/n) - w/2},\; w = 4,\; n = 10.$$
The scalar s is the average scale of the multi-scale kernel, the width w controls the width of the frequency range covered by the kernel, and n controls how many samples are taken from this range. Choosing a larger n improves performance, as there are more scales in the kernel, but increases computation time. By including multiple scales in the kernel, the gradients from the larger kernels will first move the parameters to a region where the distributions are aligned at a large scale; as they begin to vanish, the smaller-scale gradients become more relevant. Setting w = 4 allows the kernel to be sensitive to functions with scales that are within 2 orders of magnitude of the average scale s. We find that choosing this kernel significantly broadens the areas of parameter space where the method succeeds, without hurting the performance.
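A minimal sketch of this multi-scale kernel, reusing the gaussian_kernel helper from above; the defaults follow the values quoted in the text.

```python
import numpy as np

def multiscale_kernel(A, B, s=1.0, w=4.0, n=10):
    # Sum of Gaussian kernels with c_i = 1 and scales sigma_i = s * 10**(w*(i/n) - w/2), i = 0..n
    K = np.zeros((len(A), len(B)))
    for i in range(n + 1):
        sigma_i = s * 10 ** (w * (i / n) - w / 2)
        K += gaussian_kernel(A, B, sigma_i)
    return K
```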
Many have investigated the kernel scale problem, and there are several heuristics available for choosing the scale based on optimizing statistical power or median distances to nearest neighbors (Gretton et al., 2012b). For clarity, we explicitly investigated and set the kernel scale based on a grid search evaluated on a held-out validation set. Figure 2 demonstrates that the method was fairly robust to settings of the average kernel scale on synthetic data and language data.
Globally Corrected (GC) Retrieval
In this analysis, the performance of translation methods is compared on their ability to infer the correct translation on a held-out test set. More specifically, we use the precision at N, which is the fraction of examples where the correct word was in the top N most likely translations of the model. This is a natural choice for translation, as it estimates the probability of translating a word correctly when N = 1.
To generate the list of N most likely translations for a given word, one can use nearest neighbor (NN) retrieval. In this method, one uses the N closest neighbors in the target space of the mapped word vector as the list of best guesses. We find that it is always better to use cosine distance for nearest-neighbor calculations. Finding the first nearest neighbor of a point $\hat{y}$ can be more formally expressed as:

$$NN_1(\hat{y}) = \operatorname*{argmin}_{y \in T} \mathrm{Rank}_T(\hat{y}, y) \tag{10}$$
where $\hat{y}$ is our mapped word vector, T is our target space, and $\mathrm{Rank}_T(\hat{y}, y)$ is a function that returns the rank of y in the sorted list of distances between $\hat{y}$ and the points in T.
If the space of word embeddings is not uniformly distributed, there will be areas where word embeddings bunch together in higher densities. The points towards the center of these bunches act as hub points, and may be the nearest neighbors of many other points. Dinu et al. (2014) have shown that naive NN retrieval results in over-weighting these hub points, as they are more frequently the neighbors of points. They called this the "Hubness Problem" and introduced a corrected form of nearest-neighbor retrieval called the globally corrected neighbor retrieval method (GC). In this method, instead of using distance to select translations as in $NN_1$, one uses:

$$GC_1(\hat{y}) = \operatorname*{argmin}_{y \in T} \left( \mathrm{Rank}_P(y, \hat{y}) - \cos(\hat{y}, y) \right) \tag{11}$$
where P is a random sampling of points from T and $\cos(x, y)$ is the cosine distance between x and y. Instead of returning the nearest neighbor of $\hat{y}$, GC returns the point in T that has $\hat{y}$ ranked the highest; the cosine term breaks ties. GC retrieval has been shown to outperform nearest-neighbor retrieval in all frequency bins when the transformation is a linear mapping (Dinu et al., 2014). Figure 4 shows that it also improves the performance of the semi-supervised translation task.
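A rough sketch of GC retrieval as described above; we treat the cos term as a cosine-similarity tie-breaker so that closer candidates win ties, and the function names are our own.

```python
import numpy as np

def cosine_sim(u, V):
    # Cosine similarity between a vector u and each row of matrix V
    return (V @ u) / (np.linalg.norm(V, axis=1) * np.linalg.norm(u) + 1e-12)

def gc_retrieve(y_hat, T, P, top_n=1):
    # Globally corrected retrieval: for each candidate y in T, rank the query y_hat
    # among the distances from y to the pooled sample P, then break ties with the
    # cosine term so that more similar candidates score lower (better).
    scores = np.empty(len(T))
    sim_to_query = cosine_sim(y_hat, T)
    for j in range(len(T)):
        sims_pool = cosine_sim(T[j], P)                   # similarity of candidate to pool points
        sim_yq = cosine_sim(T[j], y_hat[None, :])[0]      # similarity of candidate to the query
        rank = 1 + np.sum(sims_pool > sim_yq)             # rank of y_hat in y's sorted distance list
        scores[j] = rank - sim_to_query[j]
    return np.argsort(scores)[:top_n]                     # indices of the top_n best translations
```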
Neural Network Implementation
This work implemented the network in Theano (Bergstra et al., 2011), an automatic differentiation library written in Python. The net was trained with RMSProp (Tieleman & Hinton, 2012) on both the unpaired and paired batches with a batch size of 200 for each set. The unregularized pre-initialization was trained for 4000 epochs and the regularized network was trained for 250 epochs, which gave ample time for convergence. Hyperparameter optimization was performed through parallel grid searches on a TORQUE cluster, where each job ran for ∼20 hours. A validation set consisting of a random sample of 10% of the training set was used to choose the parameters for the final reported results.
Data
Synthetic Data
Several synthetic datasets were used to demonstrate the method's ability to accurately learn linear transformations using a very small paired dataset. Furthermore, we used this synthetic data to investigate the effects of the network's hyper-parameters.
Two datasets were created, one with the dimension of the source and target equal to 30 and the other 300, the same dimensionality as the embeddings. The datasets contained 100, 000 points and various sized paired subsets were used to calculate the supervised alignment loss in the experiments.
Source data was generated as a multivariate Gaussian with zero mean and unit variance. A ground truth mapping was generated by sampling the entries of a d × d matrix of independent Gaussians with zero mean and unit variance. The target data was generated by applying the ground truth transformation to the source data and adding Gaussian noise with zero mean and a variance of 0.1.
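A hedged sketch of this synthetic-data generator; the seed and the way the noise variance is passed are our own choices.

```python
import numpy as np

def make_synthetic(d=30, n_points=100_000, noise_var=0.1, seed=0):
    # Source: zero-mean, unit-variance multivariate Gaussian points
    rng = np.random.default_rng(seed)
    source = rng.normal(size=(n_points, d))
    # Ground-truth map: d x d matrix of independent standard Gaussian entries
    W_true = rng.normal(size=(d, d))
    # Target: mapped source plus zero-mean Gaussian noise with variance noise_var
    target = source @ W_true.T + rng.normal(scale=np.sqrt(noise_var), size=(n_points, d))
    return source, target, W_true
```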
Embedding Data
This analysis used 300-dimensional English (EN) and Italian (IT) monolingual word embeddings from Dinu et al. (2014). These embeddings were trained with word2vec's CBOW model on 2.8 billion tokens as input (ukWaC + Wikipedia + BNC) for English and the 1.6 billion itWaC tokens for Italian (Dinu et al., 2014). The embeddings contained the top 200,000 words in each language. Supervised training and testing sets were constructed from a dictionary built from Europarl, available at http://opus.lingfil.uu.se/ (Tiedemann, 2012). Two training sets consisted of the 750 and 5,000 most frequent words from the source language (English) which had translations in the gold dictionary. Five disjoint test sets were created, consisting of roughly 400 translation pairs randomly sampled from the frequency-ranked words in the intervals 0-5k, 5k-20k, 20k-50k, 50k-100k, and 100k-200k.
Results
Synthetic Data
Adding the MMD term to the loss function dramatically improved the ability to learn the transformation on all synthetic datasets. The synthetic data also provided a clean environment to see the effect of varying hyper-parameters. The experiment used a "linear network" which is equivalent to learning a linear transformation between the spaces. In general, if the hyper-parameters are set correctly, the MMD assisted learner can approach the true transformation with significantly less paired data.
Our first investigation aimed to understand the effect and robustness of the kernel scale parameter. As one can see from Figure 2, the performance of the method is robust to a setting of the average kernel scale within ±2 orders of magnitude of the optimal scale. This empirically confirms the intuition behind the width parameter of the multi-scale kernel. As the width parameter decreases, this valley of good performance becomes narrower by the expected amount. A similar pattern arose in the 300-dimensional dataset.
In order to simulate the environment of the embedding experiment, which required a validation set of ∼10% of the data, we also removed ∼10% of our data. The plots in Figure 2 demonstrate that even with the data removed for a validation set, the method still significantly beats linear regression trained on the training and validation set, justifying the use of data for parameter tuning. The models in d = 30 and d = 300 both reach error rates comparable to the ground-truth regressor learned on all 100,000 data points. Figure 3 investigates various settings of $\alpha_{\mathrm{pair}}$ and shows that decreasing $\alpha_{\mathrm{pair}}$ drives the performance down to the ground-truth level. This trend appears in both the low- and high-dimensional data and suggests that the supervised pre-initialization yields a configuration that is within the basin of attraction of the true parameters in the vector field $\nabla \ell_{\mathrm{MMD}}$. Thus, only the unsupervised term is needed, as the supervised initialization has already eliminated the ambiguity of the MMD loss function modes.

Embedding Data

Figure 4 shows that the semi-supervised MMD-Net was able to significantly outperform standard linear regression on paired datasets of 750 and 5000 word-translation pairs in every frequency bin. Furthermore, this dominance over linear regression follows a similar pattern in the precisions @5 and @10. The method also outperformed several other linear and nonlinear methods, as shown in Table 1.

Discussion and Future Work
The addition of the MMD cost function term significantly improves the results of regression in the low-data regime. Furthermore, to the best knowledge of the authors, this method achieves state-of-the-art results on the embeddings of Dinu et al. (2014). The authors also experimented with deeper nets, but did not observe significant performance improvements, an observation consistent with the observations of Mikolov et al. (2013).
Adversarial Distribution Matching
One promising future direction involves replacing the MMD unsupervised term with a Generative Adversarial Network (GAN) (Goodfellow et al., 2014). Like the MMD, the GAN also involves a maximization over a function class of a measure of dissimilarity. Similarly, the GAN loss function can be used for unsupervised learning of probability distributions. However, the GAN is usually optimized directly by stochastic gradient descent, trading the quadratic time dependence on minibatch size with a linear one. In practice however, the maximization over the function class (the discriminator) is usually done in k gradient descent steps for every one step of training the distribution matching net (the generator). Furthermore, the GAN cost function does not have a dependence on kernel scale.
Analogous to the discriminator in the GAN, we can also adversarially learn the MMD. In this setup, the function class takes the form of a parametrized network. Instead of estimating the supremum of the mean discrepancy over a ball in RKHS, we would be finding the supremum through gradient ascent on the network. This would also have the effect of eliminating the quadratic compute and the dependence on kernel scale. This formulation of the MMD would allow for a more direct comparison between the GAN and MMD loss functions, and warrants future investigation. These two loss functions are inequivalent, as the only intersection between f-divergences, like the Jensen-Shannon divergence which is equivalent to the GAN, and integral measures like the MMD is the total variation distance (Mohamed & Lakshminarayanan, 2016). Thus, one might be able to leverage more diverse information by combining the two.
Bi-Directional Networks
In the case of translation between two spaces of equal dimension, the inverse of the translation transformation should also be a translation from the target to the source space. We can capitalize on this observation to further constrain our set of possible translations. This allows the transformation to also draw information from the structure of the source space. More specifically one can minimize:
$$L = \alpha_{\mathrm{target}} \left\| RT - S \right\|^2_{\mathrm{target}} + (1 - \alpha_{\mathrm{target}}) \left\| R - S T^{-1} \right\|^2_{\mathrm{source}} \tag{12}$$
where $T \in GL_d$, $\alpha_{\mathrm{target}} \in [0, 1]$, and $R, S \in \mathbb{R}^{d \times n_{\mathrm{pair}}}$. This would result in twice as much supervisory signal while maintaining the same number of parameters. Furthermore, this can also be applied in conjunction with the GAN loss, and it is compatible with the pre-initialization scheme. In the case of a more complex nonlinear network where an inverse transformation cannot be easily calculated, the architecture could include an encoder network which maps from the source to the target and a decoder network which maps from the target to the source. These two mappings could then be constrained to be close to mutual inverses through a reconstruction loss penalty.
Figure 1. Left: Initial dataset X sampled uniformly from the unit square; colors indicate how points are mapped through the transform. Middle: $Y = X_{255°} + \mathrm{Gaussian}(\mu = 0, \sigma = 0.1)$, where $X_\theta$ denotes a rotation clockwise by $\theta$. Right: Unit-scaled $\mathrm{MMD}^2_u(X_\theta, Y)$ and unit-scaled $\mathrm{MSE}(X_{\theta,1}, Y_1)$ as a function of $\theta$, where $X_1$ denotes the first element of X.
Figure 2. Left: Performance comparison on word embeddings in the 0-5k frequency bin as a function of the average kernel scale s. Middle: Performance comparison on synthetically generated data in $\mathbb{R}^{30}$ as a function of $\alpha_{\mathrm{pair}}$. Right: Performance comparison on synthetically generated data in $\mathbb{R}^{300}$ as a function of $\alpha_{\mathrm{pair}}$.
Figure 3. Performance of methods on synthetically generated data in $\mathbb{R}^{300}$ as a function of $\alpha_{\mathrm{pair}}$, with s = 10.
Figure 4. Model performance as a function of English word frequency bins using the top 5000 (left) and 750 (right) EN-IT word pairs as training data. Precision@1 refers to the fraction of words correctly translated by the method on held-out testing sets.
Table 1. Comparison of Precision@1 across different algorithms and dimensionality reduction schemes. PCA S and PCA T refer to projecting the source and target respectively onto their first 270 principal vectors. KR refers to Kernel Ridge Regression and RBF refers to the radial basis function kernel with heuristically set scale.

Method                0-5k    5k-20k   20k-50k   50k-100k   100k-200k
Linear                0.228   0.052    0.028     0.015      0.011
Linear + PCA S        0.236   0.057    0.031     0.036      0.019
Linear + PCA T        0.207   0.044    0.031     0.028      0.011
Linear + PCA S + T    0.212   0.072    0.033     0.043      0.029
Random Forest         0.008   0.000    0.000     0.000      0.000
KR 2-deg Poly         0.057   0.003    0.008     0.010      0.008
KR 3-deg Poly         0.049   0.005    0.003     0.013      0.008
KR RBF                0.057   0.003    0.010     0.010      0.008
Linear + MMD          0.347   0.129    0.099     0.094      0.035
Al-Rfou, Rami, Perozzi, Bryan, and Skiena, Steven. Polyglot: Distributed word representations for multilingual NLP. arXiv preprint arXiv:1307.1662, 2013.

Bergstra, James, Bastien, Frédéric, Breuleux, Olivier, Lamblin, Pascal, Pascanu, Razvan, Delalleau, Olivier, Desjardins, Guillaume, Warde-Farley, David, Goodfellow, Ian, Bergeron, Arnaud, et al. Theano: Deep learning on GPUs with Python. In NIPS 2011, BigLearning Workshop, Granada, Spain, volume 3. Citeseer, 2011.

Cortes, Corinna, Mohri, Mehryar, Riley, Michael, and Rostamizadeh, Afshin. Sample selection bias correction theory. In International Conference on Algorithmic Learning Theory, pp. 38-53. Springer, 2008.

Dauphin, Yann N., Pascanu, Razvan, Gulcehre, Caglar, Cho, Kyunghyun, Ganguli, Surya, and Bengio, Yoshua. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pp. 2933-2941, 2014.

Dinu, Georgiana, Lazaridou, Angeliki, and Baroni, Marco. Improving zero-shot learning by mitigating the hubness problem. arXiv preprint arXiv:1412.6568, 2014.

Erhan, Dumitru, Bengio, Yoshua, Courville, Aaron, Manzagol, Pierre-Antoine, Vincent, Pascal, and Bengio, Samy. Why does unsupervised pre-training help deep learning? The Journal of Machine Learning Research, 11:625-660, 2010.

Goodfellow, Ian, Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron, and Bengio, Yoshua. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Gretton, Arthur, Borgwardt, Karsten M., Rasch, Malte J., Schölkopf, Bernhard, and Smola, Alexander. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723-773, 2012a.

Gretton, Arthur, Sejdinovic, Dino, Strathmann, Heiko, Balakrishnan, Sivaraman, Pontil, Massimiliano, Fukumizu, Kenji, and Sriperumbudur, Bharath K. Optimal kernel choice for large-scale two-sample tests. In Advances in Neural Information Processing Systems, pp. 1205-1213, 2012b.

Li, Yujia, Swersky, Kevin, and Zemel, Richard. Generative moment matching networks. arXiv preprint arXiv:1502.02761, 2015.

Mikolov, Tomas, Le, Quoc V., and Sutskever, Ilya. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168, 2013.

Mitchell, Tom M., Shinkareva, Svetlana V., Carlson, Andrew, Chang, Kai-Min, Malave, Vicente L., Mason, Robert A., and Just, Marcel Adam. Predicting human brain activity associated with the meanings of nouns. Science, 320(5880):1191-1195, 2008.

Mohamed, Shakir and Lakshminarayanan, Balaji. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.

Tiedemann, Jörg. Parallel data, tools and interfaces in OPUS. 2012.

Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.
| [] |
[
"Topic Modeling on Clinical Social Work Notes for Exploring Social Determinants of Health Factors",
"Topic Modeling on Clinical Social Work Notes for Exploring Social Determinants of Health Factors",
"Topic Modeling on Clinical Social Work Notes for Exploring Social Determinants of Health Factors",
"Topic Modeling on Clinical Social Work Notes for Exploring Social Determinants of Health Factors"
] | [
"Shenghuan Sun \nBakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA\n",
"Travis Zack \nBakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA\n\nDivision of Hematology/Oncology\nDepartment of Medicine\nUCSF\nSan FranciscoCaliforniaUSA\n",
"Madhumita Sushil \nBakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA\n",
"† ",
"Atul J Butte \nBakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA\n\nCenter for Data-driven Insights and Innovation\nOffice of the President\nUniversity of California\nOaklandCAUSA\n\nDepartment of Pediatrics\nUniversity of California\n94158San FranciscoCAUSA\n",
"Shenghuan Sun \nBakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA\n",
"Travis Zack \nBakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA\n\nDivision of Hematology/Oncology\nDepartment of Medicine\nUCSF\nSan FranciscoCaliforniaUSA\n",
"Madhumita Sushil \nBakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA\n",
"† ",
"Atul J Butte \nBakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA\n\nCenter for Data-driven Insights and Innovation\nOffice of the President\nUniversity of California\nOaklandCAUSA\n\nDepartment of Pediatrics\nUniversity of California\n94158San FranciscoCAUSA\n"
] | [
"Bakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA",
"Bakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA",
"Division of Hematology/Oncology\nDepartment of Medicine\nUCSF\nSan FranciscoCaliforniaUSA",
"Bakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA",
"Bakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA",
"Center for Data-driven Insights and Innovation\nOffice of the President\nUniversity of California\nOaklandCAUSA",
"Department of Pediatrics\nUniversity of California\n94158San FranciscoCAUSA",
"Bakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA",
"Bakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA",
"Division of Hematology/Oncology\nDepartment of Medicine\nUCSF\nSan FranciscoCaliforniaUSA",
"Bakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA",
"Bakar Computational Health Sciences Institute\nUniversity of California\nSan FranciscoSan FranciscoCAUSA",
"Center for Data-driven Insights and Innovation\nOffice of the President\nUniversity of California\nOaklandCAUSA",
"Department of Pediatrics\nUniversity of California\n94158San FranciscoCAUSA"
] | [] | CURRENT WORD COUNT: 248 MAIN TEXT WORD COUNT: 3872 words excluding the structured abstract, tables, figures, references, and acknowledgments.ABSTRACT OBJECTIVEMost research studying social determinants of health (SDoH) has focused on physician notes or structured elements of the Electronic medical record (EMR). We hypothesize that clinical notes from social workers, whose role is to ameliorate social and economic factors, might provide a richer source of data on SDoH. We sought to perform topic modeling to identify robust topics of discussion within a large cohort of social work notes.MATERIALS AND METHODSWe retrieved a diverse, deidentified corpus of 0.95 million clinical social work notes from 181,644 patients at the University of California, San Francisco. We used word frequency analysis and Latent Dirichlet Allocation (LDA) topic modeling analysis to characterize this corpus and identify potential topics of discussion.RESULTSWord frequency analysis identified both medical and non-medical terms associated with specific ICD10 chapters. The LDA topic modeling analysis extracted 11 topics related to social determinants of health risk factors including financial status, abuse history, social support, risk of death, mental health. In addition, the topic modeling approach captured the variation between different types of social work notes and across patients with different types of diseases or conditions.DISCUSSIONWe demonstrated topic modeling as a powerful tool to extract latent topics from clinical notes, serving as an ideal data exploration approach.CONCLUSIONSocial work notes contain rich, unique, and otherwise unobtainable information on an individual's SDoH. These notes contain robust and coherent topics of discussion that can be identified and utilized to evaluate impact SDoH factors have on patient/public-health. | 10.48550/arxiv.2212.01462 | [
"https://export.arxiv.org/pdf/2212.01462v1.pdf"
] | 254,246,886 | 2212.01462 | 7efdca5b55356bd150cee6683164e2954a00c2cf |
Topic Modeling on Clinical Social Work Notes for Exploring Social Determinants of Health Factors
Shenghuan Sun
Bakar Computational Health Sciences Institute
University of California
San FranciscoSan FranciscoCAUSA
Travis Zack
Bakar Computational Health Sciences Institute
University of California
San FranciscoSan FranciscoCAUSA
Division of Hematology/Oncology
Department of Medicine
UCSF
San FranciscoCaliforniaUSA
Madhumita Sushil
Bakar Computational Health Sciences Institute
University of California
San FranciscoSan FranciscoCAUSA
†
Atul J Butte
Bakar Computational Health Sciences Institute
University of California
San FranciscoSan FranciscoCAUSA
Center for Data-driven Insights and Innovation
Office of the President
University of California
OaklandCAUSA
Department of Pediatrics
University of California
94158San FranciscoCAUSA
Topic Modeling on Clinical Social Work Notes for Exploring Social Determinants of Health Factors
*Author to whom correspondence should be addressed. † Equal ContributionNatural language processingTopic modelingElectronic Health RecordsSocial Work NotesSocial determinants of health WORD COUNTS
ABSTRACT WORD COUNT: 248. MAIN TEXT WORD COUNT: 3872 words excluding the structured abstract, tables, figures, references, and acknowledgments.

ABSTRACT

OBJECTIVE: Most research studying social determinants of health (SDoH) has focused on physician notes or structured elements of the electronic medical record (EMR). We hypothesize that clinical notes from social workers, whose role is to ameliorate social and economic factors, might provide a richer source of data on SDoH. We sought to perform topic modeling to identify robust topics of discussion within a large cohort of social work notes.

MATERIALS AND METHODS: We retrieved a diverse, deidentified corpus of 0.95 million clinical social work notes from 181,644 patients at the University of California, San Francisco. We used word frequency analysis and Latent Dirichlet Allocation (LDA) topic modeling analysis to characterize this corpus and identify potential topics of discussion.

RESULTS: Word frequency analysis identified both medical and non-medical terms associated with specific ICD-10 chapters. The LDA topic modeling analysis extracted 11 topics related to social determinants of health risk factors, including financial status, abuse history, social support, risk of death, and mental health. In addition, the topic modeling approach captured the variation between different types of social work notes and across patients with different types of diseases or conditions.

DISCUSSION: We demonstrated topic modeling as a powerful tool to extract latent topics from clinical notes, serving as an ideal data exploration approach.

CONCLUSION: Social work notes contain rich, unique, and otherwise unobtainable information on an individual's SDoH. These notes contain robust and coherent topics of discussion that can be identified and utilized to evaluate the impact SDoH factors have on patient and public health.
OBJECTIVE
Social determinants of health (SDoH), non-medical conditions that influence health, are a significant contributor to health disparities due to systemic disadvantages and bias [1,2]. A better understanding of the role of SDoH in diabetes, heart disease, and other conditions has led to increased attention for the medical system to address these factors in the context of treating these conditions [3][4][5][6][7]. However, our capacity to research these correlations is still quite constrained. Most assessments of SDoH are not present in structured data that is easily accessible to researchers [8]. Instead, much of this information is collected in unstructured clinical social work notes.
Social work notes, compared to standardized structured electronic medical records data, are more challenging to analyze given their unstructured format. Inability to easily extract this information limits research into the effects of SDoH on care delivery and success. Topic modeling methods based on Latent Dirichlet Allocation (LDA) are among the most popular approaches and have been shown to be able to find hidden structures (topics) in large corpora in an unsupervised manner [9,10].
To understand the information embedded in social work notes, we applied LDA topic modeling to characterize the specific SDoH factors covered across nearly one million clinical social work notes. To our knowledge, this is one of the largest collections of social work notes analyzed to date, and these notes spanned a diverse set of patient demographics and diseases. This allowed us to develop a comprehensive understanding of the underlying topics from different types of notes or for a variety of disease chapters. Understanding the limitations of topic modeling, including the fixed number of clusters, intrinsic randomness, and the need for human-based interpretation, we used several evaluation approaches to minimize these potential biases.
BACKGROUND AND SIGNIFICANCE
Computational understanding of the free text in clinical notes is well known to be an open challenge, including the extraction of structured information from these documents [11]. Some progress has been made in extracting SDoH factors from clinical text using named entity recognition (NER), an NLP method of extracting pre-defined concepts from text [12,13]. Both machine learning-based and traditional rule-based NER have been developed and tested [13][14][15]. While NER approaches have been shown to be effective, they can be time-consuming and present challenges with regard to bias. For rule-based models, manual vocabulary tuning requires a significant amount of human effort and is prone to a developer's bias on language relevance and meaning. Machine learning models similarly require manual annotation and additionally need sufficient data on which to train models. Both are often tedious, error-prone, and potentially biased by human experiences and research scope. Moreover, the supervised nature of these approaches endangers them to propagating the biases researchers may hold regarding topic or term importance. In order to minimize the biases and limits of NER tasks in clinical research, rigorous and detailed data exploration is critical and strongly recommended before the use of these manual efforts [16].
Topic modeling methods are a set of powerful techniques that have been widely applied towards unbiased topic discovery from unorganized documents [17][18][19] and have been used in the fields of social science [20,21], environmental science [21], political science [22], and even in biological and medical contexts. However, to our knowledge, LDA topic modeling has not been heavily used to assess corpora of social work notes for SDoH factors, likely due to the general unavailability of large enough corpora.
Clinical social workers are licensed professionals that specialize in identifying and removing social and environmental barriers an individual patient is experiencing. In particular, clinical notes generated by clinical social workers are an invaluable data resource for understanding SDoH information in patients. As such, the clinical notes written by social workers, often include specific text capturing an individual's SDoH. Yet, to date, social work notes have been a relatively under-utilized data source and have not been extensively investigated for understanding SDoH [23].
To our knowledge, this is the first topic modeling study across clinical social work notes. This work demonstrates how unsupervised topic modeling approaches can elucidate and categorize the information relating to SDoH within unstructured clinical notes. It further highlights the rich source of SDoH information clinical social worker notes represent and develops methods which can make further research within the field of SDoH more tractable.
MATERIALS AND METHODS
Study design and cohort selection
This study uses the deidentified clinical notes at UCSF recorded between 2012 and 2021 [24]. The study was approved by the Institutional Review Board (IRB) of the University of California, San Francisco (UCSF; IRB #18-25163). We collected clinical notes whose metadata contain the term "social" (case insensitive) in either the encounter type, encounter department name, encounter department specialty, or authentication provider type and collectively refer to these documents as 'social work notes'. In this manner, we obtained a subset of 2.5 million social work notes from a corpus of 106 million notes. Notes shorter than 30 characters were removed because they are likely to contain null values or not be informative. We also removed the duplicated notes to reduce the redundancy and computation time. Finally, 0.95 million notes were retained for the downstream topic modeling analysis (Figure 1).
Word frequency calculation
To investigate the disease-specific features in the social work notes, we first used ICD-10 codes to extract 10 ICD-10 chapters: (1) Diseases of the nervous system (G00-G99), (2) Diseases of the circulatory system (I00-I99), (3) Diseases of the respiratory system (J00-J99), (4) Diseases of the digestive system (K00-K95), (5) Diseases of the musculoskeletal system and connective tissue (M00-M99), (6) Diseases of the genitourinary system (N00-N99), (7) Pregnancy, childbirth and the puerperium (O00-O9A), (8) Congenital malformations, deformations and chromosomal abnormalities (Q00-Q99), (9) Neoplasms (C00-D49), and (10) Diseases of the blood and blood-forming organs and certain disorders involving the immune mechanism (D50-D89). The Python package scikit-learn was used to conduct the analysis [25]. To embed and tokenize the unstructured notes, we used the CountVectorizer function from the sklearn.feature_extraction.text module. We computed the chi-squared statistics with the chi2 function from sklearn.feature_selection. We compared each note category against all the other notes. After ranking the P values and removing stop words, the top five potentially meaningful words were visualized by the word frequency calculation.
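A minimal sketch of this word-frequency analysis with scikit-learn; the variable names are placeholders for the real notes and chapter labels, and stop words are removed up front here rather than after ranking.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

def top_chapter_words(notes, is_chapter, k=5):
    # notes: list of note strings; is_chapter: 1 if a note's encounter maps to the
    # ICD-10 chapter of interest, 0 otherwise (both are placeholders for real data).
    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(notes)
    scores, pvals = chi2(X, is_chapter)          # chi-squared statistic and p-value per word
    ranked = np.argsort(pvals)                   # smallest p-values first
    vocab = np.array(vectorizer.get_feature_names_out())
    return vocab[ranked[:k]]                     # top-k chapter-associated words
```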
Latent Dirichlet Allocation (LDA) analysis and topic models
Latent Dirichlet Allocation (LDA) is a generative probabilistic model [19]. It assumes that each document is a combination of a few different topics, and that each word's presence can be attributed to particular topics in the document. The result is a list of clusters, each of which contains a collection of distinct words. The combination of words in a cluster can be used for topic model interpretation. LDA topic modeling can be considered as a clustering algorithm since it takes a collection of documents as vectors of word counts and clusters the data points into a predefined number of cluster centers, which corresponds to the topics. Python package gensim was used for the implementation [26]. We used gensim.models.ldamodel.LdaModel for the actual analysis. The core estimation code is based on Hoffman et al [27].
Text preprocessing was performed before running the topic modeling algorithms, for which the Python package nltk was used. First, the special characters including '\t', '\n', and '\s' were excluded. The common stop words were also excluded, using stopwords.words('english') from the nltk package [28].
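A hedged sketch of this preprocessing and LDA step with gensim and nltk; the number of topics and passes shown here are placeholders (the final model used 17 topics, as reported in the Results).

```python
import re
from gensim import corpora
from gensim.models.ldamodel import LdaModel
from nltk.corpus import stopwords  # assumes nltk.download('stopwords') has been run

stop_words = set(stopwords.words("english"))

def preprocess(note):
    # Collapse special whitespace characters, lowercase, and drop common stop words
    tokens = re.sub(r"[\t\n\s]+", " ", note.lower()).split()
    return [t for t in tokens if t not in stop_words]

def fit_lda(notes, num_topics=17, passes=5):
    docs = [preprocess(n) for n in notes]
    dictionary = corpora.Dictionary(docs)
    bow = [dictionary.doc2bow(d) for d in docs]
    return LdaModel(corpus=bow, id2word=dictionary, num_topics=num_topics, passes=passes)
```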
Determine the cluster number
One of the most important parameters that LDA analysis needs is the number of topics K. Generally, if K is chosen to be too small, the model will lack the capacity to provide a holistic summary of complex document collections, and the returned topical vectors may combine semantically unrelated words/tokens [29]. Conversely, if K is chosen to be too large, the returned topical vectors may be redundant, and a parsimonious explanation of a complex phenomenon may not be achieved. Instead of using a human-in-the-loop method, we used two evaluation metrics to systematically determine the optimal cluster number.
The two metrics that we used are Topic Coherence [30] and Topic Similarity [31]: (1) Topic Coherence measures the score of a single topic by measuring the degree of semantic similarity between high-scoring words in the topic. These measurements help distinguish between topics that are semantically interpretable and topics that are artifacts of statistical inference. For the coherence metric that we used, the measure is based on a sliding window, a one-set segmentation of the top words, and an indirect confirmation measure that uses normalized pointwise mutual information (NPMI) and the cosine similarity [32]. (2) Topic Similarity measures how similar two clusters are, considering the words that the topics contain; the lower the values are, the less redundant the topic distribution is. For that, we use the Jaccard similarity [31]. An ideal solution would have a high topic coherence and a low similarity metric. To decide the optimal number of clusters, for each analysis we ran the LDA analysis with the number of clusters K from 10 to 50, and the coherence C and similarity S were simultaneously calculated. A number of clusters is chosen if its C value is the i-th highest, its S value is the j-th smallest, and i + j is the minimum among all runs (S. Figure 1A).
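A sketch of this selection procedure, reusing the preprocessing from the previous snippet; the c_npmi coherence setting and the use of the top 20 words per topic are assumptions consistent with the NPMI-based measure described above.

```python
from gensim.models import CoherenceModel
from gensim.models.ldamodel import LdaModel

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def evaluate_k(docs, dictionary, bow, k_values, topn=20):
    # For each candidate K, record (coherence, mean pairwise topic similarity)
    results = {}
    for k in k_values:
        lda = LdaModel(corpus=bow, id2word=dictionary, num_topics=k)
        coherence = CoherenceModel(model=lda, texts=docs, dictionary=dictionary,
                                   coherence="c_npmi").get_coherence()
        topics = [[w for w, _ in lda.show_topic(t, topn=topn)] for t in range(k)]
        sims = [jaccard(topics[i], topics[j])
                for i in range(k) for j in range(i + 1, k)]
        results[k] = (coherence, sum(sims) / len(sims))
    return results

def pick_k(results):
    # Rank K by coherence (higher is better) and similarity (lower is better),
    # then choose the K with the smallest summed rank, as described above.
    ks = list(results)
    by_c = sorted(ks, key=lambda k: -results[k][0])
    by_s = sorted(ks, key=lambda k: results[k][1])
    return min(ks, key=lambda k: by_c.index(k) + by_s.index(k))
```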
Topic labeling heuristics
Our analysis screened nearly 20 topic clusters for all 14 categories of notes over 5 independent runs, resulting in more than 1000 topic clusters that required further labeling. To avoid human variation and save time, we developed a heuristic to automatically assign topic labels while minimizing human effort. The details are shown below:
We first constructed the dictionary of topic names and the corresponding words by looking through the topic modeling results for one run on the complete corpus of 0.95 million social work notes at UCSF. Then we expanded the individual topic clusters by first finding the 20 most similar words to the words comprising each topic cluster, based on the cosine similarity of their word embeddings [33]. Any generic words included in this way were removed manually. The final dictionary is shown in S. Table 3. We thereby automatically assign the topic name for individual word clusters by computing the ratio of the intersection versus the union of words in a cluster. In this manner, we were able to find topics for all of the 1000+ topic clusters the analysis produced. The details are explained in the pseudo-code below:
Heuristic for automatically assigning topic names to individual topic clusters
RESULTS
We retrieved a total of 0.95 million de-identified clinical social work notes generated between 2012 and 2021 (see Methods) from our UCSF Information Commons [24] (Figure 1). The majority of notes were classified as Progress Notes, Interdisciplinary Notes, or Telephone Encounter Notes; other note categories, such as Patient Instructions, Group Note, and Letter, each comprised fewer than 5 percent. These notes cover 181,644 patients, of whom 95,387 (52.5%) were female. The median age of these patients was 33 years. Among them, 69,211 patients had only one note, 65,100 patients had between 2 and 5 notes, and 47,333 patients had more than 5 notes (S. Table 1, Figure 2B). No demographic feature was statistically associated with the number of notes per patient (S. Table 1).
We next studied which medical conditions were attributed to the patients receiving social work notes. We collected the ICD-10 codes for the encounters during which social work notes were recorded, and these codes were then mapped to the chapter level [34]. The three most frequent ICD-10 chapters associated with a social work note were "Mental, Behavioral and Neurodevelopmental disorders", "Factors influencing health status and contact with health services", and "Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified" (S. Table 2).
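For readers unfamiliar with the chapter roll-up, ICD-10 codes can be grouped into chapters by code range; the simplified sketch below lists only a few ICD-10-CM chapter ranges relevant to this study and is an illustration rather than the exact mapping resource used in the paper:

# Illustrative ICD-10 chapter roll-up by code prefix (a few chapters only; assumed ranges)
ICD10_CHAPTERS = {
    ("A00", "B99"): "Certain infectious and parasitic diseases",
    ("C00", "D49"): "Neoplasms",
    ("F01", "F99"): "Mental, Behavioral and Neurodevelopmental disorders",
    ("M00", "M99"): "Diseases of the musculoskeletal system and connective tissue",
    ("O00", "O9A"): "Pregnancy, childbirth and the puerperium",
    ("R00", "R99"): "Symptoms, signs and abnormal clinical and laboratory findings, not elsewhere classified",
    ("Z00", "Z99"): "Factors influencing health status and contact with health services",
}

def icd10_chapter(code):
    prefix = code[:3].upper()
    for (lo, hi), chapter in ICD10_CHAPTERS.items():
        if lo <= prefix <= hi:           # simple lexicographic range check
            return chapter
    return "Unknown"                     # chapters omitted from this sketch fall here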
Word frequency on individual disease
To uncover disease-specific information in social work notes, we computed chi-squared statistics of each word feature for each disease chapter versus all other chapters (see Methods). After removing frequent words, we calculated the word frequency of the five words with the lowest p-values in the social work notes associated with each ICD-10 chapter (Figure 3).
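A sketch of this one-vs-rest enrichment test with scikit-learn is shown below; notes and chapter_labels are illustrative variable names, and the frequent-word filtering is represented here only by the stop-word and min_df settings:

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

vectorizer = CountVectorizer(stop_words="english", min_df=5)
X = vectorizer.fit_transform(notes)                      # note x word count matrix
terms = np.array(vectorizer.get_feature_names_out())

def top_terms(chapter, n=5):
    y = (np.array(chapter_labels) == chapter).astype(int)   # this chapter vs. all others
    _, pvals = chi2(X, y)
    order = np.argsort(pvals)                                # lowest p-values first
    return list(zip(terms[order][:n], pvals[order][:n]))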
We found that the social work notes associated with each ICD-10 chapter contain disease-specific terms and even some disease-specific social determinants of health topics. For instance, notes from patients with neoplasms show a significant enrichment for cancer-related terms such as "oncology", "chemotherapy", and "tumor". Similarly, notes from patients with musculoskeletal disorders contained disease-related words such as "arthritis" and "rheumatology". However, there were also general and disease-specific patterns of enrichment in SDoH-related words. For example, "planning" and "mother" were present across many disease categories, whereas "mindfulness" was enriched in the Pregnancy and Nervous system chapters and "wheelchair" was enriched in Musculoskeletal disorders. Notably, pregnancy-related conditions showed a very significant enrichment for potentially relevant SDoH topics. Some of these notes are clearly related to mental health, suggesting that mental health may be frequently assessed in social work notes during pregnancy care.
Using LDA to extract topics in social work notes
While word frequency calculations can provide a window on term relevance, this view is too limited to understand what broader topics may be contained within these notes. In contrast, topic modeling is a field of unsupervised learning that learns statistical associations between words or groups of words to identify "topics": clusters of words that tend to co-occur within the same document.
We implemented Latent Dirichlet Allocation (LDA), a generative statistical model that allows topic discovery and semantic mining from unordered documents (see Methods) [19]. A required input to the LDA model is the desired number of clusters into which to partition the data. This requirement to predefine the number of data partitions could introduce bias, so we used a combination of statistical evaluations of the model results to find the ideal partition number (see Methods). This approach resulted in 17 topic clusters (see S. Figure 1, Methods).
Looking at the word components of each topic (Table 1), we discovered a diverse set of clusters covering many different social aspects of patients, including social services (Topic 11), abuse history (Topic 14), phone call/online communications (Topic 12), living conditions/lifestyle (Topic 16), risk of death (Topic 8), group sessions (Topic 7), consultations/appointments (Topic 5), family (Topics 4, 6), and mental health (Topic 1). Many of these topics are consistent with social determinants of health; most importantly, most of the information potentially conveyed through these topics is absent from the structured data. Of note, in our parameter exploration we found that increasing the number of clusters can surface additional recognizable topics, such as food availability (data not shown), although it also yields redundant topics. For the remainder of our work, we continued with the computationally determined number of topic clusters.
Topic modeling on specific note categories
After modeling these topics across all the social work notes, we observed that certain topics appear more often than others (Table 1). To explore whether this was due to note subtypes of varying size, we compared the identified topics across the four largest categories of social work notes: Progress Notes, Interdisciplinary, Telephone Encounter, and Group Notes (Figure 2A, see Methods). We focused on the major categories because 1) the rare categories (less than 5 percent of notes) were less likely to influence the topic modeling results, and 2) every hospital system is likely to have notes within these categories, making the analysis more informative and generalizable. We reused the same pipeline for identifying the optimal number of clusters (see Methods). Due to the intrinsic randomness of the LDA method, we ran each analysis five times per category and then developed a heuristic for labeling the resulting topic clusters based on our previous findings for all the notes (see Methods).
We found that social work notes belonging to the Progress Notes category, compared with those belonging to all other categories, were composed of clinically related topics (e.g., Clinician/Hospital/Medication; Mental health), along with a smaller proportion of SDoH-related topics (e.g., Insurance/Income, Abuse history, Social support, Family). This is consistent with the fact that progress notes are routinely collected medical records in which healthcare professionals document and update a patient's clinical status. In contrast, telephone encounter notes are usually used to address patient issues outside of an appointment. Accordingly, telephone encounter notes contain more information about Insurance/Income, Phone call/Online, Social support, and Family. Interestingly, telephone encounter notes lack information about Risk of death (Figure 4A). While telephone encounters may serve to discuss family and social support, it is possible that, given the severity of this topic, discussions of risk of death are considered inappropriate for telephone encounters and are saved for formal in-person visits. In addition, group notes, which are taken during group therapy, describe the group's progress and dynamics. As expected, group notes have a more uneven topic category distribution (Figure 4A).
We also applied the LDA analysis to the social work notes associated with the 10 ICD-10 chapters used earlier (Figure 4B). We observed that individual diseases have a similar topic proportion distribution, indicating that social workers follow a similar procedure when communicating with patients with different diseases. The majority of the clusters are focused on social support and family. This is similar to the topic distribution of Interdisciplinary notes, which makes intuitive sense given that the social work notes for each ICD-10 chapter contain a mixture of the different note subtypes mentioned above. However, there are indeed some differences: compared with other disease chapters, notes associated with mental health disorders and pregnancy contain more abundant SDoH topics on mental health, as would be expected. Mental health topics are mentioned even more often in clinical notes around pregnancy than in notes on nervous system disorders. Interestingly, the family topic was often mentioned in notes associated with congenital malformations. In summary, this analysis demonstrates both the commonality and the uniqueness of topics around social determinants of health across the various diseases and conditions that afflict patients.
DISCUSSION
We applied a comprehensive unsupervised topic modeling method, latent Dirichlet allocation (LDA), to our corpus of 0.95 million de-identified clinical social work notes. We showed that topic modeling can (1) extract the hidden themes from this huge corpus of clinical notes and identify the critical information embedded in them, namely social determinants of health (SDoH) factors; and (2) calculate the proportion of each theme across the note corpus and systematically characterize notes of different types.
Using simple term frequency methods on this large corpus, we found that specific SDoH terms tend to be enriched in notes from patients in different disease categories, including "wheelchair" for patients with musculoskeletal disorders and "depression" for patients with pregnancy diagnoses, suggesting that these populations may be more at risk for these SDoH features.
We performed LDA modeling on this large corpus, which allowed us to extract topic categories from the notes. Through this method, we identified several SDoH-related topics that are intuitive, provide insight into the information that may be extracted from these corpora, and can be leveraged in future work to understand how these topics correlate with health outcomes. In our comparison of notes of different subtypes, we found that the distribution of SDoH topics varies across note categories. Interestingly, the topic distributions of notes for specific types of diseases contain similar information but show different levels of enrichment that represent the unique features of each disease set. As one of many examples, our analysis shows how mental health issues are frequently documented around pregnancy (Figure 4B). This type of information can help us better understand the social determinants of most concern to patients when they interact with the health system.
The specific topics that we identified are consistent with a few previous publications. A recent study applied a non-negative matrix factorization (NMF) topic modeling method to 382,666 primary care clinical notes and was also able to extract information regarding physical, mental, and social health. However, that study analyzed clinical notes generated solely by physicians, whereas we focused specifically on social work notes, which allowed a more comprehensive list of SDoH topics to be identified. Our work is a step closer towards understanding the SDoH-related information embedded in clinical social work notes, and we believe that, compared with supervised methods, unsupervised approaches may be better at identifying coherent yet comprehensive topics.
Our study has several strengths. We implemented a comprehensive topic modeling analysis on a huge corpus of notes that, to our knowledge, is the largest social work notes dataset. Instead of focusing on a single disease category or specific medical topic, we aimed to comprehensively identify the potential SDoH topics in all types of clinical social work notes across a variety of diseases. Moreover, in order to provide an initial, comprehensive landscape of the information embedded in social work notes, we performed both a word frequency enrichment analysis, which identified specific terms that appear more frequently in conjunction with specific ICD-10 chapters, and unsupervised topic modeling, which identified broad topics of increased relevance in these disease groups. Additionally, we applied a rigorous hyperparameter search to identify the most parsimonious LDA solution, cognizant that the requirement of predefining the topic number in LDA may lead to imperfect topic selection for a given corpus.
There are limitations to our work. As mentioned above, using word frequencies for inference assumes that each word has one and only one meaning. We also limited the word frequency analysis to social work notes, without analyzing how these word frequencies vary across disease categories in other note types. For the topic modeling analyses, ensuring topic coherence required assigning topic labels to around 20 clusters for more than 10 note cohorts, with 5 iterations each, which made manual interpretation time-consuming and error-prone. We therefore developed topic labeling heuristics that allow us to assign topics to the individual clusters (see Methods). We verified that this heuristic performs as well as manual topic assignment; however, it can still be improved. In the future, it may be interesting to revisit our heuristic and expand the topic clusters further to make them more generalizable. Still, we believe this approach is an improvement over previous work in the field, where human interpretation was the only method.
CONCLUSION
Social work notes contain rich and unique information about social determinants of health factors. Without using notes, it would be impossible to consider many of these factors in analyzing health outcomes. Latent Dirichlet allocation topic modeling of social work notes is a scalable, unsupervised approach for characterizing the impact of social determinants of health on the healthcare system and community public health. Although human inference and interpretation are necessary for topic modeling, several computation-based evaluation approaches were implemented to help discover robust results. In addition, the findings indicate that different categories of notes emphasize different aspects of social determinants of health. Understanding this information will help guide future work using clinical notes to study social determinants of health.
Begin:
    Construct the dictionary of topic names and the words comprising each topic;
    Expand the individual topic cluster space;
    > Enrichment[k|h] denotes the frequency of words belonging to topic h for word cluster k
    > ∩ means intersection, ∪ means union
    For each iteration of the topic modeling results do:
        For every word cluster k do:
            For each topic name h and its corresponding word set in the dictionary do:
                overlap = word cluster k ∩ topic h
                total = word cluster k ∪ topic h
                Enrichment[k|h] = overlap / total
            The topic assigned to word cluster k = the h with the maximum Enrichment[k|h] over all h in H
End

Code for the paper is available at https://github.com/ShenghuanSun/LDA_TM
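For concreteness, the same heuristic can be written in Python roughly as follows (a sketch; topic_dictionary is the expanded dictionary of S. Table 3 and word_clusters maps each LDA cluster id to its top words; both names are illustrative):

def assign_topic_names(word_clusters, topic_dictionary):
    assignments = {}
    for cluster_id, cluster_words in word_clusters.items():
        cluster_words = set(cluster_words)
        enrichment = {}
        for topic_name, topic_words in topic_dictionary.items():
            overlap = cluster_words & set(topic_words)      # intersection
            total = cluster_words | set(topic_words)        # union
            enrichment[topic_name] = len(overlap) / len(total)
        assignments[cluster_id] = max(enrichment, key=enrichment.get)
    return assignments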
Table 1. Topic modeling results for all social work notes. Each row is an inferred topic, which is composed of 10 words.

Cluster | Key words
1 | goal, anxiety, problem, term, depression, mood, therapy, symptom, long, treatment
2 | recommendation, wife, education, treatment, patient, form, appearance, ongoing, advocate, trauma
3 | hospital, self, day, pain, other, connection, recent, feeling, side, number
4 | mother, father, family, room, information, nurse, source, concrete, control, instruction
5 | session, consultation, telehealth, location, time, tool, objective, parking, other, treatment
6 | parent, family, school, child, sister, support, place, year, well, initial
7 | group, intervention, patient, discussion, response, time, summary, progress, participant, skill
8 | risk, chronic, thought, normal, imminent, status, testing, intervention, speech, suicide
9 | client, health, service, caregiver, mental, therapist, therapy, behavioral, individual, group
10 | well, when, time, week, also, able, state, more, friend, very
11 | social, service, support, family, assessment, medical, time, note, concern, ongoing
12 | care, home, plan, phone, contact, work, information, resource, call, support
13 | time, clinician, name, date, code, behavior, risk, number, plan, provider
14 | history, child, other, factor, current, none, substance, abuse, psychiatric, year
15 | donor, donation, potential, employment, understanding, risk, decision, independent, process, care
16 | night, morning, hour, sleep, house, already, less, past, aggressive, evening
17 | transplant, medication, post, support, health, insurance, husband, psychosocial, message, history
ACKNOWLEDGEMENT
We thank all researchers, clinicians, and social workers who helped collect the clinical notes data in our UCSF Information Commons. We thank everyone in Dr. Atul J. Butte's lab for the helpful discussion. We thank everyone who has been helping to construct and organize the UCSF Information Commons. We thank the Wynton high-performance computing (HPC) cluster for the computation capacity support.

Figure 1 (overview of the analysis pipeline; flow labels: "Deidentified patients", "Clinical notes", "Retrieve the social work notes"). The social work notes from the UCSF Information Commons between 2012 and 2021 were initially retrieved. Notes that were duplicated or extremely short were excluded, which resulted in a corpus of 0.95 million notes. The notes were then analyzed using two methods: word frequency calculation (bottom left) and topic modeling (bottom right). The word frequency was compared between different disease chapters. For topic modeling, Latent Dirichlet Allocation was used to identify the topics in individual social work notes. The topic coherence metric and Jaccard distance were used to decide the optimal clustering results.

S. Table 3: Topic assignment heuristic. The words in the Keywords column are the representative words used to define the topics.
Social determinants of health inequalities. M Marmot, The lancet. 365Marmot M. Social determinants of health inequalities. The lancet 2005;365:1099-104.
Social determinants of health. WHO Regional Office for South-East Asia. W H Organization, Organization WH. Social determinants of health. WHO Regional Office for South-East Asia 2008.
Social determinants of health and diabetes: a scientific review. F Hill-Briggs, N E Adler, S A Berkowitz, Diabetes care. 44Hill-Briggs F, Adler NE, Berkowitz SA, et al. Social determinants of health and diabetes: a scientific review. Diabetes care 2021;44:258-79.
The role of social determinants of health in the risk and prevention of group A streptococcal infection, acute rheumatic fever and rheumatic heart disease: a systematic review. P M Coffey, A P Ralph, V L Krause, PLoS neglected tropical diseases. 126577Coffey PM, Ralph AP, Krause VL. The role of social determinants of health in the risk and prevention of group A streptococcal infection, acute rheumatic fever and rheumatic heart disease: a systematic review. PLoS neglected tropical diseases 2018;12:e0006577.
Addressing social determinants of health in the care of patients with heart failure: a scientific statement from the American Heart Association. C White-Williams, L P Rossi, V A Bittner, Circulation. 141White-Williams C, Rossi LP, Bittner VA, et al. Addressing social determinants of health in the care of patients with heart failure: a scientific statement from the American Heart Association. Circulation 2020;141:e841-63.
Socioeconomic status. The relationship with health and autoimmune diseases. O-J Calixto, J-M Anaya, Autoimmunity reviews. 13Calixto O-J, Anaya J-M. Socioeconomic status. The relationship with health and autoimmune diseases. Autoimmunity reviews 2014;13:641-54.
Impact of social determinants of health on the emerging COVID-19 pandemic in the United States. S Singu, A Acharya, K Challagundla, Frontiers in public health. 406Singu S, Acharya A, Challagundla K, et al. Impact of social determinants of health on the emerging COVID-19 pandemic in the United States. Frontiers in public health 2020;:406.
Extracting social determinants of health from electronic health records using natural language processing: a systematic review. B G Patra, M M Sharma, V Vekaria, 10.1093/jamia/ocab170J Am Med Inform Assoc. 28Patra BG, Sharma MM, Vekaria V, et al. Extracting social determinants of health from electronic health records using natural language processing: a systematic review. J Am Med Inform Assoc 2021;28:2716-27. doi:10.1093/jamia/ocab170
LDA-Based Unified Topic Modeling for Similar TV User Grouping and TV Program Recommendation. S Pyo, E Kim, M Kim, 10.1109/TCYB.2014.2353577IEEE Trans Cybern. 45Pyo S, Kim E, Kim M. LDA-Based Unified Topic Modeling for Similar TV User Grouping and TV Program Recommendation. IEEE Trans Cybern 2015;45:1476-90. doi:10.1109/TCYB.2014.2353577
Topic Modeling of Social Networking Service Data on Occupational Accidents in Korea: Latent Dirichlet Allocation Analysis. K-B Min, S-H Song, Min J-Y , 10.2196/19222J Med Internet Res. 2219222Min K-B, Song S-H, Min J-Y. Topic Modeling of Social Networking Service Data on Occupational Accidents in Korea: Latent Dirichlet Allocation Analysis. J Med Internet Res 2020;22:e19222. doi:10.2196/19222
Data from clinical notes: a perspective on the tension between structure and flexible documentation. S T Rosenbloom, J C Denny, H Xu, Journal of the American Medical Informatics Association. 18Rosenbloom ST, Denny JC, Xu H, et al. Data from clinical notes: a perspective on the tension between structure and flexible documentation. Journal of the American Medical Informatics Association 2011;18:181-6.
Moonstone: a novel natural language processing system for inferring social risk from clinical narratives. M Conway, S Keyhani, L Christensen, Journal of biomedical semantics. 10Conway M, Keyhani S, Christensen L, et al. Moonstone: a novel natural language processing system for inferring social risk from clinical narratives. Journal of biomedical semantics 2019;10:1-10.
Automatic Extraction of Social Determinants of Health from Medical Notes of Chronic Lower Back Pain Patients. D Lituiev, B Lacar, S Pak, 10.1101/2022.03.04.22271541Lituiev D, Lacar B, Pak S, et al. Automatic Extraction of Social Determinants of Health from Medical Notes of Chronic Lower Back Pain Patients. 2022;:2022.03.04.22271541. doi:10.1101/2022.03.04.22271541
Examining the use, contents, and quality of freetext tobacco use documentation in the electronic health record. E S Chen, E W Carter, I N Sarkar, AMIA Annual Symposium Proceedings. American Medical Informatics Association. 366Chen ES, Carter EW, Sarkar IN, et al. Examining the use, contents, and quality of free- text tobacco use documentation in the electronic health record. In: AMIA Annual Symposium Proceedings. American Medical Informatics Association 2014. 366.
Mining 100 million notes to find homelessness and adverse childhood experiences: 2 case studies of rare and severe social determinants of health in electronic health records. C A Bejan, J Angiolillo, D Conway, Journal of the American Medical Informatics Association. 25Bejan CA, Angiolillo J, Conway D, et al. Mining 100 million notes to find homelessness and adverse childhood experiences: 2 case studies of rare and severe social determinants of health in electronic health records. Journal of the American Medical Informatics Association 2018;25:61-71.
A survey of topic modeling in text mining. R Alghamdi, K Alfalqi, Int J Adv Comput Sci Appl. 2015IJACSAAlghamdi R, Alfalqi K. A survey of topic modeling in text mining. Int J Adv Comput Sci Appl(IJACSA) 2015
Probabilistic Latent Semantic Analysis. T Hofmann, 10.48550/arXiv.1301.6705Hofmann T. Probabilistic Latent Semantic Analysis. 2013. doi:10.48550/arXiv.1301.6705
Indexing by latent semantic analysis. S Deerwester, S T Dumais, G W Furnas, Journal of the American society for information science. 41Deerwester S, Dumais ST, Furnas GW, et al. Indexing by latent semantic analysis. Journal of the American society for information science 1990;41:391-407.
. D M Blei, A Y Ng, M I Jordan, Latent dirichlet allocation. the Journal of machine Learning research. 3Blei DM, Ng AY, Jordan MI. Latent dirichlet allocation. the Journal of machine Learning research 2003;3:993-1022.
Empirical study of topic modeling in twitter. L Hong, B D Davison, Proceedings of the first workshop on social media analytics. the first workshop on social media analyticsHong L, Davison BD. Empirical study of topic modeling in twitter. In: Proceedings of the first workshop on social media analytics. 2010. 80-8.
Autonomous adaptive underwater exploration using online topic modeling. Y Girdhar, P Giguere, G Dudek, Experimental Robotics. SpringerGirdhar Y, Giguere P, Dudek G. Autonomous adaptive underwater exploration using online topic modeling. In: Experimental Robotics. Springer 2013. 789-802.
What is an opinion about? exploring political standpoints using opinion scoring model. B Chen, L Zhu, D Kifer, Twenty-Fourth AAAI Conference on Artificial Intelligence. Chen B, Zhu L, Kifer D, et al. What is an opinion about? exploring political standpoints using opinion scoring model. In: Twenty-Fourth AAAI Conference on Artificial Intelligence. 2010.
Documentation and review of social determinants of health data in the EHR: measures and associated insights. M Wang, M S Pantell, L M Gottlieb, 10.1093/jamia/ocab194Journal of the American Medical Informatics Association. 28Wang M, Pantell MS, Gottlieb LM, et al. Documentation and review of social determinants of health data in the EHR: measures and associated insights. Journal of the American Medical Informatics Association 2021;28:2608-16. doi:10.1093/jamia/ocab194
ARS. UCSF DeID CDW. R20220207. UCSF Data Resources. San Francisco12University of CaliforniaUniversity of California, San Francisco, ARS. UCSF DeID CDW. R20220207. UCSF Data Resources. https://data.ucsf.edu/research (accessed 12 Sep 2022).
Scikit-learn: Machine learning in Python. the. F Pedregosa, G Varoquaux, A Gramfort, Journal of machine Learning research. 12Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: Machine learning in Python. the Journal of machine Learning research 2011;12:2825-30.
Gensim-statistical semantics in python. Retrieved from gensim.org. R Řehůřek, P Sojka, Řehůřek R, Sojka P. Gensim-statistical semantics in python. Retrieved from gensim.org 2011.
Online learning for latent dirichlet allocation. advances in neural information processing systems. M Hoffman, F Bach, D Blei, 23Hoffman M, Bach F, Blei D. Online learning for latent dirichlet allocation. advances in neural information processing systems 2010;23.
NLTK: the natural language toolkit. S Bird, Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions. the COLING/ACL 2006 Interactive Presentation SessionsBird S. NLTK: the natural language toolkit. In: Proceedings of the COLING/ACL 2006 Interactive Presentation Sessions. 2006. 69-72.
Improving Latent Dirichlet Allocation: On Reliability of the Novel Method LDAPrototype. J Rieger, J Rahnenführer, C Jentsch, International Conference on Applications of Natural Language to Information Systems. SpringerRieger J, Rahnenführer J, Jentsch C. Improving Latent Dirichlet Allocation: On Reliability of the Novel Method LDAPrototype. In: International Conference on Applications of Natural Language to Information Systems. Springer 2020. 118-25.
Evaluating topic coherence measures. F Rosner, A Hinneburg, M Röder, arXiv:14036397arXiv preprintRosner F, Hinneburg A, Röder M, et al. Evaluating topic coherence measures. arXiv preprint arXiv:14036397 2014.
The distribution of the flora in the alpine zone. P Jaccard, New phytologist. 1Jaccard P. The distribution of the flora in the alpine zone. 1. New phytologist 1912;11:37- 50.
Full-Text or Abstract? Examining Topic Coherence Scores Using Latent Dirichlet Allocation. S Syed, M Spruit, 10.1109/DSAA.2017.612017 IEEE International Conference on Data Science and Advanced Analytics (DSAA. Syed S, Spruit M. Full-Text or Abstract? Examining Topic Coherence Scores Using Latent Dirichlet Allocation. In: 2017 IEEE International Conference on Data Science and Advanced Analytics (DSAA). 2017. 165-74. doi:10.1109/DSAA.2017.61
Embedding projector - visualization of high-dimensional data. Nikhil Thorat, Charles Nicholson, Big Picture. Embedding projector - visualization of high-dimensional data. http://projector.tensorflow.org (accessed 10 Nov 2022).
International Statistical Classification of Diseases and related health problems: Alphabetical index. World Health Organization. W H Organization, Organization WH. International Statistical Classification of Diseases and related health problems: Alphabetical index. World Health Organization 2004.
| [
"https://github.com/ShenghuanSun/LDA_TM"
] |
[
"The Effects of In-Domain Corpus Size on pre-training BERT",
"The Effects of In-Domain Corpus Size on pre-training BERT"
] | [
"Chris Sanchez \nMicrosoft Corporation\n20191RestonVirginia\n",
"Zheyuan Zhang \nColeridge Initiative\n\n"
] | [
"Microsoft Corporation\n20191RestonVirginia",
"Coleridge Initiative\n"
] | [] | Many prior language modeling efforts have shown that pre-training on an in-domain corpus can significantly improve performance on downstream domain-specific NLP tasks. However, the difficulties associated with collecting enough in-domain data might discourage researchers from approaching this pre-training task. In this paper, we conducted a series of experiments by pre-training Bidirectional Encoder Representations from Transformers (BERT) with different sizes of biomedical corpora. The results demonstrate that pre-training on a relatively small amount of in-domain data (4GB) with limited training steps, can lead to better performance on downstream domainspecific NLP tasks compared with fine-tuning models pre-trained on general corpora. 1 | 10.48550/arxiv.2212.07914 | [
"https://export.arxiv.org/pdf/2212.07914v1.pdf"
] | 254,685,547 | 2212.07914 | 64dcb27e5c31c8567725c7f67000feaa487c32e0 |
The Effects of In-Domain Corpus Size on pre-training BERT
Chris Sanchez
Microsoft Corporation
20191RestonVirginia
Zheyuan Zhang
Coleridge Initiative
The Effects of In-Domain Corpus Size on pre-training BERT
Many prior language modeling efforts have shown that pre-training on an in-domain corpus can significantly improve performance on downstream domain-specific NLP tasks. However, the difficulties associated with collecting enough in-domain data might discourage researchers from approaching this pre-training task. In this paper, we conducted a series of experiments by pre-training Bidirectional Encoder Representations from Transformers (BERT) with different sizes of biomedical corpora. The results demonstrate that pre-training on a relatively small amount of in-domain data (4GB) with limited training steps, can lead to better performance on downstream domainspecific NLP tasks compared with fine-tuning models pre-trained on general corpora. 1
Introduction
Pre-training large neural language models based on Transformers (Vaswani et al., 2017), such as Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) and its variants (Lan et al., 2019), has proven to be an excellent strategy and achieved state-of-the-art results on many downstream natural language processing (NLP) tasks. Most models focused their pre-training efforts on general-domain text. For example, the original BERT model was trained on Wikipedia and the BookCorpus (Zhu et al., 2015). Many subsequent efforts focused on adding additional text to the pre-training process to create even larger models with the intent of improving model performance (Raffel et al., 2019). However, recent work has shown that, given the general nature of the corpora these models were pre-trained on, they do not perform as well when introduced to domain-specific corpora, as found in fields such as biomedicine, law, or finance. Several efforts have demonstrated that by pre-training on domain-specific corpora, either from scratch or through continual pre-training, these same models achieve much better performance on in-domain downstream NLP tasks (Beltagy et al., 2019; Lee et al., 2020; Huang et al., 2019). The success of these domain-specific BERT models has encouraged practitioners to explore pre-training BERT on corpora in their respective domains to obtain better-performing language models and to better tackle their in-domain NLP tasks. One of the biggest challenges for NLP practitioners, however, lies in the lack of large, readily available domain-specific corpora; for reference, the original BERT was pre-trained on 3.3 billion tokens, or roughly 20 GB of uncompressed raw text. To get enough in-domain text data for pre-training, practitioners must resort to what is available either "in-house" or through public or private resources. Web scraping is one oft-cited method used to gather publicly available documents to increase one's in-domain training corpus. For example, the LEGAL-BERT (Chalkidis et al., 2020) authors scraped publicly available legal text from six different sources, achieving a total corpus size of 12 GB. Nevertheless, this data collection process is laborious and time-consuming and could discourage researchers from conducting such experiments for fear of being unable to collect enough data. On the other hand, it would also be a waste of resources if, after all the data is collected, it turns out that the data is still not enough for pre-training and the model ends up performing poorly.
To mitigate these potential situations, we conducted a series of experiments that pre-train the BERT model on different in-domain corpus sizes and evaluate the resulting language models on multiple downstream, in-domain NLP tasks. Because the biomedical field has a rich history of NLP work, and therefore several readily available datasets for model fine-tuning and testing, we chose biomedicine as our example domain. Using this example, we demonstrate the smallest amount of in-domain data required to see performance improvements over generally trained BERT models across a variety of domain-specific tasks. We believe this work is useful to practitioners who are considering pre-training their own in-domain BERT model from scratch. They can use our work to inform their own cost-benefit analysis as they consider whether they have enough resources (data/compute/time) and whether the potential gains are worth the pre-training undertaking in the first place.
Related work
Many prior works have addressed domain adaptation of the BERT model. There are, in general, three possible approaches: a) One could fine-tune a BERT model on in-domain datasets, as suggested by the original BERT authors (Devlin et al., 2018). However, simple fine-tuning usually leads to unsatisfactory performance in specific domains where the vocabulary distribution differs from that of general text, such as medical clinical notes, legal notes, or biomedical literature (Lee et al., 2020; Huang et al., 2019; Chalkidis et al., 2020). b) Intuitively, one could pre-train BERT from scratch using domain-specific corpora with a new vocabulary. This method has proven effective in many specific domains with abundant specialized vocabulary or particular syntax. Even in domains that are not usually considered distinctly different from general text, such as Twitter and Yelp, pre-training on a domain-specific corpus still helps improve downstream performance (Dai et al., 2020). The shortcoming, though, is that this approach typically requires a large amount of data and can be expensive in terms of compute resources and the time/labor costs required to pre-train a model from scratch (Tai et al., 2020). c) Another recent prevailing method is mixed-domain pre-training, where a pre-trained BERT model is continually pre-trained on in-domain data, starting from a predefined general model checkpoint. This method assumes the general text is still helpful, and the goal is to improve model performance without having to pre-train the model from scratch, thereby reducing the required in-domain corpus size. Several works have explored this method, either directly using BERT's original vocabulary (Lee et al., 2020) or incorporating a set of new domain vocabulary into the existing model (Tai et al., 2020). Though this method has proven effective, other works point out that pre-training entirely from scratch using only in-domain corpora can significantly outperform models using the continual pre-training approach (Gu et al., 2021). Many experiments have been conducted to compare the performance of these methods (Gu et al., 2021) and the effects of domain-specific vocabulary and model size (Tai et al., 2020). But to the best of our knowledge, no prior work has systematically analyzed the effect of the pre-training corpus size itself.
Data
Pre-training corpus
The pre-training corpus was collected from the National Institutes of Health's National Library of Medicine PubMed Central repository via the AWS public registry (Sayers et al., 2021). PubMed is a free search engine providing access to papers and scholarly articles primarily focused on biomedical and life sciences topics. The data includes PubMed abstracts and PubMed Central full-text articles. For our experiment, we collected 16 GB of full-text PubMed articles, discarded all foreign-language articles, and used only the main body of the text.
As part of our preprocessing pipeline, we created a vocabulary based on the PubMed corpus, with the vocabulary size set at 30,500 to mimic the original BERT. Of note, the overlap between the new tokenized vocabulary and the original BERT's vocabulary is only 52%, showcasing the dramatic difference between biomedical jargon and the words used in general corpora. All words were tokenized as lowercase.
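A sketch of how such an in-domain WordPiece vocabulary can be built and compared against the original BERT vocabulary is shown below, using the HuggingFace tokenizers library; the file names and training settings are assumptions, not the authors' exact configuration:

from tokenizers import BertWordPieceTokenizer
from transformers import BertTokenizerFast

tokenizer = BertWordPieceTokenizer(lowercase=True)
tokenizer.train(files=["pubmed_corpus.txt"], vocab_size=30500)   # assumed corpus file
tokenizer.save_model(".", "pubmed-uncased")                      # writes pubmed-uncased-vocab.txt

pubmed_vocab = set(tokenizer.get_vocab())
bert_vocab = set(BertTokenizerFast.from_pretrained("bert-base-uncased").get_vocab())
overlap = len(pubmed_vocab & bert_vocab) / len(pubmed_vocab)
print(f"Vocabulary overlap with bert-base-uncased: {overlap:.0%}")  # ~52% reported in the text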
Fine-tuning tasks
The best way to evaluate the effects of model pre-training is to compare the resulting model performance on a range of in-domain NLP tasks against the original BERT model. In order to systematically evaluate the performance of the models, we selected a subset of tasks from the Biomedical Language Understanding & Reasoning Benchmark (BLURB), which comprises a comprehensive set of biomedical NLP tasks from publicly available datasets (Gu et al., 2021). The tasks we chose for our experiment include Named Entity Recognition (NER), Question Answering (QA), and Document Classification. We list the datasets we used in Table 1, along with detailed descriptions of the tasks below.

Table 2: Performance comparison of pre-trained language models. The models are evaluated on the tasks using the same fine-tuning process. All of our experimental models are pre-trained for 67K steps.
NCBI-disease
The National Center for Biotechnology Information Disease corpus (Dogan et al., 2014) is fully annotated at the mention and concept level to serve as a research resource for the biomedical natural language processing community. It contains 793 PubMed abstracts and 6,892 annotated disease mentions linked to 790 unique disease concepts. For this task, we used the train, development, and test splits provided by the paper's authors.
PubMedQA
PubMedQA (Jin et al., 2019) is a biomedical question answering dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe using the corresponding abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled, and 211.3k artificially generated QA instances. This task has many training options; for simplicity, we used only the labeled data and the train/test splits provided by the authors. Long answers are also available in the data, but we did not include them in the experiments.
HoC
The Hallmarks of Cancer corpus (Hanahan and Weinberg, 2000) contains annotations of PubMed abstracts with labels signifying specific cancer hallmarks. There are 37 fine-grained hallmarks, but we focused only on the top level, which comprises 10 groups in total. We had to create our own train/test splits because they are not provided with the paper. Although the original dataset provides sentence-level annotations, we follow common practice and evaluate at the abstract level (Zhang, 2014).
Experiment Setup
For the pre-training phase, the models were pre-trained only with the masked language modeling (MLM) objective (masking 15% of tokens), as a review of the literature indicated that next-sentence prediction (NSP) is not a necessary loss objective for achieving state-of-the-art results on downstream tasks. In order to observe the effect of corpus size on model performance, we segmented the data into 4 GB chunks, representing approximately 400,000 documents per chunk. The literature indicates that 20 GB of text data is a good benchmark for pre-training a domain-specialized BERT model, and we therefore experimented with pre-training runs on corpus sizes of 4 GB, 8 GB, and 12 GB. Each corpus was an additive version of the previous one; for example, the 8 GB corpus consisted of the original 4 GB corpus plus an additional 4 GB of raw text. For the downstream tasks, we fine-tuned an original BERT model (bert-base-uncased) on each task to set a baseline and took note of the hyperparameters used. We then conducted the same fine-tuning with the models pre-trained on 4 GB, 8 GB, and 12 GB of PubMed data, using the same hyperparameters as for the original BERT model. We also fine-tuned our pre-trained models on the downstream tasks at various pre-training step counts to showcase the effect of training duration on model performance. Finally, we report the average scores over five runs for each model on each fine-tuning task.
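A minimal sketch of this MLM-only pre-training setup with HuggingFace Transformers is shown below (15% masking, no NSP head); hyperparameters other than those stated in the text, as well as the pre-tokenized dataset variable, are illustrative assumptions:

from transformers import (BertConfig, BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = BertTokenizerFast(vocab_file="pubmed-uncased-vocab.txt", do_lower_case=True)
model = BertForMaskedLM(BertConfig(vocab_size=tokenizer.vocab_size))   # trained from scratch

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm=True, mlm_probability=0.15)  # 15% masking, no NSP
args = TrainingArguments(output_dir="bert-pubmed-4gb",
                         per_device_train_batch_size=14,   # 14 x 8 GPUs = effective batch of 112
                         max_steps=130_000,
                         save_steps=10_000,
                         logging_steps=1_000)

trainer = Trainer(model=model, args=args, data_collator=collator,
                  train_dataset=tokenized_dataset)          # tokenized_dataset assumed to exist
trainer.train()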
The experiments were conducted on Microsoft Azure virtual machines, specifically the Standard ND40rs_v2 model, which consists of 8 x NVIDIA V100 GPUs. At a batch size of 112 and a corpus size of 4 GB, it took roughly 48 hours to pre-train a model up to 130,000 steps.
Results
We present the results of domain-specific pre-training from scratch on biomedical NLP applications. In this paper, we compare our experimental results against the original BERT model, which was trained on a general corpus of 20 GB of data, as well as PubMedBERT, a mature domain-specialized model also pre-trained on a PubMed corpus, as a reference. The results are presented from two perspectives. Table 2 summarizes the results of our in-domain models pre-trained on three different corpus sizes for 67K steps against the general BERT baseline. We can observe, not surprisingly, that the general trend indicates that the larger the corpus size and the longer the model is pre-trained, the better the results tend to be. However, we can also see that the improvement between the 8 GB and 12 GB models is less pronounced than that between the 4 GB and 8 GB models. The performance of the 12 GB model is fairly close to, and on some tasks even slightly better than, the performance of the PubMedBERT model, even though PubMedBERT was pre-trained on approximately 21 GB of in-domain data.
It is worth noting that even though the results of the model trained on 4 GB of data are lower than those of the models pre-trained on a larger corpus, its performance is just as good as, if not better than, that of the general BERT model pre-trained on 20 GB of data for 1 million steps. Table 3 shows the results of the 4 GB model after different numbers of training steps. We can see from the results that even after one pass through the data (3,500 steps at a batch size of 112), the model is learning to represent the in-domain language.
As mentioned previously, biomedical text is drastically different from general text, and we postulate that the tokenization of in-domain data using an in-domain vocabulary contributes to the improvements in model performance, particularly with sequence classification tasks. The model is able to learn whole word representations of terms common to the biomedical domain, such as "gastrointestinal", which would otherwise be broken up into several sub-word tokens if a general vocabulary were used.
For practitioners, we present a simple cost-benefit analysis using our work as an example. Achieving a roughly 2% improvement in downstream tasks over general BERT required 48 hours of in-domain pre-training. Another 48 hours of training led to a further 1-1.5% jump in performance. At a run rate of $22.33/hour (for 8 x V100s on Azure cloud), the financial cost of the initial performance boost is $1,071; if pre-training is run for 96 hours, the cost is $2,143. As shown in Table 4, the general trend is that the longer you train, the more you spend per percentage point of improvement. We note that this analysis is representative of the experiment we conducted and can easily be improved upon by adding optimization techniques such as Whole Word Masking, faster GPUs, and/or deep learning optimizations such as DeepSpeed (Rajbhandari et al., 2022) or NVIDIA mixed precision (Micikevicius et al., 2017).
In this paper, we pre-trained the BERT model on different sizes of in-domain corpus and compared the results with the original BERT model. The results demonstrate that even pre-training on a relatively small amount of in-domain data (4 GB) with limited training steps can lead to better performance on downstream domain-specific NLP tasks compared with fine-tuning models pre-trained on general corpora. We hope this work encourages researchers and practitioners who want to pre-train BERT models to solve tasks in specialized domains but lack access to voluminous stores of raw text data.
Future Work
For future work, it would be useful to compare the performance of models adapted through continual pre-training vs. models pre-trained from scratch on comparable corpus sizes, thereby working towards a generalized inflection point against which practitioners can weigh their options, taking into account the text data available, compute resources, time available, and desired model performance.
Another area that was not explored in our experiment is the effect of vocabulary size on model performance as a function of the corpus size. We set our vocabulary size at 30,500 to mimic the original BERT model, but as pointed out the original BERT model was trained on 20 GB of data. We do not know if our models would have benefited from a smaller vocabulary size based on a ratio between the corpus size and vocab size.
We also readily point out that there are several areas for improvement in our results, starting with using Whole Word Masking as the word-masking scheme. This technique was not used in our experiments but could easily be integrated into the preprocessing pipeline by future practitioners. We did not experiment with different batch sizes; we simply set the batch size at the maximum memory load our GPUs could handle in order to reduce training time, so there is considerable room for improvement in determining the ideal batch size given a set corpus size. Finally, we would have preferred to test our pre-trained models on a wider variety of downstream tasks to determine whether our pre-training method is robust across several dimensions. Due to the time constraints on our project, we were unable to do so, but we leave additional fine-tuning on a wider variety of tasks (the Biomedical Language Understanding & Reasoning Benchmark (BLURB) (Gu et al., 2021) is a great place to start) for future work. We note that performance gains from in-domain pre-training on some downstream NLP tasks do not necessarily imply that a practitioner will see gains on other, unrelated downstream NLP tasks. Results on non-tested downstream tasks must be empirically validated.
Table 1: Datasets for fine-tuning tasks and their evaluation metrics.

Table 3: Performance comparison between the general BERT baseline, the PubMedBERT paper, and different steps of our model pre-trained on 4 GB of data.

Table 4: The performance improvement and its approximate cost.
Price/hour (8 x V100s) | Hours Trained | Total Cost | Approximate Score Boost | Price per % Boost
$22.33 | 48 | $1,071.84 | 2% | $535
$22.33 | 96 | $2,143.68 | 3.25% | $659
GitHub repo: https://github.com/JasonZhangzy1757/theeffect-of-domain-corpus-size-for-pretraining
AcknowledgementWe thank our instructor Prof. Chris Potts and our Course Facilitator Ankit Chadha for taking the time to advise us over the course of this project on matters both great and small. We appreciate their wisdom and candor. We also want to thank Hoifung Poon and Naoto Usuyama from Microsoft Research, as well as Kexin Huang, the clinical-BERT author, who sparked our motivation to attempt this undertaking in the first place. We are grateful for their advice at the initial stages of our research, which set us on a good trajectory.
Scibert: A pretrained language model for scientific text. Iz Beltagy, Kyle Lo, Arman Cohan, arXiv:1903.10676arXiv preprintIz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scib- ert: A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676.
Prodromos Malakasiotis. Ilias Chalkidis, Manos Fergadiotis, arXiv:2010.02559Nikolaos Aletras, and Ion Androutsopoulos. 2020. Legal-bert: The muppets straight out of law school. arXiv preprintIlias Chalkidis, Manos Fergadiotis, Prodromos Malaka- siotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. Legal-bert: The muppets straight out of law school. arXiv preprint arXiv:2010.02559.
Cost-effective selection of pretraining data: A case study of pretraining bert on social media. Xiang Dai, Sarvnaz Karimi, Ben Hachey, Cecile Paris, arXiv:2010.01150arXiv preprintXiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2020. Cost-effective selection of pretraining data: A case study of pretraining bert on social me- dia. arXiv preprint arXiv:2010.01150.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, arXiv:1810.04805Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprintJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.
Ncbi disease corpus: a resource for disease name recognition and concept normalization. Robert Rezarta Islamaj Dogan, Zhiyong Leaman, Lu, Journal of biomedical informatics. 47Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. Ncbi disease corpus: a resource for dis- ease name recognition and concept normalization. Journal of biomedical informatics, 47:1-10.
Ml-net: multi-label classification of biomedical texts with deep neural networks. Jingcheng Du, Qingyu Chen, Yifan Peng, Yang Xiang, Cui Tao, Zhiyong Lu, Journal of the American Medical Informatics Association. 2611Jingcheng Du, Qingyu Chen, Yifan Peng, Yang Xiang, Cui Tao, and Zhiyong Lu. 2019. Ml-net: multi-label classification of biomedical texts with deep neural networks. Journal of the American Medical Infor- matics Association, 26(11):1279-1285.
Domainspecific language model pretraining for biomedical natural language processing. Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, Hoifung Poon, ACM Transactions on Computing for Healthcare. 31Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Domain- specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1):1-23.
The hallmarks of cancer. cell. Douglas Hanahan, A Robert, Weinberg, 100Douglas Hanahan and Robert A Weinberg. 2000. The hallmarks of cancer. cell, 100(1):57-70.
Clinicalbert: Modeling clinical notes and predicting hospital readmission. Kexin Huang, Jaan Altosaar, Rajesh Ranganath, arXiv:1904.05342arXiv preprintKexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2019. Clinicalbert: Modeling clinical notes and predicting hospital readmission. arXiv preprint arXiv:1904.05342.
Qiao Jin, Bhuwan Dhingra, Zhengping Liu, W William, Xinghua Cohen, Lu, arXiv:1909.06146Pubmedqa: A dataset for biomedical research question answering. arXiv preprintQiao Jin, Bhuwan Dhingra, Zhengping Liu, William W Cohen, and Xinghua Lu. 2019. Pubmedqa: A dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146.
Albert: A lite bert for self-supervised learning of language representations. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut, arXiv:1909.11942arXiv preprintZhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- ing of language representations. arXiv preprint arXiv:1909.11942.
Biobert: a pre-trained biomedical language representation model for biomedical text mining. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, Jaewoo Kang, Bioinformatics. 364Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomed- ical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.
| [
"https://github.com/JasonZhangzy1757/theeffect-of-domain-corpus-size-for-pretraining"
] |
[
"On the Context-Free Ambiguity of Emoji",
"On the Context-Free Ambiguity of Emoji"
] | [
"Justyna Częstochowska \nEPFL\nUniversity of Zurich\nUniversity of Lausanne\n\n",
"Kristina Gligorić \nEPFL\nUniversity of Zurich\nUniversity of Lausanne\n\n",
"Maxime Peyrard \nEPFL\nUniversity of Zurich\nUniversity of Lausanne\n\n",
"Yann Mentha \nEPFL\nUniversity of Zurich\nUniversity of Lausanne\n\n",
"Michał Bień \nEPFL\nUniversity of Zurich\nUniversity of Lausanne\n\n",
"Andrea Grütter andrea.gruetter@es.uzh.ch \nEPFL\nUniversity of Zurich\nUniversity of Lausanne\n\n",
"Anita Auer \nEPFL\nUniversity of Zurich\nUniversity of Lausanne\n\n",
"Aris Xanthos aris.xanthos@unil.ch \nEPFL\nUniversity of Zurich\nUniversity of Lausanne\n\n",
"Robert West robert.west@epfl.ch \nEPFL\nUniversity of Zurich\nUniversity of Lausanne\n\n"
] | [
"EPFL\nUniversity of Zurich\nUniversity of Lausanne\n",
"EPFL\nUniversity of Zurich\nUniversity of Lausanne\n",
"EPFL\nUniversity of Zurich\nUniversity of Lausanne\n",
"EPFL\nUniversity of Zurich\nUniversity of Lausanne\n",
"EPFL\nUniversity of Zurich\nUniversity of Lausanne\n",
"EPFL\nUniversity of Zurich\nUniversity of Lausanne\n",
"EPFL\nUniversity of Zurich\nUniversity of Lausanne\n",
"EPFL\nUniversity of Zurich\nUniversity of Lausanne\n",
"EPFL\nUniversity of Zurich\nUniversity of Lausanne\n"
] | [] | Due to their pictographic nature, emojis come with baked-in, grounded semantics. Although this makes emojis promising candidates for new forms of more accessible communication, it is still unknown to what degree humans agree on the inherent meaning of emojis when encountering them outside of concrete textual contexts. To bridge this gap, we collected a crowdsourced dataset (made publicly available) of one-word descriptions for 1,289 emojis presented to participants with no surrounding text. The emojis and their interpretations were then examined for ambiguity. We find that, with 30 annotations per emoji, 16 emojis (1.2%) are completely unambiguous, whereas 55 emojis (4.3%) are so ambiguous that the variation in their descriptions is as high as that in randomly chosen descriptions. Most emojis lie between these two extremes. Furthermore, investigating the ambiguity of different types of emojis, we find that emojis representing symbols from established, yet not cross-culturally familiar code books (e.g., zodiac signs, Chinese characters) are most ambiguous. We conclude by discussing design implications. | 10.1609/icwsm.v16i1.19393 | [
"https://arxiv.org/pdf/2201.06302v2.pdf"
] | 247,958,256 | 2201.06302 | d58051264d00ea87995f495a72adce6ab4edcf4c |
On the Context-Free Ambiguity of Emoji
Justyna Częstochowska
EPFL
University of Zurich
University of Lausanne
Kristina Gligorić
EPFL
University of Zurich
University of Lausanne
Maxime Peyrard
EPFL
University of Zurich
University of Lausanne
Yann Mentha
EPFL
University of Zurich
University of Lausanne
Michał Bień
EPFL
University of Zurich
University of Lausanne
Andrea Grütter andrea.gruetter@es.uzh.ch
EPFL
University of Zurich
University of Lausanne
Anita Auer
EPFL
University of Zurich
University of Lausanne
Aris Xanthos aris.xanthos@unil.ch
EPFL
University of Zurich
University of Lausanne
Robert West robert.west@epfl.ch
EPFL
University of Zurich
University of Lausanne
On the Context-Free Ambiguity of Emoji
Due to their pictographic nature, emojis come with baked-in, grounded semantics. Although this makes emojis promising candidates for new forms of more accessible communication, it is still unknown to what degree humans agree on the inherent meaning of emojis when encountering them outside of concrete textual contexts. To bridge this gap, we collected a crowdsourced dataset (made publicly available) of one-word descriptions for 1,289 emojis presented to participants with no surrounding text. The emojis and their interpretations were then examined for ambiguity. We find that, with 30 annotations per emoji, 16 emojis (1.2%) are completely unambiguous, whereas 55 emojis (4.3%) are so ambiguous that the variation in their descriptions is as high as that in randomly chosen descriptions. Most emojis lie between these two extremes. Furthermore, investigating the ambiguity of different types of emojis, we find that emojis representing symbols from established, yet not cross-culturally familiar code books (e.g., zodiac signs, Chinese characters) are most ambiguous. We conclude by discussing design implications.
Introduction
For over a decade, emojis have been playing an increasingly important role in online communication. As of September 2021, there are 3,633 emojis in the Unicode standard (Unicode 2021), and the number is growing, providing users with more ways to express increasingly complicated concepts. Consequently, emojis have received much attention from researchers. Various fields, including natural language processing, human-computer interaction, and Web and social media research, study the usage and function of emojis.
Beyond today's prevalent use cases (social media and instant messaging), emojis have untapped potential for facilitating communication in other contexts as well. While letters, syllables, and words are arbitrary and highly abstract constructs that require a long time to master, emojis come already packed with richly grounded semantics. Emojis can thus be leveraged, e.g., in learning and education (Gilles Doiron 2018) or to describe complex ideas to broad audiences (Thomason 2014).
However, it is unknown which emojis can be used for such goals. As a first step, it is necessary to establish how much people agree about the context-free interpretation of individual emojis. Doing so has broad implications for social media and Web research, communication studies, education, and more. Beyond research communities, identifying which emojis are ambiguous is helpful to online communities and emoji designers to prevent introducing emojis with a high potential for miscommunication. Additionally, studying context-free emoji semantics informs us which concepts can or cannot be easily communicated with emojis.
Despite the practical importance of these questions, it is difficult to approach them with available datasets. Social media content carries inherent selection biases, and emoji studies leveraging social media have been questioned for their generalizability (Herring and Dainas 2020). Additionally, as emojis are almost only used in context, it is difficult to infer context-free interpretations. To complicate matters further, the meaning of emojis on social media evolves with time , making it difficult to study their intrinsic semantics.
Here we ask: Do individuals interpret emojis similarly? Which emojis have the potential to be used in future communication scenarios, and to what extent? Previous work on the ambiguity of emojis focused on differences between platforms (Shurick and Daniel 2020) and their usage in context (Miller et al. 2017). Most closely related past studies focused on frequently used and anthropomorphic emojis (Shurick and Daniel 2020; Miller et al. 2016, 2017), discarding many available emojis. Whereas a lot is known about emoji sentiment and usage in context (Novak et al. 2015), less is known about emoji semantics beyond sentiment and context-free emoji interpretation. To bridge this gap, we designed and executed a crowdsourced study examining an exhaustive set of emojis, many of which are rarely used in online communication. We studied their interpretation in the absence of any textual context. Using the resulting novel dataset of emoji annotations, we address the following research questions: RQ1: To what degree do people agree when interpreting emojis? RQ2: What types of emojis are most and least ambiguous?
Related Work

Emojis: interpretation and meaning. Previous research has shown that emojis are often misunderstood (Miller et al. 2016, 2017). Misunderstanding is sometimes related to how the emoji's design is interpreted in context or the way it is shown on the receiving side. In particular, in 2016, Miller et al. examined interpretations of the 25 most popular anthropomorphic emojis without context, across five popular platforms. The study compared differences in sentiment and semantics to identify the most ambiguous emojis. In 2017, Miller et al. conducted a similar study comparing sentiment variability with and without context, for 10 anthropomorphic emojis. An extensive dataset of emoji senses was created by Wijeratne et al., linking Unicode emoji representations to their meanings automatically extracted from the Web. Recent studies of emoji meaning provide a longitudinal perspective (Barbieri et al. 2018b; Robertson et al. 2021). Our work studies the intrinsic ability of emojis to convey information, independent of the textual context they are used in. In contrast to Miller et al. (2016, 2017), who focused on small subsets of anthropomorphic emojis, we consider a far more exhaustive set of emojis (see Table 1).

The aspiration of an emoji-based language. There is growing interest in the linguistic purposes of emojis (Na'aman, Provenza, and Montoya 2017) and their potential to emerge as a graphical language (Ge and Herring 2018). There have been multiple informal initiatives to create an emoji language, such as the attempt to translate Moby Dick into a sequence of emojis. Such efforts demonstrate the potential for viewing emojis as the atomic units of a graphical and intuitive language that could remove accessibility barriers inherent in standard written natural languages.

Emojis, social media, and natural language processing. Social media researchers have been examining the ways social media users use, interpret, and express emotions and information through emojis. It is well known that emojis shape online language (Feldman et al. 2021; Pavalanathan and Eisenstein 2016). Emoji usage can also be a proxy for studying human behavior; e.g., emojis are a powerful indicator in the context of crisis events (Santhanam et al. 2018) and can be used to identify group belonging (Jones, Nurse, and Li 2021). Researchers have also been analyzing the use of gender and skin-tone modifiers (Barbieri and Camacho-Collados 2018; Robertson, Magdy, and Goldwater 2020, 2021). As emojis became a standard element of online language, a need to computationally process them emerged. Creating meaningful, latent emoji representations (Eisner et al. 2016) and emoji prediction tasks (Barbieri et al. 2018a) gained importance in NLP. We thus note that our annotations can be used to compute or augment emoji representations and thus support the social media and NLP research communities.
Methods and Data
Emoji selection. We selected emojis for our study as follows. Starting from all available 3,633 emojis, we removed letters, numbers, flags, and gendered and skin-toned anthropomorphic emojis, considering only neutral versions of such emojis. We also removed variations of the same emoji (e.g., a family with three or four members). This resulted in the final set of 1,289 emojis. Furthermore, we collected emoji categories from Emojipedia.org and hand-crafted a categorization extending the seven existing categories to 20 fine-grained types, outlined in Table 1.

Annotation process. We collected emoji interpretations using the Amazon Mechanical Turk (AMT) crowdsourcing platform. Each participant was asked to "Describe emojis with a single, accurate word". Each task consisted of 10 emojis. All participants had to be at least 18 years old, speak English, reside in the USA, have a 99% approval rate, and have completed at least 500 tasks on AMT before. Our annotator compensation was in line with ethical guidelines for AMT (Whiting, Hugh, and Bernstein 2019). Each emoji was annotated by 30 unique participants. This number was chosen via pilot studies (150 annotations for 12 emojis), which showed that, as the number of annotations increased beyond 30, the word distribution remained robust. Overall, we collected 38,670 annotations, for an average of 82.5 annotations per participant. In total, there were 445 unique participants. We asked participants to provide their age, gender, and mother tongue. The majority of annotators were native English speakers (97%). Participants' gender was well balanced (55% female, 44% male, 1% other or not stated). The average age was 38.8 (SD = 12.0).
Post-processing.
To improve the quality of annotations, we performed three post-processing steps: low-quality annotator detection, validation of honeypots, and spelling correction. We detected annotators with low annotation quality by identifying those who used the same word for all emojis in a task, and we discarded one annotator whose vocabulary size was less than 80% of the number of assigned emojis. To further ensure quality, one unquestionably non-ambiguous emoji was placed in every task. Annotations whose answers did not match any words from the expected set (e.g., "apple", "pizza", "carrot" for the corresponding honeypot emojis) were excluded. Finally, to account for spelling mistakes, we cross-checked word validity using the PyEnchant library.
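For concreteness, these filters can be sketched in Python as follows (an illustrative sketch, not our exact implementation; the honeypot word sets, thresholds other than the 80% rule described above, and all function names are ours, and only the dictionary check relies on the actual PyEnchant API):

import enchant  # PyEnchant dictionary used for the spelling cross-check

# Expected answers for the honeypot emojis (placeholders: the real keys would be
# the emoji characters themselves).
HONEYPOT_ANSWERS = {"<apple emoji>": {"apple"}, "<pizza emoji>": {"pizza"}, "<carrot emoji>": {"carrot"}}

def keep_annotator(n_assigned_emojis, annotations):
    # Discard annotators whose distinct vocabulary covers less than 80% of the
    # number of emojis they annotated (e.g., repeating the same word everywhere).
    return len(set(annotations)) >= 0.8 * n_assigned_emojis

def keep_task(task_answers):
    # task_answers maps each emoji in a task to its single-word description;
    # the task is rejected if a honeypot emoji was not described as expected.
    return all(answer.lower() in HONEYPOT_ANSWERS[emoji]
               for emoji, answer in task_answers.items() if emoji in HONEYPOT_ANSWERS)

def normalize_spelling(word, checker=enchant.Dict("en_US")):
    # Keep valid words as-is; replace misspellings with the top suggestion, if any.
    if checker.check(word):
        return word
    suggestions = checker.suggest(word)
    return suggestions[0] if suggestions else word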
Measuring semantic variation. We use word embeddings (i.e., representations of words as numerical vectors) to quantify semantic similarity. To measure the extent to which annotators agree about emoji meaning, we calculate the dispersion of emoji annotations in a similar way to Miller et al., using GloVe vectors (Pennington, Socher, and Manning 2014) of dimensionality 200 (Řehůřek and Sojka 2011). Let $V$ denote the set of distinct words used by respondents to annotate the considered emoji, which we will call the emoji's vocabulary; $f_v$ stands for the relative frequency of word $v$ in the emoji's annotations, and $v^* := \arg\max_{v \in V} f_v$ is the mode annotation, i.e., the most frequent word in $V$. We then define the emoji's semantic variation as the weighted average of the cosine distance between the embedding $e_v$ of each word $v \in V$ and the embedding $e_{v^*}$ of the mode annotation in $V$:

$$\text{semantic variation} = \sum_{v \in V} f_v \cdot \big(1 - \cos(e_v, e_{v^*})\big) \qquad (1)$$
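For illustration, Eq. (1) can be computed directly from the raw annotation lists. The following minimal Python sketch assumes pre-trained 200-dimensional GloVe vectors loaded through gensim's downloader (the specific pre-trained model named below is our assumption, not a detail reported here):

from collections import Counter
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-200")  # KeyedVectors; cosine similarity via .similarity()

def semantic_variation(annotations):
    # annotations: the list of single-word descriptions collected for one emoji.
    words = [w.lower() for w in annotations if w.lower() in glove]  # this sketch drops OOV words
    counts = Counter(words)
    total = sum(counts.values())
    mode_word, _ = counts.most_common(1)[0]            # v*: the most frequent annotation
    variation = 0.0
    for word, count in counts.items():
        f_v = count / total                             # relative frequency of word v
        cosine = glove.similarity(word, mode_word)      # cos(e_v, e_{v*})
        variation += f_v * (1.0 - cosine)               # weighted cosine distance to the mode
    return variation

# Toy example: low variation, since "hot" and "flame" are close to the mode "fire".
print(semantic_variation(["fire", "fire", "hot", "flame"]))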
Results
RQ1:
To what degree do people agree when interpreting emojis? For each emoji, we measure the consistency among the words chosen to describe it. Consider the example of the fire emoji , for which one annotator used the word "hot" and another, "fire". Since the terms are different, the annotators-strictly speaking-do not agree. Yet, the words "hot" and "fire" are semantically close. We aim to capture such similarities via the notion of semantic variation (Eq. 1). To detect semantic variations significantly different from random, we compute the semantic variation of a random baseline. We sample n = 30 random words from the distribution of vocabulary across all annotations and calculate the semantic variation. We repeat the process 1,000 times to compute 95% confidence intervals (CI) and obtain a baseline semantic variation of 0.69 (95% CI [0.57, 0.88]). When an emoji's semantic variation falls in the 95% CI, its vocabulary could have equally well been a random set of words. In such cases, humans clearly do not agree in their interpretation. We repeat the same procedure to compute a random baseline with respect to vocabulary size (rather than variation), obtaining an average of 30 (i.e., all words are different), with 95% CI [27,30]. All emojis, with their semantic variation and vocabulary size, are presented in Fig. 1.
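The random baseline described above can be reproduced with a simple bootstrap over the pooled annotation vocabulary (a sketch under the assumption that semantic_variation is the function from the previous snippet and that all_annotations lists every collected word, so sampling from it follows the empirical word distribution):

import random
import numpy as np

def random_baseline(all_annotations, n_words=30, n_boot=1000, seed=0):
    rng = random.Random(seed)
    samples = [semantic_variation(rng.choices(all_annotations, k=n_words))
               for _ in range(n_boot)]
    low, high = np.percentile(samples, [2.5, 97.5])   # bootstrapped 95% confidence interval
    return float(np.mean(samples)), (float(low), float(high))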
First, we find a strong positive correlation between vocabulary size and semantic variation (Spearman's ρ = 0.84, p < 10^{-10}). We note that emojis with a small vocabulary size can range from very ambiguous to not ambiguous at all, while the ones with a rich vocabulary have higher ambiguity.
Second, we find that 16 out of 1,289 emojis (1.2%) reach a variation of 0, i.e., they were described with a single, unique word by all 30 annotators. These include . For communication applications, these emojis are likely to be useful: they are unlikely to introduce misunderstanding.
Third, the semantic variation of 55 out of 1,289 emojis (4.3%) falls into the random baseline confidence interval. Given the intuition that emojis come with built-in semantics, it is striking that some of them exhibit such low agreement. For future communication applications, these emojis are unlikely to be useful, as they introduce high levels of ambiguity.
In summary, human agreement about the context-free meaning of emojis ranges from completely unambiguous (16 emojis) to indistinguishable from random (55 emojis), with emojis covering the whole spectrum of ambiguity. Our dataset can guide communication applications in choosing appropriate emojis to facilitate understanding.
RQ2: What types of emojis are most and least ambiguous? We further investigate discrepancies in ambiguity, expecting different categories of emojis to exhibit different average variations. We report the average semantic variation per category in Fig. 1.
Our results add nuance to the findings of Miller et al. (2016), who found that anthropomorphic emojis can be more ambiguous than emojis characterizing things. In Fig. 1, the faces category takes a middle place in the semantic-variation ranking. Still, it is more ambiguous than objects. Categories with the lowest average variation are food & drink, clothes & accessories, nature, and hearts.
Interestingly, the top five most ambiguous categories are the ones that emerged from further dividing the original symbols category. Every emoji is, of course, a symbol, but whereas some emojis derive their meaning by immediately representing the shape of commonly known objects (e.g., ), those in the symbols category refer to entries from established code books of symbols of a less immediately pictographic nature (e.g.,
). Such emojis are, in a sense, "symbols of symbols" or "second-order symbols", and, without prior knowledge, may be impossible to interpret. In particular, astrological (zodiac) signs form the only category as ambiguous as the random baseline. They tended to be described with very different words or with names of other astrological signs. One could argue that astrological signs have an unambiguous mapping to their names, but without background knowledge, they yield ambiguous standalone interpretations. Similarly, emojis representing Japanese signs or having origins in Japanese culture (e.g., ) occupy the third place in terms of semantic variation, likely due to annotators' demographics and cultural background (United States residents, native English speakers). To describe emojis of Japanese, Chinese, and Korean characters (e.g., ), annotators consistently used words such as: japanese, chinese, asian, sign. This was not the case for some emojis (e.g., ) of whose Japanese origin annotators may not have been aware.
Based on these observations, we next investigate such "symbol-of-symbol" emojis in more detail. To quantify the degree to which an emoji belongs to an established code book of symbols (henceforth "symbolicalness"), two authors independently annotated all 1,289 emojis by indicating their level of agreement with the statement "This emoji is a symbol" on a five-point Likert scale where 1 corresponded to "absolutely disagree" and 5 to "absolutely agree".
We pre-established an annotation framework where we assigned levels from 5 to 1, respectively, to emojis representing objects and concepts that (5) are established symbols and can be encountered in everyday or specialized activities (e.g., ); (4) can have a symbolic meaning and be encountered in everyday or specialized activities (e.g., ); (3) may or may not have a symbolic meaning (e.g., );
(2) typically do not have a symbolic meaning (e.g., ); (1) are not established symbols (faces, gestures, people) (e.g., ). Following this framework, we obtained Kendall's τ = 0.8 (p = 1.55 × 10 −216 ) between the authors. We computed the average symbolicalness for each emoji and averaged the values for emojis within a category to obtain the category's symbolicalness rating. We represent the rating with a color scale in Fig. 1. There is a weak positive correlation (Spearman's ρ = 0.25, p = 1.61 × 10 −19 ) between semantic variation and symbolicalness. In addition, the most ambiguous categories of emojis indeed are the ones with the highest symbolicalness rating. Even though symbols are designed to facilitate communication, our results indicate that symbolic emojis can, maybe unintuitively, be ambiguous, as their interpretation requires specific prior knowledge.
In summary, we find that human agreement about the interpretation of emojis varies across different emoji types. The symbolicalness, or the extent to which an emoji is a symbol from an established code book of symbols, is an important dimension explaining the differences.
Discussion
Summary of main findings. Investigating whether people interpret emojis in the same way (RQ1), we find that emojis come with very different amounts of prepacked semantics. Some emojis are completely unambiguous, with all annotators describing them with the same word. On the opposite, others are as ambiguous as if their descriptions were drawn at random. To support the goal of using emojis to facilitate communication, the unambiguous emojis are the best candidates to bring direct benefits. Investigating what types of emojis are ambiguous (RQ2), we find that different types of emojis have very different levels of agreement in interpretation. An important dimension explaining the agreement differences is the degree to which an emoji belongs to an established code book of symbols. Emojis referring to symbols require background knowledge for interpretation and are less likely to be unambiguously recognized.
Concrete objects and things can easily be illustrated by an emoji, whereas abstract ideas and concepts are harder to represent without referring to symbolic ideas from shared cultural knowledge. Yet to support complex communication goals, it is necessary to refer to abstract ideas. This has to be via pre-established symbols, as there is no immediately pictographic way to represent abstract concepts such as peace ( ) or resistance ( ). Luckily, there exist ubiquitous symbols whose interpretations are widely agreed upon (e.g., for love). These symbols can be leveraged to convey complex ideas universally. Design implications. We highlight two mechanisms likely fueling measured variation in ambiguity and discuss their implications. First, the fact that concrete objects can more easily be universally described with emojis highlights the importance of considering the intended participants and their shared cultural background to appropriately choose the symbolic emojis to use. Second, emoji design is known to contribute to misinterpretations (Miller et al. 2017), since emojis are limited in size and need to be comprehensible even if displayed tiny; e.g., the "pine decoration" emoji contains a fair amount of details that are difficult to display on a small scale, further jeopardizing the understanding of the concept. Our data capturing empirical emoji ambiguity can thus help make emojis more accessible and user-friendly. Limitations and future work. Our goal is to provide initial measurements of the ambiguity of emojis. Therefore, our study is not without its limitations. All annotators provided a single word to describe an emoji. In the future, it will be interesting to extend the study to descriptions beyond one word. Also, all annotators were English speakers residing in the United States. Future work should generalize to other cultures and languages and understand how emoji ambiguity is associated with social media usage. Code and data. Code and data are publicly available at https://github.com/epfl-dlab/emoji-ambiguity.
Figure 1: Top: the relationship between semantic variation (x-axis) and vocabulary size (y-axis). Bottom: average semantic variation across emoji categories (cf. Table 1). Color represents the extent to which emojis within a category can be seen as belonging to an established code book of symbols. Black dashed lines represent random baselines and gray bands their bootstrapped 95% confidence intervals.
Table 1: Emoji categorization. Twenty categories, category descriptions, number of emojis, and three examples.
https://www.kickstarter.com/projects/fred/emoji-dick
Acknowledgments. This work was funded by Collaborative Research on Science and Society (CROSS), with further support from the Swiss National Science Foundation (grant 200021_185043), Microsoft, Google, and Facebook.
Barbieri, F.; Anke, L. E.; Camacho-Collados, J.; Schockaert, S.; and Saggion, H. 2018a. Interpretable Emoji Prediction via Label-Wise Attention LSTMs. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018, 4766-4771. Association for Computational Linguistics.

Barbieri, F.; and Camacho-Collados, J. 2018. How Gender and Skin Tone Modifiers Affect Emoji Semantics in Twitter. In Proceedings of the 7th Joint Conference on Lexical and Computational Semantics, 101-106.

Barbieri, F.; Marujo, L.; Karuturi, P.; and Brendel, W. 2018b. Exploring Emoji Usage and Prediction Through a Temporal Variation Lens. In Proceedings of the 1st International Workshop on Emoji Understanding and Applications in Social Media.

Eisner, B.; Rocktäschel, T.; Bošnjak, M.; and Riedel, S. 2016. emoji2vec: Learning Emoji Representations from their Description. In Proceedings of The Fourth International Workshop on Natural Language Processing for Social Media, 48-54.

Feldman, L. B.; Barach, E.; Srinivasan, V.; and Shaikh, S. 2021. Emojis and Words Work Together in the Service of Communication. In Workshop Proceedings of the 15th International AAAI Conference on Web and Social Media.

Ge, J.; and Herring, S. C. 2018. Communicative functions of emoji sequences on Sina Weibo. First Monday, 23(11).

Gilles Doiron, J. A. 2018. Emojis: Visual Communication in Higher Education. International Journal of Teaching, Education and Learning, 2(2): 1-11.

Herring, S. C.; and Dainas, A. R. 2020. Gender and Age Influences on Interpretation of Emoji Functions. ACM Transactions on Social Computing, 3(2): 1-26.

Jones, K.; Nurse, J. R.; and Li, S. 2021. The Shadowy Lives of Emojis: An Analysis of a Hacktivist Collective's Use of Emojis on Twitter. In Workshop Proceedings of the 15th International AAAI Conference on Web and Social Media.

Miller, H.; Kluver, D.; Thebault-Spieker, J.; Terveen, L.; and Hecht, B. 2017. Understanding Emoji Ambiguity in Context: The Role of Text in Emoji-Related Miscommunication. AAAI Conference on Web and Social Media (ICWSM).

Miller, H.; Thebault-Spieker, J.; Chang, S.; Johnson, I.; Terveen, L.; and Hecht, B. 2016. "Blissfully happy" or "ready to fight": Varying Interpretations of Emoji. AAAI Conference on Web and Social Media (ICWSM).

Na'aman, N.; Provenza, H.; and Montoya, O. 2017. MojiSem: Varying linguistic purposes of emoji in (Twitter) context. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics - Student Research Workshop, 136-141.

Novak, P. K.; Smailović, J.; Sluban, B.; and Mozetič, I. 2015. Sentiment of Emojis. PLoS ONE, 10(12).

Pavalanathan, U.; and Eisenstein, J. 2016. More emojis, less :) The competition for paralinguistic function in microblog writing. First Monday.

Pennington, J.; Socher, R.; and Manning, C. D. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1532-1543. Association for Computational Linguistics (ACL).

Robertson, A.; Liza, F. F.; Nguyen, D.; McGillivray, B.; and Hale, S. A. 2021. Semantic Journeys: Quantifying Change in Emoji Meaning from 2012-2018. In Workshop Proceedings of the 15th International AAAI Conference on Web and Social Media.

Robertson, A.; Magdy, W.; and Goldwater, S. 2020. Emoji Skin Tone Modifiers: Analyzing Variation in Usage on Social Media. ACM Transactions on Social Computing, 3(2): 1-25.

Robertson, A.; Magdy, W.; and Goldwater, S. 2021. Identity Signals in Emoji do not Influence Perception of Factual Truth on Twitter. In Workshop Proceedings of the 15th International AAAI Conference on Web and Social Media.

Santhanam, S.; Srinivasan, V.; Glass, S.; and Shaikh, S. 2018. I Stand With You: Using Emojis to Study Solidarity in Crisis Events. In Wijeratne, S.; Kiciman, E.; Saggion, H.; and Sheth, A., eds., Proceedings of the 1st International Workshop on Emoji Understanding and Applications in Social Media.

Shurick, A. A.; and Daniel, J. 2020. What's behind those smiling eyes: Examining emoji sentiment across vendors. Workshop Proceedings of the 14th International AAAI Conference on Web and Social Media.

Thomason, A. 2014. Finally! Academics Describe Their Research in Terms We Can Understand.

Unicode, I. 2021. Emoji Counts, v14.0.

Whiting, M. E.; Hugh, G.; and Bernstein, M. S. 2019. Fair Work: Crowd Work Minimum Wage with One Line of Code. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7: 197-206.

Wijeratne, S.; Balasuriya, L.; Sheth, A.; and Doran, D. 2017. EmojiNet: An Open Service and API for Emoji Sense Discovery. Proceedings of the 11th International Conference on Web and Social Media, ICWSM 2017, 437-446.

Řehůřek, R.; and Sojka, P. 2011. Gensim - python framework for vector space modelling. NLP Centre, Faculty of Informatics, Masaryk University, Brno, Czech Republic, 3(2).
| [
"https://github.com/epfl-dlab/emoji-ambiguity."
] |
[
"Harvesting comparable corpora and mining them for equivalent bilingual sentences using statistical classification and analogy- based heuristics",
"Harvesting comparable corpora and mining them for equivalent bilingual sentences using statistical classification and analogy- based heuristics"
] | [
"Krzysztof Wołk \nDepartment of Multimedia Polish -Japanese Academy of Information Technology\n\n",
"Emilia Rejmund erejmund@pja.edu.pl \nDepartment of Multimedia Polish -Japanese Academy of Information Technology\n\n",
"Krzysztof Marasek kmarasek@pja.edu.pl \nDepartment of Multimedia Polish -Japanese Academy of Information Technology\n\n"
] | [
"Department of Multimedia Polish -Japanese Academy of Information Technology\n",
"Department of Multimedia Polish -Japanese Academy of Information Technology\n",
"Department of Multimedia Polish -Japanese Academy of Information Technology\n"
] | [] | Parallel sentences are a relatively scarce but extremely useful resource for many applications including cross-lingual retrieval and statistical machine translation. This research explores our new methodologies for mining such data from previously obtained comparable corpora. The task is highly practical since non-parallel multilingual data exist in far greater quantities than parallel corpora, but parallel sentences are a much more useful resource. Here we propose a web crawling method for building subject-aligned comparable corpora from, e.g., Wikipedia dumps and the Euronews web page. The improvements in machine translation are shown on the Polish-English language pair for various text domains. We also tested another method of building parallel corpora based on comparable corpora data. It allows an existing corpus to be automatically broadened with new sentences from the subject domain of the corpora, based on analogies between them. | 10.1007/978-3-319-25252-0_46 | [
"https://arxiv.org/pdf/1511.06285v1.pdf"
] | 17,524,612 | 1511.06285 | bc41d42351dd340593e4920aa5cbc663d4e25dc1 |
Harvesting comparable corpora and mining them for equivalent bilingual sentences using statistical classification and analogy-based heuristics
Krzysztof Wołk
Department of Multimedia Polish -Japanese Academy of Information Technology
Emilia Rejmund erejmund@pja.edu.pl
Department of Multimedia Polish -Japanese Academy of Information Technology
Krzysztof Marasek kmarasek@pja.edu.pl
Department of Multimedia Polish -Japanese Academy of Information Technology
Harvesting comparable corpora and mining them for equivalent bilingual sentences using statistical classification and analogy-based heuristics
Parallel sentences are a relatively scarce but extremely useful resource for many applications including cross-lingual retrieval and statistical machine translation. This research explores our new methodologies for mining such data from previously obtained comparable corpora. The task is highly practical since non-parallel multilingual data exist in far greater quantities than parallel corpora, but parallel sentences are a much more useful resource. Here we propose a web crawling method for building subject-aligned comparable corpora from, e.g., Wikipedia dumps and the Euronews web page. The improvements in machine translation are shown on the Polish-English language pair for various text domains. We also tested another method of building parallel corpora based on comparable corpora data. It allows an existing corpus to be automatically broadened with new sentences from the subject domain of the corpora, based on analogies between them.
Introduction
In this article we present methodologies that allow us to obtain truly parallel corpora from data sources that are not aligned at the sentence level, such as noisy-parallel or comparable corpora [1]. For this purpose we used a set of specialized tools for obtaining, aligning, extracting and filtering text data, combined into a pipeline that allows us to complete the task. We present the results of our initial experiments based on text samples obtained from the Wikipedia dumps and the Euronews web page. We chose the Wikipedia as a source of the data because of the large number of documents that it provides (1,047,423 articles on PL Wiki and 4,524,017 on EN, at the time of writing this article). Furthermore, Wikipedia contains not only comparable documents, but also some documents that are translations of each other. The quality of our approach is measured by improvements in MT results.

The second method is based on sequential analogy detection: we seek to obtain parallel corpora from unaligned data. Such an approach has been presented in the literature [2] [3], but all applications concern pairs of languages with similar grammar, such as English-French or Chinese-Japanese. We apply this method to an English-Polish corpus. These two languages have very different grammar, which makes our approach innovative and allows the method to be easily extended to other language pairs. In our approach, to enhance the quality of the identified analogies, clusters of sequential analogies are sought.
State of the art
Two main approaches for building comparable corpora can be distinguished. Probably the most common approach is based on cross-lingual information retrieval. In the second approach, source documents are first translated using a machine translation system; the translated documents are then compared with documents written in the target language in order to find the most similar document pairs. The authors of [4] suggested obtaining only the title and some meta-information, such as publication date and time, for each document instead of its full contents, in order to reduce the cost of building the comparable corpora (CC). The cosine similarity of title term-frequency vectors was then used to match titles and the contents of matched pairs. An interesting idea for mining parallel data from Wikipedia was described in [5]. The authors propose in their work two separate approaches: the first idea is to use an online machine translation (MT) system to translate Dutch pages of the Wikipedia into English and to compare the original EN pages with the translated ones.

The authors of [6] leverage the BootCat method, which has proven to be fast and effective for corpus building. They extend this method by adding support for multilingual data and also present a pivot evaluation.

Interwiki links were exploited by Tyers and Pienaar in [7]. Based on the Wikipedia link structure, a bilingual dictionary is extracted. In their work they measured that the mismatch between linked Wikipedia pages varies depending on the language pair.

What is more, the authors of [8] introduce an automatic alignment method for parallel text fragments that uses a textual entailment technique and a phrase-based Statistical Machine Translation (SMT) system. The authors state that a significant improvement in SMT quality was obtained by using the extracted data (an increase in BLEU by 1.73).
Preparation of the data
Our procedure starts with a specialized web crawler implemented by us. Because the PL Wiki contains less data, and almost all of its articles have their correspondence on the EN Wiki, the program crawls the data starting from the non-English site first. The crawler can obtain and save bilingual articles for any language supported by the Wikipedia. The tool requires at least two Wikipedia dumps in different languages and information about the language links between the articles in the dumps (obtained from the interwiki links). For Euronews.com a standard web crawler was used; it was designed to use the Euronews.com archive page and, in a first phase, it generates a database of parallel articles in the two selected languages in order to collect comparable data from it. With this methodology we were able to obtain 4,498 topic-aligned articles from the Euronews and 492,906 from the Wikipedia.

For the experiments in statistical machine translation we chose the TED lectures domain, to be more specific the PL-EN TED corpora prepared for the IWSLT (International Workshop on Spoken Language Translation) 2014 evaluation campaign by the FBK (Fondazione Bruno Kessler). This domain is very wide and covers many unrelated subjects and areas. The data contains almost 2.5M untokenized words [9]. Additionally, we chose two narrower domains: the first parallel corpus is made out of PDF documents from the European Medicines Agency (EMEA) and medicine leaflets [10], and the second was extracted from the proceedings of the European Parliament (EUP) [11]. What is more, we also conducted experiments on the Basic Travel Expression Corpus (BTEC), a multilingual speech corpus containing tourism-related sentences similar to those that are usually found in phrasebooks for tourists going abroad [12]. Lastly, we used a corpus built from movie subtitles (OPEN) [10].

In Table 1 we present details about the number of unique words (WORDS) and their forms, as well as the number of bilingual sentence pairs (PAIRS).
As mentioned, the solution can be divided into three main steps. First the data is collected, then it is aligned at article level, and lastly the results of the alignment are mined for parallel sentences. Sentence alignment must be computationally feasible in order to be of practical use in various applications [13].
Parallel data mining
In order to extract the parallel sentence pairs we decided to try two different strategies. The first one builds on and extends the methods used in the Yalign tool, and the second is based on analogy detection.
The MT results we present in this article were obtained with the first strategy. The second method is still in development phase, nevertheless the initial results are promising and worth mentioning.
Improvements to Yalign's method
In the Yalign tool [14], an A* search algorithm [15] is used for sequence alignment to find an optimal alignment between the sentences in two given documents. Unfortunately, it cannot handle alignments that cross each other or alignments from two sentences into a single one [15]. To overcome this and other minor problems, and in order to improve mining quality, we used the Needleman-Wunsch algorithm (originally used for DNA sequences) instead. Because it would require N * M calls to the sentence similarity matrix, we implemented a GPU version of it for accelerated processing [16]. A classifier must be trained in order to determine whether a pair of sentences is a translation of each other or not. The particular classifier used in this research was a Support Vector Machine [17].
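For illustration, the dynamic-programming alignment can be sketched as follows (a plain CPU sketch, not our GPU implementation; sim(a, b) stands for the sentence-pair similarity produced by the trained classifier, and the gap penalty is an illustrative value):

def align_sentences(src, tgt, sim, gap=-0.5):
    # Global alignment of two sentence lists with a Needleman-Wunsch-style DP.
    n, m = len(src), len(tgt)
    score = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i][j] = max(
                score[i - 1][j - 1] + sim(src[i - 1], tgt[j - 1]),  # align the two sentences
                score[i - 1][j] + gap,                               # leave the source sentence unaligned
                score[i][j - 1] + gap,                               # leave the target sentence unaligned
            )
    # Trace back to recover the aligned sentence pairs.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if score[i][j] == score[i - 1][j - 1] + sim(src[i - 1], tgt[j - 1]):
            pairs.append((src[i - 1], tgt[j - 1]))
            i, j = i - 1, j - 1
        elif score[i][j] == score[i][j - 1] + gap:
            j -= 1
        else:
            i -= 1
    return list(reversed(pairs))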
What is more, our solution facilitates multithreading, which proved to speed up the mining by a factor of 5 in comparison with the standard Yalign tool (using a Core i7 CPU).
To train the classifier, good-quality parallel data was necessary, as well as a dictionary that includes translation probabilities. For this purpose we used the TED talks corpora [18], enhanced by us during the IWSLT'13 evaluation campaign [13]. In order to obtain the dictionary, we trained a phrase table and extracted the 1-grams from it. We used the MGIZA++ tool for word and phrase alignment. The lexical reordering was set to use the msd-bidirectional-fe method, and the symmetrization method was set to grow-diag-final-and for word alignment processing [20]. As bilingual training data we used the four corpora described previously. We obtained four different classifiers and repeated the mining procedure with each of them. The detailed results for the Wiki are shown in Table 2.
Analogy based method
This method is based on sequential analogy detection. Based on a parallel corpus, we detect analogies that exist in both languages. To enhance the quality of the identified analogies, clusters of sequential analogies are sought. However, our current research on the Wikipedia corpora shows that it is both extremely difficult and computationally expensive to seek out clusters of higher orders. Therefore, we restricted ourselves to simple analogies such as: A is to B in the same way as C is to D.
A:B::C:D
Such analogies are found using distance calculation. We seek such sentences that:
dist(A,B)=dist(C,D)
and
dist(A,C)=dist(B,D)
An additional constraint was added that requires the same relation between the occurrences of each character in the sentences: e.g., if the number of occurrences of the character "a" is equal to x in sentence A and equal to y in sentence B, then the same relation must hold between sentences C and D.
We used the Levenshtein metric in our distance calculation. We tried to apply it either directly to the characters of the sentence, or by treating each word in the sentence as an individual symbol and calculating the Levenshtein distance between the symbol-coded sentences. The latter approach was employed because this method had earlier been tested on the Chinese and Japanese languages [19], which use symbols to represent entire words.
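A minimal sketch of this test is given below (word-level Levenshtein distance plus one possible reading of the character-count constraint; the exact constraint used in our implementation may differ):

from collections import Counter

def levenshtein(a, b):
    # Edit distance between two sequences; here the sequences are lists of words.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = curr
    return prev[-1]

def same_char_relation(a, b, c, d):
    # One reading of the constraint: the per-character count differences between
    # A and B must equal those between C and D.
    diff_ab = Counter(a); diff_ab.subtract(Counter(b))
    diff_cd = Counter(c); diff_cd.subtract(Counter(d))
    return {k: v for k, v in diff_ab.items() if v} == {k: v for k, v in diff_cd.items() if v}

def is_sequential_analogy(a, b, c, d):
    A, B, C, D = a.split(), b.split(), c.split(), d.split()   # words treated as individual symbols
    return (levenshtein(A, B) == levenshtein(C, D)
            and levenshtein(A, C) == levenshtein(B, D)
            and same_char_relation(a, b, c, d))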
After clustering, the data from the clusters are compared with each other to find similarities between them. For each quadruple of sentences

A:B::C:D

we look for E and F such that

C:D::E:F and E:F::A:B

However, none were found in our corpus, therefore we restricted ourselves to small clusters with a size of 2 sentence pairs. In every cluster, matching sentences from the parallel corpus were identified. This let us generate new sentences similar to the ones already in our corpus and add them to broaden the resulting data set. For each identified sequential analogy, a rewriting model is constructed. This is achieved by string manipulation: the common prefixes and suffixes of the sentences are calculated using the LCS (Longest Common Subsequence) method. A sample rewriting model is shown in the following example (prefix and suffix are shown in bold):
Poproszę koc i poduszkę. → A blanket and a pillow, please.

Czy mogę poprosić o śmietankę i cukier? → Can I have cream and sugar?
The rewriting model consists of a prefix, a suffix, and their translations. It is now possible to construct a parallel corpus from a non-parallel, monolingual source. Each sentence in the corpus is tested for a match with the model; if the sentence contains the prefix and the suffix, it is considered a matching sentence.
Poproszę bilet. → A unknown, please.
In the matched sentence, some of the words remain untranslated, but the general meaning of the sentence is conveyed. The remaining words may be translated word-by-word, while the translated sentence remains grammatically correct.
bilet → ticket

Substituting unknown words with their translations, we are able to create a parallel corpus entry.
Poproszę bilet. → A ticket, please.
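One plausible reading of the rewriting-model construction is sketched below: the longest common word-level prefix and suffix of two analogous Polish sentences (and of their English translations) form the model, and the unmatched middle of a new sentence is translated word-by-word with a bilingual dictionary. The second sentence pair ("Poproszę kawę." / "A coffee, please.") and the toy dictionary are invented for illustration only:

def common_prefix(a, b):
    # Longest common word-level prefix of two tokenized sentences.
    out = []
    for x, y in zip(a, b):
        if x != y:
            break
        out.append(x)
    return out

def common_suffix(a, b):
    return common_prefix(a[::-1], b[::-1])[::-1]

def build_model(pl_a, pl_b, en_a, en_b):
    # The rewriting model: shared prefix/suffix on the Polish side and on the English side.
    return {"pl": (common_prefix(pl_a, pl_b), common_suffix(pl_a, pl_b)),
            "en": (common_prefix(en_a, en_b), common_suffix(en_a, en_b))}

def apply_model(model, pl_tokens, dictionary):
    (pl_pre, pl_suf), (en_pre, en_suf) = model["pl"], model["en"]
    if pl_tokens[:len(pl_pre)] != pl_pre or pl_tokens[len(pl_tokens) - len(pl_suf):] != pl_suf:
        return None  # the sentence does not match this rewriting model
    middle = pl_tokens[len(pl_pre):len(pl_tokens) - len(pl_suf)]
    return en_pre + [dictionary.get(w, w) for w in middle] + en_suf  # word-by-word fallback

model = build_model("Poproszę koc i poduszkę .".split(), "Poproszę kawę .".split(),
                    "A blanket and a pillow , please .".split(), "A coffee , please .".split())
print(apply_model(model, "Poproszę bilet .".split(), {"bilet": "ticket"}))
# ['A', 'ticket', ',', 'please', '.']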
As a result of the sequential-analogy-based method, we mined 8,128 rewriting models from the Wikipedia parallel corpus. This enabled the generation of 114,000 new sentence pairs to extend the parallel corpus. The sentences were generated from the Wikipedia comparable corpus, which is basically an extract of Wikipedia articles: we have articles in Polish and English on the same topic, but their sentences are not aligned in any particular way. We use the rewriting models to match sentences from the Polish article to sentences in the English one. Whenever a model can be successfully applied to a pair of sentences, this pair is considered parallel, resulting in the generation of a quasi-parallel corpus (quasi, since the sentences were aligned artificially using the approach described above). These parallel sentences can be used to extend parallel corpora in order to improve translation quality.
In order to evaluate the corpora, we divided each corpus into 200 segments and randomly selected 10 sentences from each segment for testing purposes. This methodology ensured that the test sets covered the entire corpus. The selected sentences were removed from the corpora. We trained the baseline system, as well as a system with training data extended with the Wikipedia corpora, and lastly we used Modified Moore-Lewis Filtering for domain adaptation of the Wikipedia corpora. Additionally, we used the monolingual part of the corpora as the language model and tried to adapt it to each corpus by using linear interpolation [2]. For scoring purposes we used four well-known metrics that show high correlation with human judgments: the Bilingual Evaluation Understudy (BLEU) [11], the U.S. National Institute of Standards & Technology (NIST) metric [20], the Metric for Evaluation of Translation with Explicit Ordering (METEOR) [8], and the Translation Error Rate (TER) [20].
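The test-set construction can be sketched as follows (an illustrative Python sketch assuming the corpus is large enough that every one of the 200 segments contains at least 10 sentence pairs):

import random

def split_train_test(pairs, n_segments=200, per_segment=10, seed=0):
    # Draw `per_segment` pairs from each of `n_segments` equal slices of the corpus,
    # then remove the drawn pairs from the training data.
    rng = random.Random(seed)
    seg_len = len(pairs) // n_segments
    test_idx = set()
    for s in range(n_segments):
        start = s * seg_len
        test_idx.update(rng.sample(range(start, start + seg_len), per_segment))
    test = [pairs[i] for i in sorted(test_idx)]
    train = [pair for i, pair in enumerate(pairs) if i not in test_idx]
    return train, test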
The baseline system testing was done using the Moses open-source SMT toolkit with its Experiment Management System (EMS) [16], with the settings described in [13].
Starting from the baseline system (BASE) tests in the PL-to-EN and EN-to-PL directions, we raised our scores by extending the language model (LM), interpolating it (ILM), extending the corpora with additional data (EXT), and filtering the additional data with Modified Moore-Lewis Filtering (MML) [2]. It must be noted that the extension of the language models was done on systems with corpora after MML filtration, and the LM and ILM experiments already contain the extended training data. The results of the experiments are shown in Table 3.
Conclusions
The results shown in Table 4 and Table 5, to be more specific the BLEU, METEOR and TER values on the TED corpus, were checked for whether the differences were relevant. We measured the variance due to the BASE and MML test set selection; it was calculated using bootstrap resampling for each test run. The result for BLEU was 0.5, and 0.3 and 0.6 for METEOR and TER, respectively. Results over 0 mean that there is a (to some extent) significant difference between the test sets, and indicate that a difference of this magnitude is likely to be generated again by some random translation process, which would most likely lead to better translation results in general [21]. The results of the SMT systems based only on the mined data were not too surprising. Firstly, they confirm the quality and high parallelism level of the corpora, which can be concluded from the translation quality, especially on the TED data set. Only a 2 BLEU point gap can be observed when comparing systems trained on strictly in-domain (TED) data and on the mined data, when it comes to the EN-PL translation system. It also seems natural that the best SMT scores were obtained on the TED data: it is not only the most similar to the Wikipedia articles and overlaps with them in many topics, but the classifier trained on the TED data set also recognized the most parallel sentences. In the results it can also be observed that the METEOR metric in some cases rises whereas the other metrics decrease. The most likely reason for this is the fact that the other metrics suffer, in comparison to METEOR, from the lack of a scoring mechanism for synonyms. The Wikipedia is very wide, not only when we consider its topics but also its vocabulary, which leads to the conclusion that the mined corpora are a good source for extending sparse text domains. It is also the reason why the test sets originating from wide domains outscore narrow-domain ones, and the most likely explanation why training on the larger mined data sometimes slightly decreases scores on test sets from very specific domains. Nonetheless, it must be noted that after manual analysis we concluded that in many cases the translations were good, but the automatic metrics became lower because of the usage of synonyms.
Nowadays, the bi-sentence extraction task is becoming more and more popular in unsupervised learning for numerous specific tasks. The method overcomes disparities between the two languages. It is a language-independent method that can easily be adjusted to a new environment, and it only requires parallel corpora for initial training. The experiments show that the method performs well. The obtained corpora increased MT quality in wide text domains. From a practical point of view, the method requires neither expensive training nor language-specific grammatical resources, while producing satisfying results.
Table 1: Corpora specification.

CORPORA | PL WORDS  | EN WORDS | PAIRS
BTEC    | 50,782    | 24,662   | 220,730
TED     | 218,426   | 104,117  | 151,288
EMEA    | 148,230   | 109,361  | 1,046,764
EUP     | 311,654   | 136,597  | 632,565
OPEN    | 1,236,088 | 749,300  | 33,570,553
Table 2: Data mined from the Wikipedia for each classifier.

Classifier | Value               | PL        | EN
TED        | Size in MB          | 41,0      | 41,2
TED        | No. of sentences    | 357,931   | 357,931
TED        | No. of words        | 5,677,504 | 6,372,017
TED        | No. of unique words | 812,370   | 741,463
BTEC       | Size in MB          | 3,2       | 3,2
BTEC       | No. of sentences    | 41,737    | 41,737
BTEC       | No. of words        | 439,550   | 473,084
BTEC       | No. of unique words | 139,454   | 127,820
EMEA       | Size in MB          | 0,15      | 0,14
EMEA       | No. of sentences    | 1,507     | 1,507
EMEA       | No. of words        | 18,301    | 21,616
EMEA       | No. of unique words | 7,162     | 5,352
EUP        | Size in MB          | 8,0       | 8,1
EUP        | No. of sentences    | 74,295    | 74,295
EUP        | No. of words        | 1,118,167 | 1,203,307
EUP        | No. of unique words | 257,338   | 242,899
OPEN       | Size in MB          | 5,8       | 5,7
OPEN       | No. of sentences    | 25,704    | 25,704
OPEN       | No. of words        | 779,420   | 854,106
OPEN       | No. of unique words | 219,965   | 198,599
Table 3: Polish to English and English to Polish MT Experiments.
https://www.ted.com/talks
https://github.com/machinalis/yalign
https://github.com/jhclark/multeval
Wu D., Fung P., "Inversion Transduction Grammar Constraints for Mining Parallel Sentences from Quasi-Comparable Corpora", Natural Language Processing - IJCNLP 2005, Lecture Notes in Computer Science, vol. 3651, 2005, pp. 257-268.
Pal S., Pakray P., Naskar S., "Automatic Building and Using Parallel Resources for SMT from Comparable Corpora", 2014.
Tyer F., Pienaar J., "Extracting bilingual word pairs from Wikipedia", 2008.
Clark J., Dyer C., Lavie A., Smith N., "Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Instability", Proceedings of the Association for Computational Linguistics, Portland, Oregon, USA, 2011.
Marasek K., "TED Polish-to-English translation system for the IWSLT 2012", Proceedings of the 9th International Workshop on Spoken Language Translation (IWSLT 2012), pp. 126-129, Hong Kong, 2012.
Smith J., Quirk C., Toutanova K., "Extracting Parallel Sentences from Comparable Corpora using Document Level Alignment", 2010.
Chu C., Nakazawa T., Kurohashi S., "Chinese-Japanese parallel sentence extraction from quasi-comparable corpora", Proceedings of ACL 2013, pp. 34-42, 2013.
Adafree S., de Rijke M., "Finding Similar Sentences across Multiple Languages in Wikipedia", 2006.
Skadiņa I., Aker A., "Collecting and Using Comparable Corpora for Statistical Machine Translation", Proceedings of LREC 2012, Istanbul, 2012.
Koehn P., Haddow B., "Towards Effective Use of Training Data in Statistical Machine Translation", WMT '12: Proceedings of the Seventh Workshop on Statistical Machine Translation, pp. 317-321, Stroudsburg, PA, USA, 2012.
Berrotarán G., Carrascosa R., Vine A., "Yalign documentation", http://yalign.readthedocs.org/en/latest/
Tiedemann J., "Parallel Data, Tools and Interfaces in OPUS", Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012), pp. 2214-2218.
Wołk K., Marasek K., "Real-Time Statistical Speech Translation", Advances in Intelligent Systems and Computing, vol. 275, pp. 107-114, Springer, ISSN 2194-5357, ISBN 978-3-319-05950-1, Madeira Island, Portugal, 2014.
Kilgarriff A., PVS A., Pomikalek J., "BootCatting Comparable Corpora", Proceedings of the 9th International Conference on Terminology and Artificial Intelligence, Paris, France, 2011.
Strotgen J., Gertz M., "Temporal Tagging on Different Domains: Challenges, Strategies, and Gold Standards", Proceedings of LREC 2012, Istanbul, 2012.
Cettolo M., Girardi C., Federico M., "WIT3: Web Inventory of Transcribed and Translated Talks", Proceedings of EAMT, pp. 261-268, Trento, Italy, 2012.
Zeng W., Church R. L., "Finding shortest paths on real road networks: the case for A*", International Journal of Geographical Information Science 23(4): 531-543, 2009.
Wołk K., Marasek K., "Alignment of the Polish-English Parallel Text for a Statistical Machine Translation", Computer Technology and Application 4, David Publishing, ISSN 1934-7332 (print), ISSN 1934-7340 (online), pp. 575-583, 2013.
Yang W., Lepage Y., "Inflating a Training Corpus for SMT by Using Unrelated Unaligned Monolingual Data", Advances in Natural Language Processing, Lecture Notes in Computer Science, vol. 8686, pp. 236-248, 2014.
Musso G., "Sequence Alignment (Needleman-Wunsch, Smith-Waterman)", http://www.cs.utoronto.ca/~brudno/bcb410/lec2notes.pdf
Joachims T., "Text Categorization with Support Vector Machines: Learning with Many Relevant Features", Lecture Notes in Computer Science, vol. 1398, pp. 137-142, 1998.
| [
"https://github.com/machinalis/yalign",
"https://github.com/jhclark/multeval"
] |
[
"Incorporating Discourse Aspects in English { Polish MT: Towards Robust Implementation",
"Incorporating Discourse Aspects in English { Polish MT: Towards Robust Implementation"
] | [
"Ma Lgorzata ",
"E Sty ",
"Stefan S Zemke ",
"\nDepartment of Computer and Information Science Link\nComputer Laboratory University of Cambridge New Museums\nSite Pembroke StreetCB2 3QGCambridgeEngland\n",
"\noping University\n58183Link oping Sweden\n"
] | [
"Department of Computer and Information Science Link\nComputer Laboratory University of Cambridge New Museums\nSite Pembroke StreetCB2 3QGCambridgeEngland",
"oping University\n58183Link oping Sweden"
] | [] | The main aim of translation is an accurate transfer of meaning so that the result is not only grammatically and lexically correct but also communicatively adequate. This paper stresses the need for discourse analysis in order to preserve the communicative meaning in English{Polish machine translation. Unlike English, which is a positional language with word order being grammatically determined, Polish displays a strong tendency to order constituents according to their degree of salience, so that the most informationally salient elements are placed towards the end of the clause regardless of their grammatical function. The Centering Theory developed for tracking down given information units in English and the Theory of Functional Sentence Perspective predicting informativeness of subsequent constituents provide theoretical background for this work. The notion of center is extended to accommodate not only for pronominalisation and exact reiteration but also for de niteness and other center pointing constructs. Center information is additionally graded and applicable to all primary constituents in a given utterance. This information is used to order the post-transfer constituents correctly, relying on statistical regularities and some syntactic clues. | null | [
"https://arxiv.org/pdf/cmp-lg/9510006v1.pdf"
] | 16,290,003 | cmp-lg/9510006 | bec893dcb8afe1809da3d54466838755681c1a53 |
Incorporating Discourse Aspects in English-Polish MT: Towards Robust Implementation
cmp-lg/9510006 15 Oct 1995
Małgorzata E. Styś
Stefan S. Zemke
Computer Laboratory, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3QG, England
Department of Computer and Information Science, Linköping University, 581 83 Linköping, Sweden
Incorporating Discourse Aspects in English-Polish MT: Towards Robust Implementation
cmp-lg/9510006, 15 Oct 1995. Keywords: centering, constituent order, FSP, machine translation, discourse analysis
The main aim of translation is an accurate transfer of meaning so that the result is not only grammatically and lexically correct but also communicatively adequate. This paper stresses the need for discourse analysis in order to preserve the communicative meaning in English-Polish machine translation. Unlike English, which is a positional language with word order being grammatically determined, Polish displays a strong tendency to order constituents according to their degree of salience, so that the most informationally salient elements are placed towards the end of the clause regardless of their grammatical function. The Centering Theory developed for tracking down given information units in English and the Theory of Functional Sentence Perspective predicting informativeness of subsequent constituents provide theoretical background for this work. The notion of center is extended to accommodate not only pronominalisation and exact reiteration but also definiteness and other center-pointing constructs. Center information is additionally graded and applicable to all primary constituents in a given utterance. This information is used to order the post-transfer constituents correctly, relying on statistical regularities and some syntactic clues.
Introduction
Machine translation tends to concentrate on examining and conveying the meaning and structure of individual sentences. However, such action is not always sufficient. This paper discusses how analysis of intersentential connections could be performed and then exploited in MT. Such an undertaking is thought to be necessary since the transfer of meaning has to be accurate not only on the lexical and grammatical level but also needs to carry across the communicative meaning of each utterance.
English and Polish exhibit certain idiosyncratic features which impose different ways of expressing the information status of constituents in succeeding clauses. Unlike English, which is a positional language with word order being grammatically determined, Polish displays a strong tendency to order constituents according to their degree of salience, so that the most informationally salient elements are placed towards the end of the clause regardless of their grammatical function. Such ordering of constituents yields solid knowledge about their degree of salience.
The paper is organised as follows. Section 2 includes a description of the center concept and an explanation of how it is carried across English clauses. A separate section is devoted to our extensions of the classic notion of center in view of machine translation. We then go on to describe the idiosyncratic properties of Polish and their implications for center transfer. Finally, practical rules for ordering Polish constituents are outlined.
The computational and theoretical background is supplied by the Centering Theory and the Theory of Functional Sentence Perspective. The former is used during English analysis while the latter provides theoretical framework for Polish generation.
Centering Model for English Analysis
Centering as presented by Grosz, Joshi, Weinstein (Grosz et al. 86) and extended by Brennan, Friedman, Pollard (Brennan et al. 87) is a useful discourse model based on a system of rules for tracking down given information units on utterance level. Center, expressed as a noun phrase, is a pragmatic construct and it is intentionally defined as the discourse entity that the utterance is about.
Original Centering Algorithm
The current presentation of centering follows that by Grosz et al. (Grosz et al. 86), (Grosz et al. 95). Identification of center is based on purely coreferential relations. Each utterance segment consists of utterances U1, ..., Um and each of them exhibits one center. Associated with each utterance is a forward-looking center list Cf(Un) of all nominal expressions within Un.
The backward-looking center Cb(Un), which is the center proper, is the highest-ranked element of Cf(Un-1) realized in Un. Pronominalisation and subjecthood are the main criteria underlying this ranking. The entities on the Cf list are ordered by grammatical function, which corresponds to the linear order of constituents in English. The first utterance in discourse has the subject as its center by default. Generally, however, resolvable pronouns are the preferred center candidates, since they are the most common devices signalling the relation of coreference. Comparison of centers can generally relate utterances in the way of continuing (Cb(Un) = Cb(Un-1)) or shifting (Cb(Un) ≠ Cb(Un-1)).

Further extensions to this bookkeeping include:
- composite computation of a center value depending on a number of clues,
- introduction of a limited referential distance which depends on constituent length,
- incorporation of synonyms in reiteration detection.

We choose the constituent with the highest center value as the discrete center of an utterance. If more than one constituent has been assigned the same value, we take the entity that is highest-ranked according to the ranking introduced by (Grosz et al. 86), (Grosz et al. 95), (Brennan et al. 87).
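As an informal illustration of this bookkeeping (not the authors' implementation), the sketch below ranks the forward-looking centers of the previous utterance and picks as backward-looking center the highest-ranked one that is realized in the current utterance; the entity names and the simplified ranking scheme are placeholders.

```python
# Minimal sketch of backward-looking center selection, assuming each utterance
# is represented as a list of (entity, grammatical_function) pairs.
RANK = {"subject": 0, "object": 1, "other": 2}  # simplified Cf ranking

def forward_centers(utterance):
    """Cf list: entities ordered by grammatical function (subject first)."""
    return [ent for ent, func in sorted(utterance, key=lambda p: RANK.get(p[1], 99))]

def backward_center(prev_utterance, curr_utterance):
    """Cb(Un): highest-ranked element of Cf(Un-1) that is realized in Un."""
    realized = {ent for ent, _ in curr_utterance}
    for ent in forward_centers(prev_utterance):
        if ent in realized:
            return ent
    return None  # no backward-looking center (e.g., a discourse-initial shift)

u1 = [("John", "subject"), ("apples", "object")]
# assuming pronoun resolution has already mapped "he" -> "John" in the next utterance
u2_resolved = [("John", "subject"), ("market", "other")]
print(backward_center(u1, u2_resolved))  # -> "John"
```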
Within the next few sections, we provide a description of those centering criteria that have been added to the original algorithm.
Definiteness
Definite noun phrases are often co-specifiers of current centers. The correlation between definiteness and an entity having been introduced in previous discourse in English is high but not total. (For example, proper names can be textually new yet definite.) We therefore include definiteness among factors contributing to center evaluation. Indefinite noun phrases are treated as new discourse entities.
Lexical Reiteration
Lexically reiterated items include repeated or synonymous noun phrases, often preceded by definite articles, possessives or demonstratives. We also propose to consider semantic equivalence based on the synonyms coded in the lexicon as valid instances of reiteration.
Referential Distance
For pronouns and reiterated nouns, we propose that the allowed maximal referential distance, measured in the number of clauses scanned back, correlate with the word length of the constituent involved (Siewierska 93b). This relates to the observation that short referring expressions have their resolvents closer than longer ones. Such a precaution, limiting the referential distance, minimises the danger of over-interpretation of common generic expressions such as it.
Although we haven't yet experimented with various functions relating the type of referent to its allowed referential distance, a simple linear dependence (with factor 1-2) seems to be reasonable. Thus, in the following example, we will assume the referential distance to be twice the length of the (resulting Polish) constituent.
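As a concrete reading of this heuristic, the toy function below derives the allowed look-back window from a constituent's word length with a configurable factor; it only illustrates the linear dependence stated above, and the example constituents are invented.

```python
# Toy reading of the length-based referential-distance heuristic described above.
def max_referential_distance(constituent_tokens, factor=2):
    """Allowed look-back, in clauses, grows linearly with constituent length."""
    return factor * len(constituent_tokens)

print(max_referential_distance(["it"]))                                # -> 2 clauses
print(max_referential_distance(["the", "scientists'", "colleagues"]))  # -> 6 clauses
```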
Center-pointing Constructions
Certain English constructions unambiguously point to the center thus making more detailed analysis unnecessary.
Although the subject is obligatory in an English sentence, occasionally a formal slot-filling item is substituted in its place, giving rise to a cleft construction. The information structure becomes explicit by virtue of the fact that it exhibits a structurally marked center. (E.g. It was John who came.) The center could also be fronted (e.g. Apples, Adam likes) or introduced using a sort of sentence equivalent (e.g. As for Adam, he doesn't like apples). Sidetracking from the main thread of discourse is a common device used by the speaker to direct the attention of the addressee. Expressions such as as for, concerning, with regard to are such prompts.
Center Gradation
Considering the priority scale of referential items, the mechanisms underlying centering in English could then be outlined as follows:
- preference of pronouns over full nouns,
- preference of definites over indefinites,
- preference of reiterated items over non-reiterated ones,
- preference of constituents involving more "givenness" indicators.

These, considered along with special center-pointing constructions, give rise to the following numerical guidelines (some of which agree with the idea of a givenness hierarchy, cf. (Gundel 93)):
The rules for Composite Centers allow us to calculate center value increase over the default value 0. Thus, for example, the center value for the scientists' colleagues will be arrived at by adding the contribution for the (+1) to the contributions for scientists and colleagues (each 0 or 1 depending on whether the item is reiterated) giving a value between 1 and 3 depending on the context. In Figure 2, we illustrate the application of rules included in Figure 1.
The assumption for all center rules is that the highest possible center value is derived.
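To make the gradation concrete, here is a small illustrative scorer, not the authors' implementation, that maps simplified features of a noun phrase to a center value along the lines of guidelines 1-5 above. The feature names are invented for the example, and the specific numeric values 3/2/1 are assumptions consistent with the 1-3 range mentioned in the composite-center discussion.

```python
# Illustrative center-value scorer based on the gradation above (simplified).
def center_value(np):
    """np: dict of boolean features for a noun phrase."""
    if np.get("cleft") or np.get("resolved_personal_pronoun") or np.get("definite_reiterated_possessive"):
        return 3   # "unquestionable" centers (guideline 1)
    if np.get("definite_reiteration") or np.get("other_resolved_pronoun") or np.get("demonstrative"):
        return 2   # guideline 2
    if np.get("indefinite_reiteration") or np.get("non_reiterated_definite"):
        return 1   # dubious centers (guideline 3)
    if np.get("indefinite_article") or np.get("other_determiner"):
        return -1  # new information units (guideline 4)
    return 0       # neutral value (guideline 5)

print(center_value({"resolved_personal_pronoun": True}))  # -> 3
print(center_value({"indefinite_article": True}))         # -> -1
```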
Local Discourse Mechanisms in Translation
In discourse analysis, we relate particular utterances to their linguistic and non-linguistic environment. Below, we shall describe the relationship between the grammatical sentence pattern (Subject Verb Object) and the communicative pattern (Theme Transition Rheme).
Functional Sentence Perspective
FSP is an approach used by the Prague School of linguists to analyse utterances of Slavic languages in terms of their information content (Firbas 92). In a coherent text, the given or known information, the theme, usually appears first, thus forming a co-referential link with the preceding text. The new information, the rheme, provides some information about the theme. It is the essential piece of information of the utterance. There are clear linear effects of FSP (see footnote 1). Utterance non-final positions usually have a given-information interpretation and the final section of the utterance represents the new. This phenomenon could be explained by word order arranged in such a way that first come words pointing to details already familiar from the preceding utterances/external context and only then come words describing new detail. Similarly, in the process of mental activities first comes the process of identification and then augmentation of received perception. It is then followed by details individually connected with the given idea (Szwedek 76).
Constituent Order in Polish Translation
The distribution of new/old information determines the order of constituents within clauses.
Since the grammatical function is determined by inflection in Polish, there is great scope for constituent order to express contextual distinctions and the order often seems free due to virtual absence of structural obstacles. Just as it is not valid to assign any order to a sequence of constituents, one cannot keep repeating the SVO sequence in all cases. If we were to translate all sentences of an English text into Polish following the canonical SVO pattern, we would get a grammatically correct but often communicatively inadequate and incoherent text. Thus in order to decide which ordering to use, we have to take into consideration the commonly occurring contextual functions and their implications for the probability and the frequency of occurrence of a given order. The degree of emphasis is also a factor and it is worth noting that the more frequently an order occurs the less emphatic it is (Siewierska 93b).
Restrictions on phrasal constituent order can be broadly placed under three categories: contextual, grammatical and stylistic. The grammatical restrictions are not as strict as in English and the stylistic constraints are omitted within the scope of this paper. The remaining sections concentrate on the former two categories.
Ordering of Polish Constituents
The Ordering Approach
As it has already been argued, center information is crucial for the communicatively correct positioning of Polish constituents in a flow of text. However, there are other factors influencing the order which can co-specify or even override it. This presents a delicate task of balancing a number of clues, selecting the most justified order(s) or, in the case of a discriminating approach, the ones which do not have any strong arguments against them.
Our choice of ordering criteria has been directly based on the findings of the Prague School discussed above, our own linguistic experience (both of us bilingual, native speakers of Polish) and on some statistical data provided by (Siewierska 93b), (Siewierska 93a), (Siewierska 87).
The intended approach to ordering could be characterised as follows:
- Permissive: generate more (imperfect) versions rather than none at all; if need be, restrict by further filters.
- Composite: generate all plausible orders before some of them will be discriminated. (This approach is side-tracked when a special construction is encountered.)
- Discrete: no gradings/probability measures are assigned to competing orders so as to discriminate between them. This could be an extension.
Ordering Criteria
The ordering of constituents in Polish utterances generally follows the communicative order from given to new. Below we present some rules which are obeyed by Polish clauses under normal conditions:
- End weight principle: the last primary constituent is the anti-center.
- Given information fronting: constituents belonging to the given information sequence are fronted.
- Short precede long principle: shorter constituents go first.
- Relative order principle: certain partial orders are only compatible with specific patterns of constituents.
Additionally, there is a strong tendency to omit subject pronouns. Such omission, however, exhibits different degrees of optionality.
What follows is a list of constructs used in subsequent tables to generate plausible orders of (translated) Polish constituents.
Center information: has the highest rank in the ordering procedure and is used in two aspects, center(Constituent) and center shift(Utterance), defined below together with the other constructs.
Features of the next utterance: e.g. center(S, Un+1) > 0, can be used together with the features of the current utterance in order to obtain more specific conditions.
In the following tables S denotes the (Polish) subject, V the verb, O the object, X an adjunct, Prim either S or O, "-" a (sequence of) any constituents, and ] an omitted constituent. The difference required for the "much greater" relation to hold must be at least 2.
Building on Orders of Constituents
The Preference Table presents some of the main PREFERENCEs for generating orders of Polish constituents depending on CONDITIONS. Each line of the table can be treated as an independent if-then rule co-specifying (certain aspects of) an order. CONDITIONS, being simple conjunctions (of regular expressions), are intended to allow straightforward transformation into a Prolog program. Different rules can be applied independently, thus possibly better determining a given order (see footnote 4). The JUSTIFICATION column provides some explanation of the validity of each rule; 'bare' indicates the percentage of bare constructions including three primary constituents only. Both the Preference Table and the Discrimination Table are mostly based on statistical data gathered by (Siewierska 87), (Siewierska 93b), (Siewierska 93a).
Discriminating Orders
It might be the case that as a result of applying the Preference Table, we obtain too many orders. The Discrimination Table provides some rationale for excluding those matching ORDERs for which one of their DISCRIMINATION conditions fails. If the building stage left us with no possible orders at all, we could allow any order and pick only those which successfully pass all their discrimination tests. It is purposeful that all orders apart from the canonical SVO have some discrimination conditions attached to them. The rarer the order tends to be the more strict the condition. Therefore, SVO is expected to be the prevailing order.
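The preference/discrimination mechanism lends itself to a simple rule-engine formulation. The sketch below is only a schematic illustration with made-up conditions (the actual tables in the paper are based on statistical data): matching preference rules propose candidate orders, and any order failing one of its discrimination tests is then filtered out.

```python
# Schematic preference/discrimination ordering (illustrative rules only).
def candidate_orders(features):
    """Preference stage: every matching rule proposes an order."""
    proposals = set()
    proposals.add("SVO")                      # canonical order is always proposed
    if features.get("object_is_center"):      # made-up condition
        proposals.add("OVS")
    if features.get("subject_is_pronoun"):    # made-up condition
        proposals.add("VSO")
    return proposals

def discriminate(order, features):
    """Discrimination stage: return True if the order survives all its tests."""
    if order == "VSO" and not features.get("subject_is_pronoun"):
        return False
    if order == "OVS" and not features.get("object_is_center"):
        return False
    return True  # SVO carries no discrimination conditions

features = {"object_is_center": True, "subject_is_pronoun": False}
surviving = sorted(o for o in candidate_orders(features) if discriminate(o, features))
print(surviving)  # e.g. ['OVS', 'SVO']
```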
Special Cases
There remain a number of cases which escape simple characterisation in terms of "preferred and not-discriminated". The Preprocessing Table offers some solutions under such circumstances. It is to be checked for its conditions before any of the previous tables are involved. If a condition holds, its result (e.g. 0-anaphora) should be noted and only then the other tables applied to co-specify features of the translation as described above. The Preprocessing Table can yield erroneous results when applied repeatedly to the same clause. Therefore, unlike the other tables, it should be used only once per utterance.
Example
In Figure 6 we continue the example from Figure 2. The orderings built by the cooperation of the Preprocessing and Preference tables and not refused by the Discrimination Table appear in the last column.
Conclusion
One of the aims of this research was to exploit the notion of center in Polish and put it forward in the context of machine translation. Centers are conceptualised and coded differently in Polish and English utterances. This fact has clear repercussions in the process of translation. Through exploring the pragmatic, semantic and syntactic conditions underlying the organisation of utterances in both languages, we have been able to devise a set of rules for communicatively motivated ordering of Polish constituents.
Among the main factors determining this positioning are pronominalisation, lexical reiteration, definiteness, grammatical function and special centered constructions in the source language. Their degree of topicality is coded by the derived center values. Those, along with additional factors such as the length of the originating Polish constituents and the presence of adjuncts, are used to determine justifiable constituent order in the resulting Polish clauses.
In future research, we wish to extend the scope of translated constructions to di-transitives and passives. We shall also give due attention to relative clauses. Centering in English can be further refined by allowing verbal and adjectival centers as well as by determining anti-center constructs.
Preference Table (fragment, rules iv-x): CONDITIONS / PREFERENCE / JUSTIFICATION
iv     -V-S-O- & -X-    XV-S-O-    Statistical (66%; bare 11%)
v      -O-S- & X-       XV-O-S-    Statistical
vi     -V-O-S- & -X-    XV-O-S-    Statistical (53%; bare 28%)
vii    -S-V-O- & -X-    XS-V-O-    Statistical (32%; bare 18%)
viii   -S-V-O- & -X-    S-V-OX     Statistical (30%; bare 18%)
ix     -O-V-S- & -X-    O-V-SX     Statistical (29%; bare 28%)
x      -O-V-S- & -X-    O-VXS
We have thus tackled the question of information distribution in terms of communicative functions and examined its influence on the syntactic structure of the source and target utterances. How and why intersentential relations are to be transmitted across the two languages remains an intricate question, but we believe we have partially contributed to the solution of this problem.
Figure 2: Center values for example clauses
center(Constituent) returns the center value of the Constituent's NP, or 0 if undefined; center shift(Utterance) holds if Utterance relates to the preceding one in the way allowed by the shift transition, cf. (Grosz et al. 86); discrete center(Constituent) holds if Constituent is the chosen center of the utterance. Grammatical function of a constituent: e.g. being a subject (S) or object (O). pron(S) & pron(O): if both subject and object are pronominal; or Sub(Un) = Sub(Un-1): if the subject stays the same. Certain expressions, e.g. a focus-binding expression such as 'only', can trigger specific translation patterns.
Figure 5: Preprocessing
For definition of more subtle relations look at (Brennan et al. 87).

2.2 Extension of the Algorithm

Various refinements have been added to the original centering model since its introduction (Brennan et al. 87), (Kameyama 86), (Mitkov 94), (Walker et al. 94). Center identification is mostly based on syntactic phenomena. It concentrates on the analysis and representation of noun phrases, since the function of nominals in the information structure is considered to be crucial.

Below we present some of our extensions to centering:
- Additional criteria for center evaluation
  - Special center-pointing constructions
  - Demonstrative pronouns
  - Possessive and demonstrative modifiers
  - Extra credit for definite articles
  - Indefinites decreasing the center value of a constituent
- Gradation of center values
- Center values given to all NPs (not just one)
Figure 6: Example continued: deriving constituent orders

Utterance  PREFERENCE CRITERIA              PARTIAL ORDERINGS  DISCRIMINATION (FAILING)  RESULTING ORDER(S)
1          Pref.xii                         SVO                                          SVO
           Pref.xiii                        VSO                (Discr.iii)
2          No rules apply, order unchanged                                               SVX
3          Pref.iiib                        OVS                Discr.x                   OVS
           (Pref.xii)                       VOS                (Discr.v)
                                            OSV                (Discr.viii)
4          Pre.iii: S=]                     V S]X
           Pref.xi                          -VS-
5          Pref.iiib                        SVO                                          SVO
           (Pref.xii)                       VSO                (Discr.i)
The information structure also changes depending on the accentuation pattern, but we shall leave the intonation aspects aside in this presentation.
To a great extent, this measure depends on the translation of constituents. It could be simplified by measuring the length of the original English, instead of Polish, units. We make use of that simplified measure in the example.
It is interesting to note, however, that for the otherwise rare order OSV, the opposite applies.
Orders derived by co-operation of several rules could be preferred in some way.
References

(Brennan et al. 87) S.E. Brennan, M.W. Friedman, and C.J. Pollard. A centering approach to pronouns. In Proceedings of ACL, 1987.
(Firbas 92) J. Firbas. Functional sentence perspective in written and spoken communication. Cambridge: Cambridge University Press, 1992.
(Grosz et al. 86) B.J. Grosz, A.K. Joshi, and S. Weinstein. Towards a computational theory of discourse interpretation. Preliminary draft, 1986.
(Grosz et al. 95) B.J. Grosz, A.K. Joshi, and S. Weinstein. Centering: A framework for modelling the local coherence of discourse. In Proceedings of ACL, 1995.
(Gundel 93) J.K. Gundel. Centering and the givenness hierarchy: A proposed synthesis. In Workshop on Centering Theory in Naturally Occurring Discourses, University of Pennsylvania, May 1993.
(Kameyama 86) M. Kameyama. A property sharing constraint in centering. In Proceedings of ACL, 1986.
(Mitkov 94) R. Mitkov. A new approach for tracking center. In Proceedings of the International Conference: New Methods in Language Processing, September 1994.
(Siewierska 87) A. Siewierska. Postverbal subject pronouns in Polish in the light of topic continuity and the topic/focus distinction. In J. Nuyts and G. de Schutter, editors, Getting One's Words into Line. Dordrecht, 1987.
(Siewierska 93a) A. Siewierska. Subject and object order in written Polish: some statistical data. Folia Linguistica XXVII, 1993.
(Siewierska 93b) A. Siewierska. Syntactic weight vs information structure and word order variation in Polish. J. Linguistics, 29, 1993.
(Szwedek 76) A.J. Szwedek. Word order, sentence stress in English and Polish. Edmonton, 1976.
(Walker et al. 94) M. Walker, M. Ida, and S. Cote. Japanese discourse and the process of centering. Computational Linguistics, 20(2), 1994.
| [] |
[
"CAMS: An Annotated Corpus for Causal Analysis of Mental Health Issues in Social Media Posts",
"CAMS: An Annotated Corpus for Causal Analysis of Mental Health Issues in Social Media Posts",
"CAMS: An Annotated Corpus for Causal Analysis of Mental Health Issues in Social Media Posts",
"CAMS: An Annotated Corpus for Causal Analysis of Mental Health Issues in Social Media Posts"
] | [
"Muskan Garg muskangarg@ufl.edu \nThapar Institute of Engineering & Technology\nIndia\n\nUniversity of Florida\nUSA\n",
"Chandni Saxena chandnisaxena@cuhk.edu.hk \nThe Chinese University of Hong Kong\nHong Kong\n\nSAR\n\n",
"Veena Krishnan vkrishnan@ddn.upes.in \nUniversity of Petroleum And Energy Studies\nIndia\n",
"Ruchi Joshi rjoshi@jpr.amity.edu \nAmity University Rajasthan\nIndia\n",
"Sriparna Saha sriparna@iitp.ac.in \nIndian Institute of Technology\nPatna\n",
"Vijay Mago vmago@lakeheadu.ca \nLakehead University\nCanada\n",
"Bonnie J Dorr bonniejdorr@ufl.edu \nUniversity of Florida\nUSA\n",
"Muskan Garg muskangarg@ufl.edu \nThapar Institute of Engineering & Technology\nIndia\n\nUniversity of Florida\nUSA\n",
"Chandni Saxena chandnisaxena@cuhk.edu.hk \nThe Chinese University of Hong Kong\nHong Kong\n\nSAR\n\n",
"Veena Krishnan vkrishnan@ddn.upes.in \nUniversity of Petroleum And Energy Studies\nIndia\n",
"Ruchi Joshi rjoshi@jpr.amity.edu \nAmity University Rajasthan\nIndia\n",
"Sriparna Saha sriparna@iitp.ac.in \nIndian Institute of Technology\nPatna\n",
"Vijay Mago vmago@lakeheadu.ca \nLakehead University\nCanada\n",
"Bonnie J Dorr bonniejdorr@ufl.edu \nUniversity of Florida\nUSA\n"
] | [
"Thapar Institute of Engineering & Technology\nIndia",
"University of Florida\nUSA",
"The Chinese University of Hong Kong\nHong Kong",
"SAR\n",
"University of Petroleum And Energy Studies\nIndia",
"Amity University Rajasthan\nIndia",
"Indian Institute of Technology\nPatna",
"Lakehead University\nCanada",
"University of Florida\nUSA",
"Thapar Institute of Engineering & Technology\nIndia",
"University of Florida\nUSA",
"The Chinese University of Hong Kong\nHong Kong",
"SAR\n",
"University of Petroleum And Energy Studies\nIndia",
"Amity University Rajasthan\nIndia",
"Indian Institute of Technology\nPatna",
"Lakehead University\nCanada",
"University of Florida\nUSA"
] | [] | Research community has witnessed substantial growth in the detection of mental health issues and their associated reasons from analysis of social media. We introduce a new dataset for Causal Analysis of Mental health issues in Social media posts (CAMS). Our contributions for causal analysis are two-fold: causal interpretation and causal categorization. We introduce an annotation schema for this task of causal analysis. We demonstrate the efficacy of our schema on two different datasets: (i) crawling and annotating 3155 Reddit posts and (ii) re-annotating the publicly available SDCNL dataset of 1896 instances for interpretable causal analysis. We further combine these into the CAMS dataset and make this resource publicly available along with associated source code: https://github.com/drmuskangarg/CAMS. We present experimental results of models learned from CAMS dataset and demonstrate that a classic Logistic Regression model outperforms the next best (CNN-LSTM) model by 4.9% accuracy. | 10.48550/arxiv.2207.04674 | [
"https://export.arxiv.org/pdf/2207.04674v1.pdf"
] | 250,150,926 | 2207.04674 | 35e62229d97f2d814c02c8ebe4919def8f3a8757 |
CAMS: An Annotated Corpus for Causal Analysis of Mental Health Issues in Social Media Posts
Muskan Garg muskangarg@ufl.edu
Thapar Institute of Engineering & Technology
India
University of Florida
USA
Chandni Saxena chandnisaxena@cuhk.edu.hk
The Chinese University of Hong Kong
Hong Kong
SAR
Veena Krishnan vkrishnan@ddn.upes.in
University of Petroleum And Energy Studies
India
Ruchi Joshi rjoshi@jpr.amity.edu
Amity University Rajasthan
India
Sriparna Saha sriparna@iitp.ac.in
Indian Institute of Technology
Patna
Vijay Mago vmago@lakeheadu.ca
Lakehead University
Canada
Bonnie J Dorr bonniejdorr@ufl.edu
University of Florida
USA
CAMS: An Annotated Corpus for Causal Analysis of Mental Health Issues in Social Media Posts
clinical depression, clinical psychology, intent classification, suicidal tendency
Research community has witnessed substantial growth in the detection of mental health issues and their associated reasons from analysis of social media. We introduce a new dataset for Causal Analysis of Mental health issues in Social media posts (CAMS). Our contributions for causal analysis are two-fold: causal interpretation and causal categorization. We introduce an annotation schema for this task of causal analysis. We demonstrate the efficacy of our schema on two different datasets: (i) crawling and annotating 3155 Reddit posts and (ii) re-annotating the publicly available SDCNL dataset of 1896 instances for interpretable causal analysis. We further combine these into the CAMS dataset and make this resource publicly available along with associated source code: https://github.com/drmuskangarg/CAMS. We present experimental results of models learned from CAMS dataset and demonstrate that a classic Logistic Regression model outperforms the next best (CNN-LSTM) model by 4.9% accuracy.
Introduction
With substantial growth in the digitization of psychological phenomena, automated Natural Language Processing (NLP) has been applied by academic researchers and mental health practitioners to detect, classify or predict mental illness on social media. However, there is a critical need for identifying the underlying causes of mental illness in the face of dire outcomes. For example, a person commits suicide every 11.1 minutes in the US¹, and 23% of deaths in the world are associated with mental disorders according to the World Health Organization. The pandemic lockdown has heightened the mental health crisis in the UK (Pierce et al., 2020) and the US (McGinty et al., 2020).
In this context, people with mental disorders who decide to visit mental health practitioners for social well-being may face difficulty due to social stigma or unavailability of mental health practitioners, leaving those most in need to be neglected by the community. As a result, sufferers of mental health conditions are unable to take necessary steps for their treatment. Unfortunately, 80% of those with mental health conditions do not undergo clinical treatment and about 60% of those who take their own lives previously denied having any suicidal thoughts (Sawhney et al., 2021). Accordingly, social media platforms (e.g., Reddit, Twitter) are important resources for investigating the mental health of users based on their writings.
Motivation
The research community has witnessed tremendous growth in the study of mental health classification on social media since 2013. However, there is minimal automation for identifying potential causes that underlie mental illness. Online users suffering from depression may express their thoughts and grievances on social media unintentionally, for instance, the post (P).

P: I cannot deal with this breakup anymore and want to finish my life

Figure 1: The intent-cause analysis of mental health on social media
The reason behind depression in P is clearly interpreted from the word breakup, which serves as an indicator of a cause related to the notion of relationship. Through the application of automatic causal analysis, underlying reasons of this type may be extracted and potentially leveraged to address mental health problems.
Social, financial and emotional disturbances have a huge impact on the mental health of online users. Here, we consider three levels of mental disorder analysis, from automation to latent cause, as shown in Figure 1. We identify the intent (level 0 task) of a user by mental illness prediction and classification of social media posts. We further automate the process of identifying and categorizing the direct cause (level 1) that a user may mention in the post. In the future, causal analysis may discern crucial protective factors.

Table 1: Summary of related datasets (Avail. = availability)

Dataset | Details | Task | Avail.
CLPsych (Coppersmith et al., 2015) | Three types of annotated information: Depression-v-Control [DvC], PTSD-v-Control [PvC], and Depression-v-PTSD [DvP] | Suicide risk detection | S
MDDL (Shen et al., 2017) | 300 million users and 10 billion tweets in D1: Depression, D2: Non-depression, D3: Depression candidate | Depression detection | A
RSDD (Yates et al., 2017) | Reddit dataset of 9,210 users in the depression group and 107,274 users in the control group | Depression detection | ASA
SMHD (Cohan et al., 2018) | Reddit dataset for multi-task mental health illness | Mental health classification | ASA
eRISK (Losada et al., 2018) | Early risk detection by the CLEF lab for the problems of detecting depression, anorexia and self-harm | Depression detection | A
Pirina18 (Pirina and Çöltekin, 2018) | Acquired filtered data from Reddit | Depression detection | A
Ji18 (Ji et al., 2018) | Reddit: 5,326 suicide risk samples out of 20k; Twitter: 594 tweets out of 10k | Suicide risk detection | AR
Aladag18 (Aladag et al., 2018) | 10,785 posts were randomly selected and 785 were manually annotated as suicidal v/s non-suicidal | |
(Demszky et al., 2020) | Manually annotated 58k Reddit comments for 27 emotion categories | Emotion detection | A
UMD-RD (Shing et al., 2020) | 11,129 users who posted on r/SuicideWatch and 11,129 users who did not | Suicide risk detection | ASA
SDCNL (Haque et al., 2021) | Reddit dataset of 1,895 posts of depression and suicide | Suicide v/s depression classification | A
CAMS (Ours) | Interpretable causal analysis of mental illness in social media (Reddit) posts | Causal analysis | A
Challenges and Contributions
Mental health illness detection and analysis on social media presents many linguistic, technical and psychological challenges. Among many under-explored dimensions, some substantial research gaps are addressed as follows:
1. Social media posts are first-hand user-generated data containing informal and noisy text. The nature of the text in a post may vary for different platforms.
2. Dataset availability may be limited due to the sensitive nature of personal information.
3. There are many existing level 0 studies for mental health detection but no substantial study for Level 1, e.g., in-depth causal analyses of disorders.
To address these challenges, we introduce the task of causal analysis. We first introduce an annotation scheme for causal analysis. The dataset annotations are carried out in two ways: (i) crawling and annotation of the Reddit dataset (ii) re-annotation of the existing SDCNL dataset (Haque et al., 2021) for the proposed task of Causal Analysis of Mental health on Social media (CAMS). There are no existing studies for this task as observed from Table 1. To the best of our knowledge, our work is the first to address causal analysis and to provide a publicly available dataset for this purpose. Our major contributions are:
1. Definition of Interpretable Causal Analysis and construction of an annotation schema for this new task.
2. Annotated web-crawled Reddit dataset of 3362 instances using our annotation schema.
3. Re-annotation of the existing SDCNL dataset as a robustness test for our annotation schema.
4. Combination of the datasets above and introduction of our new, publicly available CAMS dataset.
5. Demonstration of the performance of machine learning and deep learning models using CAMS.
Below we discuss relevant background (Section 2) and introduce the annotation scheme for causal analysis (Section 3). Section 4 presents our new CAMS resource, annotation, and validation. Annotations are verified by experts (clinical psychologist and rehabilitation counselor) and validated using statistical testing of Fleiss' Kappa agreement (Falotico and Quatto, 2015). We further use existing multi-class classifiers for interpretable causal analysis in Section 5. Section 6 provides concluding remarks and future research directions.
Background
Reddit has become one of the most widely used social media platforms. Haque et al. (2021) use two subreddits r/depression and r/suicidewatch to scrape the SDCNL data and to validate a label correction methodology through manual annotation of this dataset for depression versus suicide. They also address ethical issues impacting dataset availability and make their dataset publicly available. In this section, we discuss the evolution of mental health studies and the historical perspective of causal analysis.
Evolution of Mental Health Studies
Many machine learning researchers have identified mental health disorders for social media posts (De Choudhury et al., 2013). Sawhney et al. (2021) examines the evolution of suicidal tendencies from historical posts of users (longitudinal studies). Figure 2 highlights recent developments in analysis of mental health disorders from social media. New NLP questions have emerged from investigations into predicting depression (De Choudhury et al., 2013) and suicidal tendencies (Masuda et al., 2013). Researchers consider users' profiles (Conway and O'Connor, 2016) to introduce the CLPsych shared task dataset (Coppersmith et al., 2015) for solving the problem of Mental Illness Detection and Analysis on Social media (MIDAS). MIDAS has further benefited from exploiting social network features (?), attention mechanisms (Nam et al., 2017), handling imbalanced dataset (Cong et al., 2018), and explainability (Cao et al., 2019).
Additional research directions have emerged from the use of knowledge graphs (Cao et al., 2020), feature optimization techniques (Shah et al., 2020), longitudinal studies (Sawhney et al., 2021), and handling noisy labels (Haque et al., 2021). The 3-step theory (3ST) (Klonsky et al., 2021) of suicide supports the argument of gradual development of suicidal tendencies (over time) associated with a range of potential causes.
Historical Perspective: Causal Analysis on Social Media
Our work is relevant to causal analysis of human behavior on social media. Recent approaches are developed to study 'why online users post fake news' (Cheng et al., 2021), beliefs and stances behind online influence (Mather et al., 2022), and causal explanation analysis on social media (Son et al., 2018). The work of Son et al. (2018) is the closest to ours in that it detects a connection between two discourse arguments to extract a causal relation based on annotated Facebook data. However, the dataset is limited and is not publicly available; thus, no recent developments are observed. To address this issue, we annotate the Reddit dataset (the nature of Reddit data is different from that of Facebook) and further categorize causal explanations.
Annotation Scheme
Inferences from Literature
Potential reasons behind mental illness may be detected in posts that refer to insomnia, weight gain, or other indicators of worthlessness or excessive or inappropriate guilt. Underlying reasons may include: bias or abuse, physical/emotional illness leading to, or induced by, medication use (Smith, 2015; Tran et al., 2019), relationship dysfunction, e.g., marital issues (Beach and Jones, 2002), and alienation (Edition and others, 2013). This list is not exhaustive, but it is a starting point for our study, giving rise to five categories of reasons (plus 'no reason') for our automatic causal analysis: no reason, bias or abuse, jobs and careers, medication, relationship, and alienation.

Annotation Task

Table 2 presents examples of data annotation involving the labeling of direct causes of mental health disorders in social media posts. There are two types of annotations: cause category and Inference. The Inference column contains textual data which represents the actual reason behind mental disorders. This inferred reason is further classified as one of the six different causal categories.
Problem Formulation
The architecture for our automatic causal analysis is shown in Figure 3. Social media text is provided to prediction/classification algorithms that filter out non-mental-disorder posts. The remaining mental disorder posts are then analyzed to detect reasons behind users' depression or suicidal tendencies. Finally, the reasons are classified into five causal categories and one 'no reason' category. More formally, we introduce the problem of Causal Analysis of Mental health on Social media (CAMS) as a multi-class classification problem. We extract a set of social media posts p = {p1, p2, p3, ..., pn} for n posts. We interpret the cause for every i-th post pi as C_pi and classify it into one of the predetermined categories y = {y0, y1, y2, y3, y4, y5}, where y0: 'no reason', y1: 'bias or abuse', y2: 'jobs and careers', y3: 'medication', y4: 'relationship', and y5: 'alienation', as y_pi.

Table 2 (fragment): examples of annotated posts (Cause | Post | Inference)
Alienation | "Nothing is worth the effort" |
No reason | "Does anyone feel like the only person that could understand your depression would be someone else that was depressed? But also feel like if they were to date someone who was depressed they couldn't handle it because it might suck you into a place that you don't want to be in again." | -
Bias or abuse | "I was with a group of friends last night, and another friend started talking to another friend about how many girls secretly liked him and stuff. It was crazy, because no one has ever talked to me about things like that. Then, on the way to the pub, a group of girls basically called me unattractive. Funny how girls are never shy about calling me ugly, but they're apparently too shy to 'approach me'." | girls call me unattractive
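To make the label space of this formulation concrete, the snippet below shows a minimal interface for causal categorization. It is an illustration, not the authors' code: the keyword rule is a toy stand-in for a trained classifier, and only the six category names come from the formulation above.

```python
# CAMS causal categories as a label map (per the formulation above).
CAUSE_LABELS = {
    0: "No reason",
    1: "Bias or abuse",
    2: "Jobs and careers",
    3: "Medication",
    4: "Relationship",
    5: "Alienation",
}

def categorize(post, classifier):
    """Causal categorization: map a post p_i to a label y_{p_i} in {0, ..., 5}.
    `classifier` is any callable returning an integer label (placeholder here)."""
    label_id = classifier(post)
    return label_id, CAUSE_LABELS[label_id]

# Toy keyword-based stand-in for a trained model:
def toy_classifier(post):
    return 4 if "breakup" in post.lower() else 0

print(categorize("I cannot deal with this breakup anymore and want to finish my life", toy_classifier))
# -> (4, 'Relationship')
```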
Guidelines for Annotation
Our professional guidelines support annotation of the post pi with causal inference C_pi and class y_pi. The guidelines were developed through collaborative efforts with a clinical psychologist and a rehabilitation counselor. Student annotators label posts with their causal inference and causal class; the latter is annotated according to category-specific guidelines. The student annotators were trained by experts (a clinical psychologist and a rehabilitation counselor) to pick those words and/or phrases through which they identified the y_pi of the post pi, and to rank them. The student annotators followed these guidelines thoroughly.
Annotation Perplexity
The judgement of reasons behind online users' intentions is a complex task for human annotators, generally due to mentions of multiple reasons or the presence of ambiguity in human interpretations. Causal analysis can be viewed as a multivariate problem, resulting in multiple labels. The annotation scheme is not sophisticated enough to capture all the aspects of this phenomenon. We thus propose perplexity guidelines to simplify the task and facilitate future annotations. Our mental health therapists and social NLP practitioners have constructed perplexity guidelines to handle the trade-off between task complexity and simplicity of the annotation scheme. The perplexity guidelines are:
1. Multiple reasons in the post: There are some posts with multiple reasons for conveyed feelings. To resolve this, annotators must find a root cause among the direct causes mentioned by the user.
Example 1: I was of 11 years since when i realized and facing constant ignorance of my parents. 8 yrs later i lost my first girlfriend and alcoholic since then. My beer belly and obesity has made people biased towards me. I have lost everything and want to end up now.
In Example 1, the root cause of the mental disorder is negligence of parents. Thus, this post is assigned the Relationship category. That is, we handle multiple causes by prioritizing the root cause or the most emphasized reason by the user. This perplexity guideline reduces annotation ambiguity and helps develop better models for automation of this task in the near future.
2. Ambiguity in human interpretations:
The subjective nature of causal analysis makes this task even more complicated for human annotators. The six different causes are not atomic in nature. The human interpretation of the same post and the same inference may vary, even among experts.
Example 2: I wish I could stay alone somewhere and cry my self to sleep. I wish i won't wake up.
Example 2 contains some important words like alone and cry which convey the category as Alienation. However, two out of three annotators considered this to be the No reason category. As this is the subjective decision of every human annotator, we leave it at their discretion. However, the final category assigned by the human expert (following human annotation) is based on "majority rules," in this case, 'no reason.'
3. Subject of intent in the post: Many posts refer to the depression of loved ones and other acquaintances. Given that the goal of this task is to identify the cause behind mental depression of online users, experts agree that such posts are candidates for causal analysis.
Example 3: I love to do prepare meals for my cousin because I think he is suffering from depression due to his car accident last month.
In Example 3, the user is talking about their cousin who is purportedly suffering from depression. This text is passed through classification to detect depression; however, the third person usage precludes detection of a reason behind this condition. Although the user presents their own perspective on the reason for their cousin's condition (car accident), the input must include self-reported evidence for the reason. Thus, this example is annotated as No reason.
The professional training and guidelines are supported by perplexity guidelines. We have further deployed student annotators to label the dataset after they annotate the first 25 posts under the supervision of experts.
CAMS Dataset
We introduce a new language resource for CAMS and elucidate the process of data collection (4.1) and data annotation (4.2). We further discuss the challenges and future research directions (4.3). We make our dataset and the source code publicly available for future use.
Overview of Data Collection
We demonstrate the efficacy of our annotation scheme on two sources: (i) Reddit posts crawled and annotated for this work, and (ii) the publicly available SDCNL dataset, re-annotated for interpretable causal analysis.
Dataset Annotations
After verification of the dataset by experts, three duly trained student annotators manually annotate the data in the format: <text, cause, inference> as shown in Table 2. In this section, we discuss the annotation process, verification by experts, and validation using statistical tests. Annotation is carried out manually by annotators who are proficient in the language. They work independently for each post and follow the given guidelines. Each annotator takes one hour to annotate about 15 − 25 Reddit posts and 180 and 90 hours to annotate the crawled and existing dataset, respectively. The annotations are obtained as three separate files.
Several challenges have emerged during the annotation process, e.g., a countable (< 10) set of non-English posts. For such cases, augmented guidelines instruct the annotator to mark the post as No reason. We recommend the removal of non-English posts as the CAMS dataset is proposed for English only. The annotated files are verified by a clinical psychologist and a rehabilitation counselor. This verification is performed over the annotations given by our trained annotators without bringing this to their knowledge and experts have given the final annotations.
We further validate the three annotated files using a Fleiss' kappa inter-observer agreement study. The agreement for the crawled dataset is found to be 64.23%. We also measure the inter-annotator agreement for the existing dataset: 73.42% and 60.23% for the testing and training data of SDCNL, respectively. For the combined CAMS dataset, the agreement among the trained annotators is 61.28%. The resulting values are interpreted as per Table 3. Despite the increased subjectivity of the task, the student annotators substantially agree in their judgements.
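For reference, Fleiss' kappa over the three annotators' labels can be computed as in the sketch below. This is a generic implementation of the statistic, not the authors' evaluation script, and the toy label matrix is invented for illustration.

```python
# Generic Fleiss' kappa for N items rated by n annotators into k categories.
def fleiss_kappa(label_matrix, categories):
    """label_matrix: list of per-item label lists, one label per annotator."""
    n_items = len(label_matrix)
    n_raters = len(label_matrix[0])
    # count matrix: items x categories
    counts = [[row.count(c) for c in categories] for row in label_matrix]
    p_j = [sum(col) / (n_items * n_raters) for col in zip(*counts)]  # category proportions
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1)) for row in counts]
    p_bar = sum(p_i) / n_items        # mean observed agreement
    p_e = sum(p * p for p in p_j)     # expected agreement by chance
    return (p_bar - p_e) / (1 - p_e)

# Toy example: 4 posts, 3 annotators, labels from the six CAMS categories (0-5).
labels = [
    [5, 5, 5],
    [0, 0, 4],
    [4, 4, 4],
    [1, 0, 1],
]
print(round(fleiss_kappa(labels, categories=list(range(6))), 3))
```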
Discussion
Existing work on causal analysis is associated with finding discourse relations among words and identifying the segments that represent the reason behind the intended information. We extend this work to find the category of the cause using these interpreted segments. Since we are extending the causal interpretations to automatic categorization, our work is not directly comparable to any of the existing works. However, we glean insights into the characteristics of the CAMS dataset through further analysis. This section examines the word length of posts in the dataset (4.3.1) and varying number of instances in each class (4.3.2). Additionally, we discuss the social nature of the dataset (4.3.3).
Length of the Posts
The length of posts varies from a few characters to thousands of words. One of the major challenges for automation is the construction of a multi-class classifier that is suitable for posts of varying word lengths. One possible solution to this challenge is to extract the inference from the post and classify the post using the inference text; we plan to explore this in the near future. Table 4 shows that, although the average number of words is consistent across all the classes, there is a huge variation in word counts across posts. This shows that the data is unstructured, and handling this kind of text requires handcrafted features or other semantic measures for causal analysis.

Imbalanced Dataset

Table 5 shows that the number of posts for every cause varies widely, perhaps signifying that the causes of mental health disorders are not well-distributed in society.

Cause            CC    Train S  Test S  CAMS
No reason        292   332      70      694
Bias or abuse    122   194      35      351
Jobs/careers     399   181      48      628
Medication       410   170      43      623
Relationship     956   297      91      1344
Alienation       976   340      92      1408
Total            3155  1517     379     5051

Table 5: Sample distribution of the CAMS dataset for different causes, where CC is the Crawled Corpus, Train S is the training data of the SDCNL dataset, Test S is the test data of the SDCNL dataset, and the CAMS column contains the total number of samples in the dataset for each cause.
In the crawled corpus, the highest numbers of samples are observed for the Relationship and Alienation causal categories, which is perhaps an indicator that our society is less equipped to deal with issues pertaining to 'near & dear ones' and 'loneliness / worthlessness', respectively. The number of posts with 'no reason' is smaller in the crawled corpus due to the cleaning of the dataset. Interestingly, there are fewer posts assigned 'Bias or abuse': less than half of each of the two additional categories, 'Jobs and careers' and 'Medication'.
Social nature of the dataset
Our expert clinical psychologists have explored the social nature of the dataset in light of the analysis above. During re-annotation of the existing dataset, the prevalence of some causes, e.g., Alienation and Relationship, points to the importance of the ability to take a societal pulse on a regular basis, especially in these unprecedented times of pandemic-induced distancing and shut-downs. Other problems, e.g., Jobs and careers and Bias or abuse, depend upon good governance. The problem of medication depends upon technological/medical advances and accessible healthcare, or lack thereof. Additionally, online users often feel depressed but do not mention any specific reason behind it, indicating that inferring relevant causes is a challenge if one uses NLP alone.
Ethical Considerations
We emphasize that the sensitive nature of our work necessitates that we use the publicly available Reddit dataset (Haque et al., 2021) in a purely observational manner (Broer, 2020). We claim that the given dataset does not disclose the user's personal information or identity. We further acknowledge the trade-off between privacy of data and effectiveness of our work (Eskisabel-Azpiazu et al., 2017). We ensure that our CAMS corpus is shared selectively and is subject to IRB approval to avoid any misuse. Our dataset is susceptible to the biases and prejudices of annotators who were trained by experts. There will be no ethical issues or legal impact with this causal analysis of mental illness.
Corpus Utility for Machine Learning
We train traditional multi-class classifiers on the CAMS training dataset and evaluate them on the CAMS test data. We also choose the following neural architectures: the Long Short-Term Memory (LSTM) model, Convolutional Neural Network (CNN), Gated Recurrent Unit (GRU), Bidirectional GRU/LSTM (Bi-GRU/Bi-LSTM) and other hybrid models. In this section, we discuss the experimental setup (5.1) and analyze the results (5.2).
Experimental Setup
We use the re-annotated existing SDCNL dataset for our experimental results and analysis. We clean the dataset, preprocess the posts and then use GloVe 4 word embeddings with 100 dimensions, trained on Wikipedia, for each token. We further set up the neural architectures with default settings (lr = 0.001, β1 = 0.9, β2 = 0.999, ε = 1e-08) and a batch size of 256. The categorical cross-entropy loss function and the ADAM optimizer are used to perform back-propagation learning over 20 epochs. The numbers of samples for three classes of the existing SDCNL dataset ('Bias or abuse', 'Jobs and careers', and 'Medication') are very small in comparison to the other three classes. We add 120 + 120 + 120 samples from the crawled corpus to help balance the number of instances across the classes. As a result, the number of training samples increases from 1517 to 1877. We use these training data to build and validate the multi-class classifier. We test this classifier on the 379-sample test data and analyze the results. The evaluation metrics used for this task are F1-measure and Accuracy.
4 https://nlp.stanford.edu/projects/glove/
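A minimal sketch of this setup is given below, assuming the publicly released glove.6B.100d.txt vectors and placeholder variables (train_texts, a maximum post length of 300 tokens); the paper does not fix the sequence length, so that value is an assumption.

```python
# Sketch of the experimental setup: 100-d GloVe embeddings, Adam with the
# stated hyperparameters, categorical cross-entropy, batch size 256, 20 epochs.
# train_texts is a placeholder for the preprocessed CAMS training posts.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.optimizers import Adam

MAX_LEN, EMB_DIM = 300, 100          # MAX_LEN is an illustrative choice

tokenizer = Tokenizer()
tokenizer.fit_on_texts(train_texts)
X = pad_sequences(tokenizer.texts_to_sequences(train_texts), maxlen=MAX_LEN)

# Build the embedding matrix from the Wikipedia-trained GloVe vectors.
glove = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.split()
        glove[parts[0]] = np.asarray(parts[1:], dtype="float32")

emb_matrix = np.zeros((len(tokenizer.word_index) + 1, EMB_DIM))
for word, idx in tokenizer.word_index.items():
    if word in glove:
        emb_matrix[idx] = glove[word]

optimizer = Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
# model.compile(loss="categorical_crossentropy", optimizer=optimizer, ...)
# model.fit(X, y_onehot, batch_size=256, epochs=20)
```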
Results and Discussion
We use multi-class classifiers to find causal categories and obtain the results reported in Table 6. We test our performance with both machine learning and deep learning approaches. The two top-performing machine learning algorithms are based on Logistic Regression and Support Vector Machines. Whereas the former outperforms all existing techniques, the latter shows results comparable with the deep learning models. The hybrid model, CNN+LSTM, attains the best performance among all deep learning mechanisms with 47.78% accuracy. The CNN+GRU model performs the worst, with 40.27% accuracy on test data. It is interesting to observe that the results are consistent across all the classifiers, with a few exceptions for the classes Bias or abuse and Medication. We further analyze the best performing deep-learning classifier (CNN+LSTM) below.
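For concreteness, one way to assemble the CNN+LSTM hybrid reported above is sketched below; it reuses emb_matrix, optimizer and X from the previous sketch, while the filter sizes, hidden units, dropout rate and the one-hot label matrix y_onehot are illustrative assumptions, since the text does not specify the exact architecture.

```python
# Sketch of a CNN+LSTM hybrid classifier over the six causal categories.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Embedding, Conv1D, MaxPooling1D,
                                     LSTM, Dropout, Dense)

model = Sequential([
    Embedding(input_dim=emb_matrix.shape[0], output_dim=100,
              weights=[emb_matrix], input_length=300, trainable=False),
    Conv1D(filters=64, kernel_size=5, activation="relu"),   # local n-gram features
    MaxPooling1D(pool_size=2),
    LSTM(100),                                              # sequence modelling
    Dropout(0.3),
    Dense(6, activation="softmax"),                         # classes C0-C5
])
model.compile(loss="categorical_crossentropy", optimizer=optimizer,
              metrics=["accuracy"])
model.fit(X, y_onehot, batch_size=256, epochs=20, validation_split=0.1)
```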
Error Analysis
The accuracy of multi-class classification is found to be around 40% to 50%. We undertake a comprehensive error analysis to explore the intricacies of our task.
1. Cause classification error: We obtain the confusion matrix for CNN+LSTM as shown in Figure 4. We highlight the cells with more than 40% incorrect predictions. The predictions for Alienation and Relationship are incorrect and overlap with Bias or Abuse and Medication. This is due to complex interactions, as illustrated in the following perceivable overlap between Bias or Abuse and Relationship:
Example 4: My friends are ignoring me and I am feeling bad about it. I have lost all my friends.
Example 4 is associated with biasing and friendship, in a case where someone feels ostracized by their friends. The emphasis on friends tips the balance in favor of the class Relationship. However, the major challenge is to train the model in such a way that it understands the inferences and then chooses the most emphasized causal category using optimization techniques. We view this challenge as an open research direction.
2. Overlapping class: The overlapping problem of classes is observed with ambiguous results for some samples, e.g., for Relationship and Alienation. These class representation problems can be mitigated with data augmentation (Ansari et al., 2021) and demarcation of boundaries among classes. In a real-time scenario, demarcation of fixed boundaries is not possible due to the subjectivity of the task. We instead recommend the approximation of a newly built model over handcrafted / automated features.
3. Uncertainty: Related to the overlapping class issue above, all learning-based models obtain low performance for class 1 ('Bias or abuse') due to annotators' perceived overlap with classes 4 and 5 ('Relationship' and 'Alienation'). Future work is needed to mitigate such uncertainty; for example, delineation of discourses within the text would support a more definitive interpretation and reliable annotation.

4. Semantic Parsing: In a multi-class classification task, as the length of posts varies over a wide range, one may choose to summarize every post before applying a classifier. Our experiments with YAKE (Campos et al., 2020) for keyword extraction yielded results that deteriorated further. From this we determine that it is important to identify the causal interpretation from the full text in order to perform multi-class classification. A future avenue of research involves the exploration of discourse relations to identify segments that represent independent causes that underlie mental health disorders.
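The per-class scores and the confusion-based inspection used in this error analysis can be reproduced along the following lines; y_test and y_pred are placeholders for the 379 gold test labels and the model predictions.

```python
# Sketch: per-class F1 and a row-normalized confusion matrix, flagging cells
# where more than 40% of a gold class is predicted as a different class.
from sklearn.metrics import classification_report, confusion_matrix

labels = ["No reason", "Bias or abuse", "Jobs and careers",
          "Medication", "Relationship", "Alienation"]

print(classification_report(y_test, y_pred, target_names=labels, digits=3))

cm = confusion_matrix(y_test, y_pred, normalize="true")
for gold, row in zip(labels, cm):
    confusions = [f"{pred}: {frac:.2f}"
                  for pred, frac in zip(labels, row)
                  if pred != gold and frac > 0.40]
    if confusions:
        print(f"{gold} is often predicted as {confusions}")
```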
Implications and Limitations
The CAMS dataset provides a means for exploring the identification of reasons behind mental health disorders of online users. The notion of causal categorization is defined and used to proactively identify cases where users are at potential risk of mental depression and suicidal tendencies.
The results of this work may be employed to explore the impact of unemployment, low grades, etc. Our analysis may also be useful for the study of online behavior. A major limitation of the CAMS dataset is that users may intentionally post about mental disorders on social media, e.g., to deliberately make new friends. In this work, we have assumed that the data contains no such bias.
Conclusion
This paper introduces the task of causal analysis to identify the reasons behind mental depression and suicidal tendencies (intent). We have introduced CAMS, a dataset of 5051 instances, to categorize the direct causes of mental disorders mentioned by users in their posts. We transcend the work of Level 0 studies, moving to the next level (Level 1) of causal analysis. Our work is the combined effort of experts in the field of Social Natural Language Processing (Social NLP), including a rehabilitation counselor and clinical psychologists (CPsych). We have further implemented machine learning and deep learning models for causal analysis and found that Logistic Regression and CNN+LSTM give the best performance, respectively. In the future, we plan to extend the problem of causal analysis of mental health detection on social media as a multi-task problem. Another major future challenge for this work is the generation of explanations for multi-class classification, by leveraging causal analysis within the CAMS framework.
Acknowledgement
Figure 2: Evolution of studies for mental health detection on social media

Figure 3: Architecture of the causal analysis for mental health in social media posts

Table 6: Experimental results with the CAMS dataset. F1 is computed for all six causal classes: 'No reason' (C0), 'Bias or abuse' (C1), 'Jobs and careers' (C2), 'Medication' (C3), 'Relationship' (C4), 'Alienation' (C5).

Figure 4: Confusion matrix of the CNN+LSTM model on test data
Table 1: Different mental health datasets and their availability. A: Available, ASA: Available via Signed Agreement, AR: Available on Request for research work
tors for mental health and address some important societal needs. Domain experts refer to Level 1 as a direct cause mentioned by a user, often accompanied by a latent cause (Level 2) when posted on social media. In this work, we focus on automation by introducing a dataset for Level 1: interpretable causal analysis.
Text | Cause | Inference
"That's all I can really say. Nothing is worth the effort... I don't think I am capable of taking steps to improve my life, because I just don't even fucking care. Whatever I'm doing, wherever I go in my life, I'll find a way to be miserable. Why try... I just... ugh..." | - | -
"God help me.... I know I should go to the hospital. I know I have to keep fighting....if only to prove to my children, cursed with these genetic tendencies of mine, that life is worth living. I made my son promise at Christmas to get help, and he did and he is thriving. My life long battle is starting to wear on this old soul of mine. It feels like the same pattern over and over, no matter how many variables I change. I am a very hard person to love. My scars and cynicism are just a little too hard for anyone who tries to stay around too long." | Medication | go to the hospital, scars and cynicism, genetic tendencies
"I hate my job .. I cant stand living with my dad.. Im afraid to apply to any developer jobs or show my skills off to employers..I dont even have a car rn... I just feel like a failure..Im really lonely.. I feel like everything is getting worse and worse.. maybe I should start taking anti depressant medication.." | Jobs and careers | hate my job, feel like failure, really lonely
"it doesn't make any sense. can figure out what i may have done to trigger it, but 5 of my closest friends from high school have stopped responding to my calls or texts. i thought it was just a phone issue at first, but it is too unlikely of just coincidence." | Relationships | 5 of my closest friends, stopped responding

Table 2: Sample of the CAMS dataset for causal analysis

Range | Interpretation
< 0 | Less than chance agreement
0.01-0.20 | Slight agreement
0.21-0.40 | Fair agreement
0.41-0.60 | Moderate agreement
0.61-0.80 | Substantial agreement
0.81-0.99 | Almost perfect agreement

Table 3: Interpretation of the resulting values of the Fleiss' Kappa agreement study.
1. No reason: When there is no reason that identifies the cause of mental disorder in the post.
C_y0: ["I just want to die", "Want to end my life".]

2. Bias or abuse: A strong inclination of the mind or a preconceived opinion about something or someone. To avoid someone intentionally, or to prevent someone from taking part in the social activities of a group because they dislike the person or disapprove of their activities. It includes body shaming, physical, sexual, or emotional abuse.
C_y1: ["No one speak to me because I am fat and ugly.", "It has been 5 years now when that horrible incident of ragging shattered down all my confidence".]

3. Jobs and careers: Financial loss can have catastrophic effects on mental illness, relationships and even physical health. Poor, meaningless and unmanageable education, unemployment, un-affordable home loans, poor financial advice, and losing a job are some of the major concerns. It includes gossiping and/or social cliques, aggressive bullying behavior, poor communication and unclear expectations, and dictatorial management techniques that don't embrace employee feedback. Educational problems like picking up courses under some external pressure and poor grades are also part of this category.
C_y2: ["cant afford food or home anymore", "I do not want to read literature but my parents forced me to do so. Not happy with my grades"]

4. Medication: General drugs and other antiviral drugs can increase the risk of depression. The habit of using substances and alcohol can aggravate the problem of mental disorders. Moreover, medical problems like tumors, cancer, and other prolonged diseases can boost the presence of mental depression.
C_y3: ["I am chain smoker, want to quit, but I cant. My life is mess", "tried hard to leave drugs but this dire craving is killing me.."]

5. Relationships: When two people or a group of people fight, it may lead to a relationship or friendship drifting apart, for example, regular fights, breakups, divorce, mistrust, jealousy, betrayal, difference in opinion, inconsistency, conflicts, bad company, non-commitment, priority, envy. Problems like bad parenting and childhood trauma are also part of this category.
C_y4: ["Cannot deal with this breakup anymore", "He dumped me and its killing me"]

6. Alienation: Alienation is the feeling of life being worthless even after doing everything. There may be indicators of meaninglessness, loneliness, tiredness of daily routines, powerlessness, normlessness, isolation, and cultural estrangement.
C_y5: ["I don't know why am I living, everything seems to be meaningless"]
https://suicidology.org/wp-content/uploads/2021/01/2019 datapgsv2b.pdf
We recognize 'medication' as both an indicator of physical/emotional illness (e.g., an intent to alleviate illness) and a potential cause of illness (e.g., medication-induced depression).
We combine these two corpora, introducing them as the CAMS dataset, which is further annotated by our trained student annotators. 4. We consult mental health practitioners, a clinical psychologist and a rehabilitation counselor, to verify the combined dataset.
We acknowledge our three student annotators: Simran jeet Kaur, Astha Jain and Ritika Bhardwaj. We also acknowledge Amrith Krishna for his kind support and for proofreading this manuscript. Publication costs are funded by NSERC Discovery Grant (RGPIN-2017-05377), held by Vijay Mago, Department of Computer Science, Lakehead University, Canada.
Detecting suicidal ideation on forums: proof-of-concept study. A E Aladag, S Muderrisoglu, N B Akbas, O Zahmacioglu, H O Bingol, Journal of medical Internet research. 206215Aladag, A. E., Muderrisoglu, S., Akbas, N. B., Zahma- cioglu, O., and Bingol, H. O. (2018). Detecting suicidal ideation on forums: proof-of-concept study. Journal of medical Internet research, 20(6):e215.
Data augmentation for mental health classification on social media. G Ansari, M Garg, C Saxena, arXiv:2112.10064arXiv preprintAnsari, G., Garg, M., and Saxena, C. (2021). Data aug- mentation for mental health classification on social me- dia. arXiv preprint arXiv:2112.10064.
Marital and family therapy for depression in adults. S R Beach, D J Jones, Beach, S. R. and Jones, D. J. (2002). Marital and family therapy for depression in adults.
Technology for our future? exploring the duty to report and processes of subjectification relating to digitalized suicide prevention. T Broer, Information. 113170Broer, T. (2020). Technology for our future? exploring the duty to report and processes of subjectification relating to digitalized suicide prevention. Information, 11(3):170.
Yake! keyword extraction from single documents using multiple local features. R Campos, V Mangaravite, A Pasquali, A Jorge, C Nunes, Jatowt , A , Information Sciences. 509Campos, R., Mangaravite, V., Pasquali, A., Jorge, A., Nunes, C., and Jatowt, A. (2020). Yake! keyword ex- traction from single documents using multiple local fea- tures. Information Sciences, 509:257-289.
Latent suicide risk detection on microblog via suicide-oriented word embeddings and layered attention. L Cao, H Zhang, L Feng, Z Wei, X Wang, N Li, X He, arXiv:1910.12038arXiv preprintCao, L., Zhang, H., Feng, L., Wei, Z., Wang, X., Li, N., and He, X. (2019). Latent suicide risk detection on mi- croblog via suicide-oriented word embeddings and lay- ered attention. arXiv preprint arXiv:1910.12038.
Building and using personal knowledge graph to improve suicidal ideation detection on social media. L Cao, H Zhang, L Feng, IEEE Transactions on Multimedia. Cao, L., Zhang, H., and Feng, L. (2020). Building and using personal knowledge graph to improve suicidal ideation detection on social media. IEEE Transactions on Multimedia.
Causal understanding of fake news dissemination on social media. L Cheng, R Guo, K Shu, H Liu, Cheng, L., Guo, R., Shu, K., and Liu, H. (2021). Causal understanding of fake news dissemination on social me- dia.
Smhd: a large-scale resource for exploring online language usage for multiple mental health conditions. A Cohan, B Desmet, A Yates, L Soldaini, S Macavaney, N Goharian, 27th International Conference on Computational Linguistics. ACLCohan, A., Desmet, B., Yates, A., Soldaini, L., MacAvaney, S., and Goharian, N. (2018). Smhd: a large-scale re- source for exploring online language usage for multiple mental health conditions. In 27th International Confer- ence on Computational Linguistics, pages 1485-1497. ACL.
Xa-bilstm: A deep learning approach for depression detection in imbalanced data. Q Cong, Z Feng, F Li, Y Xiang, G Rao, C Tao, 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEECong, Q., Feng, Z., Li, F., Xiang, Y., Rao, G., and Tao, C. (2018). Xa-bilstm: A deep learning approach for depres- sion detection in imbalanced data. In 2018 IEEE Inter- national Conference on Bioinformatics and Biomedicine (BIBM), pages 1624-1627. IEEE.
Social media, big data, and mental health: current advances and ethical implications. Current opinion in psychology. M Conway, D Connor, 9Conway, M. and O'Connor, D. (2016). Social media, big data, and mental health: current advances and ethical im- plications. Current opinion in psychology, 9:77-82.
Clpsych 2015 shared task: Depression and ptsd on twitter. G Coppersmith, M Dredze, C Harman, K Hollingshead, Mitchell , M , Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality. the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical RealityCoppersmith, G., Dredze, M., Harman, C., Hollingshead, K., and Mitchell, M. (2015). Clpsych 2015 shared task: Depression and ptsd on twitter. In Proceedings of the 2nd Workshop on Computational Linguistics and Clini- cal Psychology: From Linguistic Signal to Clinical Re- ality, pages 31-39.
Predicting depression via social media. M De Choudhury, M Gamon, S Counts, E Horvitz, Seventh international AAAI conference on weblogs and social media. De Choudhury, M., Gamon, M., Counts, S., and Horvitz, E. (2013). Predicting depression via social media. In Seventh international AAAI conference on weblogs and social media.
D Demszky, D Movshovitz-Attias, J Ko, A Cowen, G Nemade, Ravi , S , arXiv:2005.00547Goemotions: A dataset of fine-grained emotions. arXiv preprintDemszky, D., Movshovitz-Attias, D., Ko, J., Cowen, A., Nemade, G., and Ravi, S. (2020). Goemotions: A dataset of fine-grained emotions. arXiv preprint arXiv:2005.00547.
Diagnostic and statistical manual of mental disorders. F Edition, Am Psychiatric Assoc. 21Edition, F. et al. (2013). Diagnostic and statistical manual of mental disorders. Am Psychiatric Assoc, 21.
An ethical inquiry into youth suicide prevention using social media mining. A Eskisabel-Azpiazu, R Cerezo-Menéndez, D Gayo-Avello, 227Internet Research Ethics for the Social AgeEskisabel-Azpiazu, A., Cerezo-Menéndez, R., and Gayo- Avello, D. (2017). An ethical inquiry into youth suicide prevention using social media mining. Internet Research Ethics for the Social Age, 227.
Fleiss' kappa statistic without paradoxes. R Falotico, P Quatto, Quality & Quantity. 492Falotico, R. and Quatto, P. (2015). Fleiss' kappa statistic without paradoxes. Quality & Quantity, 49(2):463-470.
Garg, M. (2021). Quantifying the suicidal tendency on social media: A survey. arXiv preprint arXiv:2110.03663.
Gaur, M., Alambo, A., Sain, J. P., Kursuncu, U., Thirunarayan, K., Kavuluru, R., Sheth, A., Welton, R., and Pathak, J. (2019). Knowledge-aware assessment of severity of suicide risk for early intervention. In The World Wide Web Conference, pages 514-525.
Deep learning for suicide and depression identification with unsupervised label correction. A Haque, V Reddi, T Giallanza, arXiv:2102.09427arXiv preprintHaque, A., Reddi, V., and Giallanza, T. (2021). Deep learning for suicide and depression identifica- tion with unsupervised label correction. arXiv preprint arXiv:2102.09427.
Supervised learning for suicidal ideation detection in online user content. S Ji, C P Yu, S.-F Fung, S Pan, G Long, Complexity. Ji, S., Yu, C. P., Fung, S.-f., Pan, S., and Long, G. (2018). Supervised learning for suicidal ideation detection in on- line user content. Complexity, 2018.
The three-step theory of suicide: Description, evidence, and some useful points of clarification. E D Klonsky, M C Pachkowski, A Shahnaz, A M May, Preventive medicine. 152106549Klonsky, E. D., Pachkowski, M. C., Shahnaz, A., and May, A. M. (2021). The three-step theory of suicide: Descrip- tion, evidence, and some useful points of clarification. Preventive medicine, 152:106549.
Overview of erisk: early risk prediction on the internet. D E Losada, F Crestani, J Parapar, International conference of the cross-language evaluation forum for european languages. SpringerLosada, D. E., Crestani, F., and Parapar, J. (2018). Overview of erisk: early risk prediction on the internet. In International conference of the cross-language eval- uation forum for european languages, pages 343-361. Springer.
Job loss and depression: The role of subjective expectations. B Mandal, P Ayyagari, W T Gallo, Social Science & Medicine. 724Mandal, B., Ayyagari, P., and Gallo, W. T. (2011). Job loss and depression: The role of subjective expectations. Social Science & Medicine, 72(4):576-583.
Suicide ideation of individuals in online social networks. N Masuda, I Kurahashi, H Onari, PloS one. 8462262Masuda, N., Kurahashi, I., and Onari, H. (2013). Suicide ideation of individuals in online social networks. PloS one, 8(4):e62262.
From stance to concern: Adaptation of propositional analysis to new tasks and domains. B Mather, B J Dorr, A Dalton, W De Beaumont, O Rambow, S M Schmer-Galunder, Findings of the Association for Computational Linguistics: Human Language Technologies. Dublin, IrelandMather, B., Dorr, B. J., Dalton, A., de Beaumont, W., Ram- bow, O., and Schmer-Galunder, S. M. (2022). From stance to concern: Adaptation of propositional analysis to new tasks and domains. In Findings of the Association for Computational Linguistics: Human Language Tech- nologies, Dublin, Ireland, May.
McGinty, E. E., Presskreischer, R., Han, H., and Barry, C. L. (2020). Psychological distress and loneliness reported by US adults in 2018 and April 2020. JAMA, 324(1):93-94.
Dual attention networks for multimodal reasoning and matching. H Nam, J.-W Ha, Kim , J , Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionNam, H., Ha, J.-W., and Kim, J. (2017). Dual attention net- works for multimodal reasoning and matching. In Pro- ceedings of the IEEE conference on computer vision and pattern recognition, pages 299-307.
Mental health before and during the covid-19 pandemic: a longitudinal probability sample survey of the uk population. M Pierce, H Hope, T Ford, S Hatch, M Hotopf, A John, E Kontopantelis, R Webb, S Wessely, S Mc-Manus, The Lancet Psychiatry. 710Pierce, M., Hope, H., Ford, T., Hatch, S., Hotopf, M., John, A., Kontopantelis, E., Webb, R., Wessely, S., Mc- Manus, S., et al. (2020). Mental health before and dur- ing the covid-19 pandemic: a longitudinal probability sample survey of the uk population. The Lancet Psychi- atry, 7(10):883-892.
Identifying depression on reddit: The effect of training data. I Pirina, Ç Çöltekin, Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop & Shared Task. the 2018 EMNLP Workshop SMM4H: The 3rd Social Media Mining for Health Applications Workshop & Shared TaskPirina, I. and Çöltekin, Ç . (2018). Identifying depression on reddit: The effect of training data. In Proceedings of the 2018 EMNLP Workshop SMM4H: The 3rd So- cial Media Mining for Health Applications Workshop & Shared Task, pages 9-12.
Radell, M. L., Abo Hamza, E. G., Daghustani, W. H., Perveen, A., and Moustafa, A. A. (2021). The impact of different types of abuse on depression. Depression Research and Treatment, 2021.
Phase: Learning emotional phase-aware representations for suicide ideation detection on social media. R Sawhney, H Joshi, L Flek, R Shah, Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main VolumeSawhney, R., Joshi, H., Flek, L., and Shah, R. (2021). Phase: Learning emotional phase-aware representations for suicide ideation detection on social media. In Pro- ceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2415-2428.
A hybridized feature extraction approach to suicidal ideation detection from social media post. F M Shah, F Haque, R U Nur, S Al Jahan, Z Mamud, 2020 IEEE Region 10 Symposium (TENSYMP). IEEEShah, F. M., Haque, F., Nur, R. U., Al Jahan, S., and Ma- mud, Z. (2020). A hybridized feature extraction ap- proach to suicidal ideation detection from social media post. In 2020 IEEE Region 10 Symposium (TENSYMP), pages 985-988. IEEE.
Depression detection via harvesting social media: A multimodal dictionary learning solution. G Shen, J Jia, L Nie, F Feng, C Zhang, T Hu, T.-S Chua, W Zhu, IJCAI. Shen, G., Jia, J., Nie, L., Feng, F., Zhang, C., Hu, T., Chua, T.-S., and Zhu, W. (2017). Depression detection via har- vesting social media: A multimodal dictionary learning solution. In IJCAI, pages 3838-3844.
A prioritization model for suicidality risk assessment. H.-C Shing, P Resnik, D W Oard, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsShing, H.-C., Resnik, P., and Oard, D. W. (2020). A pri- oritization model for suicidality risk assessment. In Pro- ceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8124-8137.
Depression in cancer patients: Pathogenesis, implications and treatment. H R Smith, Oncology letters. 94Smith, H. R. (2015). Depression in cancer patients: Patho- genesis, implications and treatment. Oncology letters, 9(4):1509-1514.
Causal explanation analysis on social media. Y Son, N Bayas, H A Schwartz, arXiv:1809.01202arXiv preprintSon, Y., Bayas, N., and Schwartz, H. A. (2018). Causal explanation analysis on social media. arXiv preprint arXiv:1809.01202.
Depression among patients with hiv/aids: research development and effective interventions (gapresearch). B X Tran, R Ho, C S Ho, C A Latkin, H T Phan, G H Ha, G T Vu, J Ying, M W Zhang, International journal of environmental research and public health. 16101772Tran, B. X., Ho, R., Ho, C. S., Latkin, C. A., Phan, H. T., Ha, G. H., Vu, G. T., Ying, J., and Zhang, M. W. (2019). Depression among patients with hiv/aids: research de- velopment and effective interventions (gapresearch). In- ternational journal of environmental research and public health, 16(10):1772.
Dreaddit: A reddit dataset for stress analysis in social media. E Turcan, K Mckeown, arXiv:1911.00133arXiv preprintTurcan, E. and McKeown, K. (2019). Dreaddit: A reddit dataset for stress analysis in social media. arXiv preprint arXiv:1911.00133.
Depression and self-harm risk assessment in online forums. A Yates, A Cohan, N Goharian, arXiv:1709.01848arXiv preprintYates, A., Cohan, A., and Goharian, N. (2017). Depression and self-harm risk assessment in online forums. arXiv preprint arXiv:1709.01848.
| [
"https://github.com/drmuskangarg/CAMS."
] |
[
"A Low Dimensionality Representation for Language Variety Identification",
"A Low Dimensionality Representation for Language Variety Identification"
] | [
"Francisco Rangel francisco.rangel@autoritas.es \nUniversitat Politècnica de València\nSpain\n\nAutoritas Consulting\nSpain\n",
"Marc Franco-Salvador \nUniversitat Politècnica de València\nSpain\n",
"Paolo Rosso prosso@dsic.upv.es \nUniversitat Politècnica de València\nSpain\n"
] | [
"Universitat Politècnica de València\nSpain",
"Autoritas Consulting\nSpain",
"Universitat Politècnica de València\nSpain",
"Universitat Politècnica de València\nSpain"
] | [] | Language variety identification aims at labelling texts in a native language (e.g. Spanish, Portuguese, English) with its specific variation (e.g. Argentina, Chile, Mexico, Peru, Spain; Brazil, Portugal; UK, US). In this work we propose a low dimensionality representation (LDR) to address this task with five different varieties of Spanish: Argentina, Chile, Mexico, Peru and Spain. We compare our LDR method with common state-of-the-art representations and show an increase in accuracy of ∼35%. Furthermore, we compare LDR with two reference distributed representation models. Experimental results show competitive performance while dramatically reducing the dimensionality -and increasing the big data suitability -to only 6 features per variety. Additionally, we analyse the behaviour of the employed machine learning algorithms and the most discriminating features. Finally, we employ an alternative dataset to test the robustness of our low dimensionality representation with another set of similar languages. | 10.1007/978-3-319-75487-1_13 | [
"https://arxiv.org/pdf/1705.10754v1.pdf"
] | 4,128,593 | 1705.10754 | 2def310b92f233165c7fca5792422ff842e61b49 |
A Low Dimensionality Representation for Language Variety Identification
Francisco Rangel francisco.rangel@autoritas.es
Universitat Politècnica de València
Spain
Autoritas Consulting
Spain
Marc Franco-Salvador
Universitat Politècnica de València
Spain
Paolo Rosso prosso@dsic.upv.es
Universitat Politècnica de València
Spain
A Low Dimensionality Representation for Language Variety Identification
low dimensionality representationlanguage variety identificationsimilar languages discriminationauthor profilingbig datasocial media
Language variety identification aims at labelling texts in a native language (e.g. Spanish, Portuguese, English) with its specific variation (e.g. Argentina, Chile, Mexico, Peru, Spain; Brazil, Portugal; UK, US). In this work we propose a low dimensionality representation (LDR) to address this task with five different varieties of Spanish: Argentina, Chile, Mexico, Peru and Spain. We compare our LDR method with common state-of-the-art representations and show an increase in accuracy of ∼35%. Furthermore, we compare LDR with two reference distributed representation models. Experimental results show competitive performance while dramatically reducing the dimensionality -and increasing the big data suitability -to only 6 features per variety. Additionally, we analyse the behaviour of the employed machine learning algorithms and the most discriminating features. Finally, we employ an alternative dataset to test the robustness of our low dimensionality representation with another set of similar languages.
Introduction
Language variety identification aims at labelling texts in a native language (e.g. Spanish, Portuguese, English) with their specific variation (e.g. Argentina, Chile, Mexico, Peru, Spain; Brazil, Portugal; UK, US). Although at first sight language variety identification may seem a classical text classification problem, cultural idiosyncrasies may influence the way users construct their discourse, the kind of sentences they build, the expressions they use or their particular choice of words. Due to that, we can consider language variety identification as a double problem of text classification and author profiling, where information about how language is shared by people may help to discriminate among classes of authors depending on their language variety. This task is especially important in social media. Although the vastness and accessibility of the Internet have blurred frontiers among regions, companies are still very interested in author profiling segmentation. For example, when a new product is launched to the market, knowing the geographical distribution of opinions may help to improve marketing campaigns. Or, given a security threat, knowing the possible cultural idiosyncrasies of the author may help to better understand who could have written the message.
Language variety identification is a popular research topic of natural language processing. In the last years, several tasks and workshops have been organized: the Workshop on Language Technology for Closely Related Languages and Language Variants @ EMNLP 2014 1 ; the VarDial Workshop @ COLING 2014 -Applying NLP Tools to Similar Languages, Varieties and Dialects 2 ; and the LT4VarDial -Joint Workshop on Language Technology for Closely Related Languages, Varieties and Dialect 3 @ RANLP [14] [12]. We can find also several works focused on the task. In [10] the authors addressed the problem of identifying Arabic varieties in blogs and social fora. They used character n-gram features to discriminate between six different varieties and obtained accuracies between 70%-80%. Similarly, [13] collected 1,000 news articles of two varieties of Portuguese. They applied different features such as word and character n-grams and reported accuracies over 90%. With respect to the Spanish language, [6] focused on varieties from Argentina, Chile, Colombia, Mexico and Spain in Twitter. They used meta-learning and combined four types of features: i) character n-gram frequency profiles, ii) character n-gram language models, iii) Lempel-Ziv-Welch compression and iv) syllable-based language models. They obtained an interesting 60%-70% accuracy of classification.
We are interested in discovering which kind of features capture higher differences among varieties. Our hypothesis is that language varieties differ mainly in lexicographic clues. We show an example in Table 1.
English: I was goofing around with my dog and I lost my mobile.
ES-Argentina: Estaba haciendo boludeces con mi perro y extravié el celular.
ES-Mexico: Estaba haciendo el pendejo con mi perro y extravié el celular.
ES-Spain: Estaba haciendo el tonto con mi perro y perdí el móvil.

In this work we focus on Spanish language variety identification. We differentiate from the previous works as follows: i) instead of n-gram based representations, we propose a low dimensionality representation that is helpful when dealing with big data in social media; ii) in order to reduce the possible over-fitting, our training and test partitions do not share any author or instance between them 4; and iii) in contrast to the Twitter dataset of [6], we will make our dataset available to the research community.
Low Dimensionality Representation
The key aspect of the low dimensionality representation (LDR) is the use of weights to represent the probability of each term to belong to each one of the different language varieties. We assume that the distribution of weights for a given document should be closer to the weights of its corresponding language variety. Formally, the LDR is estimated as follows:
Term-frequency - inverse document frequency (tf-idf) matrix creation. First, we apply the tf-idf [11] weighting for the terms of the documents of the training set D. As a result we obtain the following matrix:

$$\Delta = \begin{pmatrix} w_{11} & w_{12} & \dots & w_{1m} & \delta(d_1) \\ w_{21} & w_{22} & \dots & w_{2m} & \delta(d_2) \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ w_{n1} & w_{n2} & \dots & w_{nm} & \delta(d_n) \end{pmatrix} \qquad (1)$$

where each row in the matrix $\Delta$ represents a document d, each column represents a vocabulary term t, $w_{ij}$ represents its tf-idf weight, and $\delta(d_i)$ represents the assigned class c of document i, that is, the language variety actually assigned to this document.
Class-dependent term weighting. Using the matrix ∆, we obtain the class-dependent term weight matrix β. This matrix contains the weights of each term t for each language variety C on the basis of Eq. 2:
$$W(t,c) = \frac{\sum_{d \in D \,/\, c = \delta(d)} w_{dt}}{\sum_{d \in D} w_{dt}}, \quad \forall d \in D,\ c \in C \qquad (2)$$
Basically, the term weight W (t, c) is the ratio between the weights of the documents belonging to a concrete language variety c and the total distribution of weights for that term t.
Class-dependent document representation. We employ the class-dependent term weights β to obtain the final representation of the documents as follows:
$$d = \{F(c_1), F(c_2), ..., F(c_n)\} \quad \forall c \in C, \qquad (3)$$
$$F(c_i) = \{avg, std, min, max, prob, prop\} \qquad (4)$$
where each F (c i ) contains the set of features showed in Eq. 4 and described in Table 2.
As we can see, our class-dependent weights β are employed to extract a small 5 (but very discriminant) number of features for each language variety. 6 We note that the same process can be followed in order to represent a test document d′ ∈ D′. We just need to use the β matrix obtained with D to index the document d′ by means of Eq. 3.
5 Our hypothesis is that the distribution of weights for a given document should be closer to the weights of its corresponding language variety; therefore, we use the most common descriptive statistics to measure this variability among language varieties.
avg: The average weight of a document, calculated as the sum of the weights W(t,c) of its terms divided by the total number of vocabulary terms of the document.
std: The standard deviation of the weights of a document, calculated as the root square of the sum of all the weights W(t,c) minus the average.
min: The minimum weight of a document, i.e. the lowest term weight W(t,c) found in the document.
max: The maximum weight of a document, i.e. the highest term weight W(t,c) found in the document.
prob: The overall weight of a document, i.e. the sum of the weights W(t,c) of the terms of the document divided by the total number of terms of the document.
prop: The proportion between the number of vocabulary terms of the document and the total number of terms of the document.
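A minimal sketch of LDR is given below, assuming a scikit-learn TfidfVectorizer as the tf-idf implementation of Eq. 1; the function and variable names are illustrative, and the statistics are computed over the document terms that appear in the training vocabulary.

```python
# Sketch: class-dependent term weights W(t,c) of Eq. 2 and the six LDR
# features of Eq. 4 per language variety (30 features for 5 varieties).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def fit_ldr(train_docs, train_labels, classes):
    vec = TfidfVectorizer()
    delta = vec.fit_transform(train_docs)                  # tf-idf matrix (Eq. 1)
    y = np.asarray(train_labels)
    totals = np.asarray(delta.sum(axis=0)).ravel() + 1e-12
    rows = [np.asarray(delta[np.where(y == c)[0]].sum(axis=0)).ravel() / totals
            for c in classes]
    W = np.vstack(rows).T                                  # shape (|V|, |C|), Eq. 2
    return vec, W

def ldr_features(doc, vec, W):
    tokens = vec.build_analyzer()(doc)
    idx = [vec.vocabulary_[t] for t in tokens if t in vec.vocabulary_]
    feats = []
    for c in range(W.shape[1]):
        w = W[idx, c] if idx else np.zeros(1)
        feats += [w.mean(), w.std(), w.min(), w.max(),     # avg, std, min, max
                  w.sum() / max(len(tokens), 1),           # prob
                  len(idx) / max(len(tokens), 1)]          # prop
    return np.array(feats)
```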
Evaluation Framework
In this section, we describe the corpus and the alternative representations that we employ in this work.
HispaBlogs Corpus
We have created the HispaBlogs dataset 7 by collecting posts from Spanish blogs from five different countries: Argentina, Chile, Mexico, Peru and Spain. For each country, there are 450 and 200 blogs respectively for training and test, ensuring that each author appears only in one set. Each blog contains at least 10 posts. The total number of blogs is 2,250 and 1,000 respectively. Statistics of the number of words are shown in Table 3.
Alternative representations
We are interested in investigating the impact of the proposed representation and compare its performance with state-of-the-art representations based on n-grams and with two approaches based on the recent and popular distributed representations of words by means of the continuous Skip-gram model [1].
6 Using the LDR, a document is represented by a total set of features equal to 6 multiplied by the number of categories (the 5 language varieties), in our case 30 features. This is a considerable dimensionality reduction that may be helpful to deal with big data environments.
7 The HispaBlogs dataset was collected by experts on social media from the Autoritas Consulting company (http://www.autoritas.net). Autoritas experts in the different countries selected popular bloggers related to politics, online marketing, technology or trends. The HispaBlogs dataset is publicly available at: https://github.com/autoritas/RD-Lab/tree/master/data/HispaBlogs

State-of-the-art representations. State-of-the-art representations are mainly based on n-gram models, hence we tested character and word based ones, as well as words with tf-idf weights. For each of them, we iterated n from 1 to 10 and selected the 1,000, 5,000 and 10,000 most frequent grams. The best results were obtained with the 10,000 most frequent BOW, character 4-grams and tf-idf 2-grams. Therefore, we will use them in the evaluation.
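These n-gram baselines can be instantiated, for example, with scikit-learn vectorizers; train_docs is a placeholder for the training posts.

```python
# Sketch of the three baseline representations: bag-of-words, character
# 4-grams and tf-idf word 2-grams, each capped at the 10,000 most frequent.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

bow = CountVectorizer(max_features=10000)                         # word unigrams
char4 = CountVectorizer(analyzer="char", ngram_range=(4, 4),
                        max_features=10000)                        # character 4-grams
tfidf2 = TfidfVectorizer(ngram_range=(2, 2), max_features=10000)   # tf-idf 2-grams

X_bow = bow.fit_transform(train_docs)
X_char = char4.fit_transform(train_docs)
X_tfidf = tfidf2.fit_transform(train_docs)
```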
Distributed representations. Due to the increasing popularity of distributed representations [4], we used the continuous Skip-gram model to generate distributed representations of words (e.g. n-dimensional vectors), with further refinements in order to use them with documents. The continuous Skip-gram model [7,8] is an iterative algorithm which attempts to maximize the classification of the context surrounding a word. Formally, given a word w(t), and its surrounding words w(t−c), w(t−c+1), ..., w(t+c) inside a window of size 2c + 1, the training objective is to maximize the average of the log probability shown in Equation 5:
$$\frac{1}{T}\sum_{t=1}^{T} \;\sum_{-c \le j \le c,\, j \neq 0} \log p(w_{t+j} \mid w_t) \qquad (5)$$
To estimate p(w t+j |w t ) we used negative sampling [8] that is a simplified version of the Noise Contrastive Estimation (NCE) [3,9] which is only concerned with preserving vector quality in the context of Skip-gram learning. The basic idea is to use logistic regression to distinguish the target word W O from draws from a noise distribution P n (w), having k negative samples for each word. Formally, the negative sampling estimates p(w O |w I ) following Equation 6:
$$\log \sigma\!\left(v_{w_O}^{\top} v_{w_I}\right) + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}\!\left[\log \sigma\!\left(-v_{w_i}^{\top} v_{w_I}\right)\right] \qquad (6)$$
where σ(x) = 1/(1+exp(−x)). The experimental results in [8] show that this function obtains better results at the semantic level than hierarchical softmax [2] and NCE. In order to combine the word vectors to represent a complete sentence we used two approaches. First, given a list of word vectors $(w_1, w_2, ..., w_n)$ belonging to a document, we generated a vector representation v of its content by estimating the average of their dimensions: $v = n^{-1} \sum_{i=1}^{n} w_i$. We call this representation Skip-gram in the evaluation. In addition, we used Sentence vectors (SenVec) [5], a variant that follows the Skip-gram architecture to train a special vector sv representing the sentence. Basically, before each context window movement, SenVec uses a special vector sv in place of w(t) with the objective of maximizing the classification of the surrounding words. In consequence, sv will be a distributed vector of the complete sentence.
Following the state-of-the-art approach [5], in the evaluation we used a logistic classifier for both the SenVec and Skip-gram approaches. 8
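A sketch of the Skip-gram document representation under the hyperparameters of footnote 8 is shown below, using gensim as an illustrative stand-in for the word2vec tools; the whitespace tokenization and variable names are simplifying assumptions.

```python
# Sketch: train 300-d Skip-gram vectors (window 10, 20 negative samples),
# average them per document, and train a logistic regression classifier.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

tokenized = [doc.lower().split() for doc in train_docs]   # simplistic tokenizer
w2v = Word2Vec(sentences=tokenized, vector_size=300, window=10,
               sg=1, negative=20, min_count=1)

def doc_vector(tokens, model):
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

X_sg = np.vstack([doc_vector(toks, w2v) for toks in tokenized])
clf = LogisticRegression(max_iter=1000).fit(X_sg, train_labels)
```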
Experimental Results
In this section we show the experimental results obtained with the machine learning algorithms that best solve the problem with the proposed representation, the impact of the preprocessing on the performance, and the results obtained in comparison with state-of-the-art and distributed representations. We also present an error analysis that provides useful insights to better understand differences among languages, an in-depth analysis of the contribution of the different features, and a cost analysis that highlights the suitability of LDR for a big data scenario.
Machine learning algorithms comparison
We tested several machine learning algorithms 9 with the aim of selecting the one that best solves the task. As can be seen in Table 4, the Multiclass Classifier 10 obtains the best result (results in the rest of the paper refer to the Multiclass Classifier). We carried out a statistical test of significance with respect to the next two systems with the highest performance: SVM (z_0.05 = 0.880 < 1.960) and LogitBoost (z_0.05 = 1.983 > 1.960).
Table 4. Accuracy results with different machine learning algorithms.
8 We used 300-dimensional vectors, context windows of size 10, and 20 negative words for each sample. We preprocessed the text with word lowercasing, tokenization, removal of words of length one, and phrase detection using the word2vec tools: https://code.google.com/p/word2vec/
9 http://www.cs.waikato.ac.nz/ml/weka/
10 We used SVM with default parameters and an exhaustive correction code to transform the multiclass problem into a binary one.
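The text reports z statistics but does not spell out the exact test; one common choice consistent with those values is a two-proportion z-test over the accuracies of two systems on the same test set, sketched below with illustrative accuracy values.

```python
# Sketch: two-proportion z-test comparing the accuracies of two classifiers
# evaluated on the same n test documents (1,000 blogs in this corpus).
from math import sqrt

def accuracy_z(acc_a, acc_b, n):
    p = (acc_a + acc_b) / 2                   # pooled proportion (equal n)
    se = sqrt(2 * p * (1 - p) / n)
    return abs(acc_a - acc_b) / se

z = accuracy_z(0.711, 0.698, 1000)            # illustrative accuracies
print(f"z = {z:.3f}; difference significant at alpha=0.05 if z > 1.96")
```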
Preprocessing impact
The proposed representation aims at using the whole vocabulary to obtain the weights of its terms. Social media texts may contain noise and badly written words. Moreover, some of these words may be used by only a few authors. With the aim of investigating their effect on the classification, we carried out a preprocessing step to remove words that appear fewer than n times in the corpus, iterating n between 1 and 100. The corresponding accuracies are shown in Figure 1. In the left part of the figure (a), results for n between 1 and 10 are shown on a continuous scale. In the right part (b), values from 10 to 100 are shown on a non-continuous scale. As can be seen, the best result was obtained with n equal to 5, with an accuracy of 71.1%. As expected, the proposed representation takes advantage of the whole vocabulary, although it is advisable to remove words with very few occurrences that may alter the results. We show examples of those infrequent words in Table 5. Table 5. Very infrequent words.
In Figure 2, when analysing the evolution of the number of remaining words in function of the value of n, we can see a high number of words with very low frequency of occurrence. These words may introduce a high amount of noise in our LDR weight estimation. In addition, removing these words may be also beneficial in order to reduce the processing time needed to obtain the representation. This fact has special relevance for improving the performance in big data environments.
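The frequency-based pruning discussed above can be sketched as follows; tokenized_docs is a placeholder for the tokenized training posts.

```python
# Sketch: drop low-frequency words before estimating the LDR weights
# (a threshold around n = 5 gave the best accuracy in these experiments;
# whether the boundary is < n or <= n occurrences is a minor detail).
from collections import Counter

def prune_vocabulary(tokenized_docs, n=5):
    freq = Counter(tok for doc in tokenized_docs for tok in doc)
    kept = {w for w, c in freq.items() if c >= n}
    return [[tok for tok in doc if tok in kept] for doc in tokenized_docs]

pruned_docs = prune_vocabulary(tokenized_docs, n=5)
```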
Language variety identification results
In Table 6 we show the results obtained by the described representations employing the Multiclass Classifier. As can be appreciated, the proposed low dimensionality representation improves the results obtained with the state-of-the-art representations by more than 35%. BOW obtains slightly better results than character 4-grams, and both of them significantly improve the ones obtained with tf-idf 2-grams. Instead of selecting the most frequent n-grams, our approach takes advantage of the whole vocabulary and assigns higher weights to the most discriminative words for the different language varieties, as shown in Equation 2. We highlight that our LDR obtains competitive results compared with the use of distributed representations. Concretely, there is no significant difference among them (Skip-gram z_0.05 = 0.5457 < 1.960 and SenVec z_0.05 = 0.7095 < 1.960). In addition, our proposal reduces the dimensionality considerably, by one order of magnitude, as shown in Table 6.
Error analysis
We aim at analysing the errors of LDR to better understand which varieties are the most difficult to discriminate. As can be seen in Table 7, the Spanish variety is the easiest to discriminate. However, one of the highest confusions occurs from Argentinian to Spanish. Mexican and Spanish texts were considerably confused with Argentinian too. Finally, the highest confusion occurs from Peruvian to Chilean, although the lowest average confusion occurs with Peruvian. In general, Latin American varieties are closer to each other and it is more difficult to differentiate among them. Language evolves over time. It is logical that language varieties of nearby countries, such as the Latin American ones, evolved in a more similar manner than the Spanish variety. It is also logical that even more language variety similarities are shared across neighbouring countries, e.g. Chilean compared with Peruvian and Argentinian. Table 7. Confusion matrix of the 5-class classification.
Most discriminating features
In Table 8 we show the most discriminant features, sorted by their information gain (IG). As can be seen, the highest gain is obtained by average, maximum and minimum, and standard deviation. On the other hand, probability and proportionality features have low information gain.
We experimented with different sets of features and show the results in Figure 4. As may be expected, average-based features obtain high accuracies (67.0%). However, although features based on standard deviation do not have the highest information gain, they obtain the highest results individually (69.2%), as does their combination with average-based features (70.8%). Features based on minimum and maximum obtain low results individually (48.3% and 54.7% respectively), but in combination they obtain a significant increase (61.1%). The combination of the previous features obtains almost the highest accuracy (71.0%), equivalent to the accuracy obtained when probability and proportionality features are also included (71.1%).
Cost analysis
We analyse the cost from two perspectives: i) the complexity of computing the features; and ii) the number of features needed to represent a document. Defining l as the number of different language varieties and n as the number of terms of the document to be classified, the cost of obtaining the features of Table 2 (average, minimum, maximum, probability and proportionality) is O(l · n). Defining m as the number of terms in the document that coincide with some term in the vocabulary, the cost of obtaining the standard deviation is O(l · m). As the average is needed prior to the standard deviation calculation, the total cost is O(l · n) + O(l · m), which is equal to O(max(l · n, l · m)) = O(l · n). Since the number of terms in the document will always be equal to or greater than the number of coincident terms (n ≥ m), and as the number of terms in the document will always be much higher than the number of language varieties (l << n), we can determine the cost as linear with respect to the number of terms in the document, O(n). With respect to the number of features needed to represent a document, we showed in Table 6 the considerable reduction achieved by the proposed low dimensionality representation.
Robustness
In order to analyse the robustness of the low dimensionality representation across different languages, we experimented with the development set of the DSLCC corpus 11 from the Discriminating between Similar Languages task [12]. The corpus consists of 2,000 sentences per language or variety, with between 20 and 100 tokens per sentence, obtained from news headers. In Table 9 we show the results obtained with the proposed representation and the two distributed representations, Skip-gram and SenVec. It is important to notice that, in general, when a particular representation improves for one language, it is at the cost of the other one. We can conclude that the three representations obtain comparable results, which supports the robustness of the low dimensionality representation. Table 9. Accuracy results in the development set of the DSLCC. Significance is marked in bold when some representation obtains significantly better results than the next best performing representation (e.g. results for SenVec in Portugal Portuguese are significantly higher than LDR, which at the same time are significantly higher than Skip-gram).
Conclusions
In this work, we proposed LDR, a low dimensionality representation for language variety identification. Experimental results outperformed traditional state-of-the-art representations and obtained competitive results compared with two distributed representation-based approaches that employed the popular continuous Skip-gram model. The dimensionality reduction obtained by means of LDR is from thousands of features to only 6 features per language variety. This makes it possible to deal with large collections in big data environments such as social media. Recently, we have applied LDR to the age and gender identification task, obtaining competitive results with the best performing teams in the author profiling task at the PAN 12 Lab at CLEF. 13 As future work, we plan to apply LDR to other author profiling tasks such as personality recognition.
Fig. 1. Accuracy obtained after removing words with frequency equal or lower than n. (a) Continuous scale. (b) Non-continuous scale.

Fig. 2. Number of words after removing those with frequency equal or lower than n.

Fig. 3. F1 values for identification as the corresponding language variety vs. others.

avg 0.675 ± 0.005      CL-max 0.496 ± 0.005   MX-prob 0.151 ± 0.005
MX-max 0.601 ± 0.005   CL-std 0.495 ± 0.007   ES-prob 0.130 ± 0.011
PE-max 0.600 ± 0.009   MX-std 0.493 ± 0.007   AR-prob 0.127 ± 0.006
ES-min 0.595 ± 0.033   CL-min 0.486 ± 0.013   AR-prop 0.116 ± 0.005
ES-avg 0.584 ± 0.004   AR-std 0.485 ± 0.005   MX-prop 0.113 ± 0.006
MX-avg 0.577 ± 0.008   PE-std 0.483 ± 0.012   PE-prop 0.112 ± 0.005
ES-max 0.564 ± 0.007   AR-min 0.463 ± 0.012   ES-prop 0.110 ± 0.007
AR-max 0.550 ± 0.007   CL-avg 0.455 ± 0.008   CL-prop 0.101 ± 0.005
MX-min 0.513 ± 0.027   PE-min 0.369 ± 0.019   CL-prob 0.087 ± 0.010

Fig. 4. Accuracy with different combinations of features.
Table 1. The same example in three varieties of Spanish (Argentina, Mexico and Spain).

Table 2. Set of features for each category (language variety) used in Equation 4.
Table 3. Number of posts, words and words per post (average and standard deviation) per language variety.

Language Variety   # Blogs/authors       # Words                   # Words per post (avg ± std)
                   Training   Test       Training    Test          Training     Test
AR - Argentina     450        200        1,408,103   590,583       371 ± 448    385 ± 849
CL - Chile         450        200        1,081,478   298,386       313 ± 465    225 ± 597
ES - Spain         450        200        1,376,478   620,778       360 ± 426    395 ± 765
MX - Mexico        450        200        1,697,091   618,502       437 ± 513    392 ± 894
PE - Peru          450        200        1,602,195   373,262       410 ± 466    257 ± 627
TOTAL              2,250      1,000      7,164,935   2,501,511     380 ± 466    334 ± 764
http://alt.qcri.org/LT4CloseLang/index.html
2 http://corporavm.uni-koeln.de/vardial/sharedtask.html
3 http://ttg.uni-saarland.de/lt4vardial2015/dsl.html
4 It is important to highlight the importance of this aspect from an evaluation perspective in an author profiling scenario. In fact, if texts from the same authors are both part of the training
11 http://ttg.uni-saarland.de/lt4vardial2015/dsl.html
12 http://pan.webis.de
13 http://www.clef-innitiative.org
The work of the first author was carried out in the framework of ECOPORTUNITY IPT-2012-1220-430000. The work of the last two authors was carried out in the framework of the SomEMBED MINECO TIN2015-71147-C2-1-P research project. This work has also been supported by the SomEMBED TIN2015-71147-C2-1-P MINECO research project and by the Generalitat Valenciana under the grant ALMAPATER (PrometeoII/2014/030).
Franco-Salvador, M., Rangel, F., Rosso, P., Taulé, M., Martí, M. A.: Language Variety Identification Using Distributed Representations of Words and Documents. In: Experimental IR Meets Multilinguality, Multimodality, and Interaction. Springer-Verlag, LNCS(9283), pp. 28-40

Goodman, J.: Classes for fast maximum entropy training. In: Proceedings of the Acoustics, Speech, and Signal Processing (ICASSP'01), vol. 1, pp. 561-564 (2001)

Gutmann, M. U., Hyvärinen, A.: Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. The Journal of Machine Learning Research, vol. 13, pp. 307-361 (2012)

Hinton, G. E., McClelland, J. L., Rumelhart, D. E.: Distributed representations. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1: Foundations. MIT Press, Cambridge, MA (1986)

Le, Q. V., Mikolov, T.: Distributed representations of sentences and documents. In: Proceedings of the 31st International Conference on Machine Learning (ICML'14), vol. 32 (2014)

Maier, W., Gómez-Rodríguez, C.: Language variety identification in Spanish tweets. In: Workshop on Language Technology for Closely Related Languages and Language Variants (EMNLP'14), pp. 25-35 (2014)

Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. In: Proceedings of the Workshop at the International Conference on Learning Representations (ICLR'13) (2013)

Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Advances in Neural Information Processing Systems, pp. 3111-3119 (2013)

Mnih, A., Teh, Y. W.: A fast and simple algorithm for training neural probabilistic language models. In: Proceedings of the 29th International Conference on Machine Learning (ICML'12), pp. 1751-1758 (2012)

Sadat, F., Kazemi, F., Farzindar, A.: Automatic identification of Arabic language varieties and dialects in social media. In: 1st International Workshop on Social Media Retrieval and Analysis (SoMeRa'14) (2014)

Salton, G., Buckley, C.: Term-weighting approaches in automatic text retrieval. Information Processing & Management, vol. 24(5), pp. 513-523 (1988)

Tan, L., Zampieri, M., Ljubešić, N., Tiedemann, J.: Merging comparable data sources for the discrimination of similar languages: The DSL corpus collection. In: 7th Workshop on Building and Using Comparable Corpora: Building Resources for Machine Translation Research (BUCC'14), pp. 6-10 (2014)

Zampieri, M., Gebrekidan-Gebre, B.: Automatic identification of language varieties: The case of Portuguese. In: Proceedings of the 11th Conference on Natural Language Processing (KONVENS'12), pp. 233-237 (2012)

Zampieri, M., Tan, L., Ljubešić, N., Tiedemann, J.: A report on the DSL shared task 2014. In: Proceedings of the First Workshop on Applying NLP Tools to Similar Languages, Varieties and Dialects (VarDial'14), pp. 58-67 (2014)
| [
"https://github.com/autoritas/RD-Lab/tree/master/data/"
] |
[
"DepAnn -An Annotation Tool for Dependency Treebanks",
"DepAnn -An Annotation Tool for Dependency Treebanks"
] | [
"Tuomo Kakkonen tuomo.kakkonen@cs.joensuu.fi \nDepartment of Computer Science\nUniversity of Joensuu\nFinland\n"
] | [
"Department of Computer Science\nUniversity of Joensuu\nFinland"
] | [] | DepAnn is an interactive annotation tool for dependency treebanks, providing both graphical and text-based annotation interfaces. The tool is aimed for semi-automatic creation of treebanks. It aids the manual inspection and correction of automatically created parses, making the annotation process faster and less error-prone. A novel feature of the tool is that it enables the user to view outputs from several parsers as the basis for creating the final tree to be saved to the treebank. DepAnn uses TIGER-XML, an XML-based general encoding format for both, representing the parser outputs and saving the annotated treebank. The tool includes an automatic consistency checker for sentence structures. In addition, the tool enables users to build structures manually, add comments on the annotations, modify the tagsets, and mark sentences for further revision. | null | [
"https://arxiv.org/pdf/cs/0610116v1.pdf"
] | 8,767,770 | cs/0610116 | 6702a819a819c9b3bd53a225e7c5f44c9614b176 |
DepAnn -An Annotation Tool for Dependency Treebanks
19 Oct 2006
Tuomo Kakkonen tuomo.kakkonen@cs.joensuu.fi
Department of Computer Science
University of Joensuu
Finland
DepAnn is an interactive annotation tool for dependency treebanks, providing both graphical and text-based annotation interfaces. The tool is aimed at the semi-automatic creation of treebanks. It aids the manual inspection and correction of automatically created parses, making the annotation process faster and less error-prone. A novel feature of the tool is that it enables the user to view outputs from several parsers as the basis for creating the final tree to be saved to the treebank. DepAnn uses TIGER-XML, an XML-based general encoding format, for both representing the parser outputs and saving the annotated treebank. The tool includes an automatic consistency checker for sentence structures. In addition, the tool enables users to build structures manually, add comments on the annotations, modify the tagsets, and mark sentences for further revision.
Introduction
Treebanks, collections of syntactically annotated sentences, are needed for developing and evaluating natural language processing (NLP) applications, as well as for research in empirical linguistics. The earliest treebanks, constructed in the 1970s, were annotated manually (Abeillé 2003). As treebank construction is labor-intensive, methods are needed for automating part of the work. The reason that treebanks are not constructed fully automatically is obviously the fact that there are no parsers of free text capable of producing error-free parses. In semi-automatic treebank building, the work of an annotator is transformed from that of a tree builder to that of a checker and corrector of automatically created structures. Constructing a treebank semi-automatically calls for a range of tools, such as a part-of-speech (POS) tagger, a syntactic parser and an annotation tool.
In recent years, there has been wide interest in dependency-based annotation of treebanks. Dependency grammar formalisms stem from the work of Tesniére (Tesniére 1959). Most often the motivation for basing a treebank format on dependency is the fact that the language for which the treebank is developed has a relatively free word order. In such languages, due to their rich morphology, there is more freedom in word order for expressing syntactic functions. In dependency-based grammars, only the lexical nodes are recognized, and the phrasal ones are omitted. The lexical nodes are linked with directed binary relations. The dependency structure of a sentence thus consists of a number of nodes equal to the number of words in the sentence, a root node and the relations (dependency links) between the nodes.
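As a concrete illustration of this node-and-link view, the following minimal Python sketch models a dependency-annotated sentence; the class and field names are illustrative and do not correspond to DepAnn's internal (Java) data structures.

```python
from dataclasses import dataclass, field

@dataclass
class Token:
    index: int        # position of the word in the sentence (1-based)
    form: str         # word form
    head: int         # index of the governing word; 0 marks the root
    relation: str     # label of the dependency link to the head

@dataclass
class DependencyTree:
    tokens: list = field(default_factory=list)

    def dependents(self, head_index):
        """All tokens directly governed by the given head."""
        return [t for t in self.tokens if t.head == head_index]

# "John sees Mary": one node per word; 'sees' is the root (head = 0)
sentence = DependencyTree([
    Token(1, "John", 2, "subj"),
    Token(2, "sees", 0, "main"),
    Token(3, "Mary", 2, "obj"),
])
```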
Although more collaboration has emerged between treebank projects in recent years, the main problem with current treebanks in regards to their use and distribution is the fact that instead of reusing existing annotation and encoding schemes, new ones have been developed. Furthermore, the schemes that have been developed are often designed from theory and even application-specific viewpoints, and consequently, undermine the possibility for reuse. In addition to the difficulties for reuse, creating a treebank-specific representation format requires developing a new set of tools for creating, maintaining and searching the treebank.
The main motivation for designing and implementing DepAnn (Dependency Annotator), an annotation tool for dependency treebanks, stems from the need to construct a treebank for Finnish. As Finnish is a language with a relatively free word order, a dependency-based annotation format is a straightforward choice as the basis for the annotation. Although DepAnn is customized to be used for creating the Finnish treebank, the choices made in the architecture and design of the system allow it to be modified to the needs of other treebank projects. Most importantly, DepAnn uses an XML-based abstract annotation format, TIGER-XML (Mengel and Lezius 2000), as both its input and output format.
This paper presents the main design principles and functionality of DepAnn. In addition, we describe how the system interacts with the other treebanking tools (POS taggers, morphological analyzers, and parsers). Section 2 shortly describes the principles of treebank construction. Section 3 presents the requirements defined for DepAnn based on an analysis of existing annotation tools, and describes the tool. Finally, in Section 4 we give concluding remarks and underline some future possibilities.
Background
Speed, consistency, and accuracy are the three key issues in treebank annotation. The most commonly used method for constructing a treebank is a combination of automatic and manual processing. Constructing a treebank, even with a semi-automatic method, is a labor-intensive effort. Efficient tools play a key role in lowering the costs of treebank development and enable larger, higher quality treebanks to be created. Both goals are crucial. The estimated costs of the Prague Dependency Treebank, the largest of the existing dependency treebanks, are USD 600,000 (?). A treebank has to be large enough to have any practical use, for example for grammar induction. The size of the existing dependency treebanks is quite limited, ranging from a few hundred to 90,000 sentences. Self-evidently, a treebank also has to be consistent and have a low error frequency to be useful.
A morphological analyzer and a parser should be applied in order to lower the burden on the annotators. The typical procedure is to use a parser that leaves at least part of the ambiguities unresolved and dependencies unspecified, and to let human annotators do the inspection and correction of the parses. Thus, an annotator corrects the POS and morphosyntactic tags, resolves the remaining ambiguities and adds or corrects any missing or erroneous dependencies. A crucial component in this type of semi-automatic treebank creation is the annotation tool. A well-designed and well-implemented tool can aid the work of annotators considerably. With an annotation tool, the user can browse, check, and correct the parser's output as well as create structures from scratch. In some of the existing tools the annotations are automatically checked for inconsistencies before saving them to the treebank. In addition, the user is able to add comments to the structures or mark them as doubtful.
Dependency treebanks have been built for several languages, e.g. Czech (?), English (Rambow et al. 2002), Danish (Bick 2003; Kromann 2003), Italian (Lesmo, Lombardo, and Bosco 2002), and Dutch (van der Beek, Bouma, Malouf, and van Noord 2002). The TIGER Treebank of German is an example of a treebank with both phrase structure and dependency annotations (Brants, Dipper, Hansen, Lezius, and Smith 2002). The current direction in the dependency vs. constituency discussion is towards integration and cooperation (Schneider 1998). While dependency grammars are superior in handling free word order, some elements of constituency grammars are better for handling certain phenomena (e.g. coordination), and constituency-based grammars also need dependency relations, at least for verb valency. Furthermore, dependency structures can be automatically converted into phrase structures (Xia and Palmer 2000) and vice versa (Daum, Foth, and Menzel 2004), although not always with 100% accuracy.
We started designing a treebank for Finnish by analyzing the methods and tools used by other dependency treebank projects. The producers of the dependency treebanks have in most cases aimed at creating a multipurpose resource for research on NLP systems and theoretical linguistics. Some, e.g. the Alpino Treebank of Dutch (van der Beek et al. 2002), are built for a specific purpose. Most of the dependency treebanks consist of newspaper text and are annotated on POS, morphological and syntactic levels. An interested reader is referred to (Kakkonen 2005) for further details on the analysis of dependency treebanks.
After a thorough study of existing annotation methods and tools (such as GRAPH (?), Abar-Hitz (Díaz de Ilarraza, Garmendia, and Oronoz 2004), Annotate (Plaehen and Brants 2000), DTAG (Kromann 2003), and the CDG SENtence annotaTOR (SENATOR) (White 2000)), it was found that none of the available annotation tools satisfied all our requirements. Some tools were not suitable for dependency annotation, some were not compatible with any common XML-based annotation format, the user interface was not considered suitable, or the tool did not have all the functions we required. In addition, to our knowledge there are no annotation tools available capable of showing or merging outputs from several parsers to aid the annotator's choices. Thus, the decision was made to design and implement an annotation tool with all the desired characteristics.
The annotation tool

Design principles
The analysis of existing annotation tools was crucial in defining the requirements for the system to be developed. The following key features were recognized:
• Support for an existing XML encoding scheme
Building a treebank is such a labor-intensive effort that promoting co-operation between treebank projects and reuse of formats and tools is an important and widely accepted goal in the treebanking community (e.g. (Ide and Romary 2003)). Using an existing encoding format will make the system reusable. In addition, existing tools supporting the same scheme can be used for browsing, manipulating and searching the annotated treebanks.

• Both textual and graphical display and manipulation of parse trees
For any annotation tool the capability to visualize the sentence structures is a necessity. In addition, the graphical view should preferably be interactive, so that the user can manipulate the structures. On the other hand, for some annotation tasks or for some users' needs a textual view of the structure may be more suitable.
• An interface to morphological analyzers and parsers for constructing the initial trees
In order to generate the initial trees for human inspection and modification, the annotation tool must have an interface to a morphological parser, a POS tagger and a syntactic parser. The tool should be able to use outputs from several tools simultaneously to guide the annotator's decisions.
• An inconsistency checker for both structures and encoding
The annotated sentences to be saved to the treebank should be checked against tagging inconsistencies. In addition to XML-based validation of encoding, the inconsistency checker should inform the annotator about several other types of mistakes, such as mismatching combinations of POS and morphological tags, missing main verb, and fragmented, incomplete parses.
• Menu-based tagging
In order to make the annotation process faster, setting the tags should be done by selecting the most suitable tag from a pre-defined set of tags, instead of requiring the annotator to type the tag label. In addition to being efficient, menu-based tagging lowers the number of errors, as there will be no errors caused by typos in the labels. On the other hand, keyboard shortcuts for selecting appropriate tags should be provided for more advanced users.

• A commenting tool
To ease later revisions, possibly performed by other annotators, the user should be able to add comments on the annotated structures. In addition, the user should be able to mark a sentence as ready or unfinished to make it easier to locate sentences needing further revision.
The foremost design principle, apart from making the annotation process faster and less error-prone, was that the tool must be reusable and modifiable. The system was designed in such a way that the modules for processing the treebank output and input are kept separate from the structure viewing and manipulation modules, thus making the tool easier to modify. Support for an existing encoding scheme is a crucial reusability feature of any treebanking software. The selection of the format was first narrowed down by the decision that the format should be XML-based, as XML offers a set of validation capabilities with which encoding inconsistencies can be checked automatically.
The aim of an abstract annotation model is to provide a general framework for linguistic annotation. Existing abstract annotation formats share the common goal of offering an intermediate level between the actual data (encoding scheme) and the conceptual level of annotation (annotation scheme). An advantage of such an approach is that it enables a common set of tools to be used for creating and manipulating treebanks in several formats. From the set of possible options, including e.g. XCES (Ide and Romary 2003), TIGER-XML (Mengel and Lezius 2000) was selected to be used in DepAnn. TIGER-XML is an exchange format for corpora and treebanks, providing an XML-based representation format which is general enough to represent diverse types of corpus and treebank annotations (Mengel and Lezius 2000). The format is based on the encoding of directed acyclic graphs (DAGs). Each DAG represents a sentence as terminal (i.e. word) and nonterminal (dependency) nodes. The syntactic categories, POS, lemma and other information are represented as attributes of the nodes. The edges encode labeled links between terminals and nonterminals.
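To illustrate the graph-based encoding idea, the following Python sketch serializes a dependency-annotated sentence into a TIGER-XML-like structure. The element and attribute names, and the convention of one nonterminal per dependency, are simplifications for illustration only; a real converter should follow the TIGER-XML specification and the dependency-encoding conventions of Kromann (2004).

```python
import xml.etree.ElementTree as ET

def sentence_to_tiger_like_xml(sent_id, tokens):
    """Serialize a dependency-annotated sentence into a TIGER-XML-style graph.

    `tokens` are (index, form, pos, head, relation) tuples. Element and
    attribute names are simplified for illustration and are not a verbatim
    copy of the TIGER-XML schema.
    """
    s = ET.Element("s", id=sent_id)
    graph = ET.SubElement(s, "graph")
    terminals = ET.SubElement(graph, "terminals")
    nonterminals = ET.SubElement(graph, "nonterminals")
    for idx, form, pos, head, rel in tokens:
        ET.SubElement(terminals, "t", id=f"{sent_id}_{idx}", word=form, pos=pos)
    for idx, form, pos, head, rel in tokens:
        if head == 0:
            graph.set("root", f"{sent_id}_{idx}")   # the root terminal
            continue
        # one nonterminal per dependency, with labelled edges to dependent and head
        nt = ET.SubElement(nonterminals, "nt", id=f"{sent_id}_dep{idx}", cat=rel)
        ET.SubElement(nt, "edge", idref=f"{sent_id}_{idx}", label=rel)
        ET.SubElement(nt, "edge", idref=f"{sent_id}_{head}", label="head")
    return ET.tostring(s, encoding="unicode")
```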
TIGER-XML has several desirable characteristics: First, it is flexible and extensible enough to accommodate different treebank annotation types, both dependency- and constituency-based. Second, it has been shown to be suitable for dependency annotation in several treebank projects (e.g. the TIGER Treebank (Brants et al. 2002), Arboretum (Bick 2003)). Third, there are explicit specifications available on how to encode dependency structures in the scheme (Kromann 2004). And finally, there exists a set of well-implemented tools supporting the format, such as the TIGERSearch viewing/query tool and the TIGERRegistry indexing tool (König, Wolfgang, and Voormann 2003), capable of transforming some well-known corpus and treebank formats, such as the SUSANNE corpus (Sampson 1995) and the Penn Treebank (Marcus, Santorini, and Marcinkiewicz 1993), into TIGER-XML.
As TIGER-XML is a general model of treebank encoding, it would be possible to show and manipulate constituency structures with DepAnn. However, the decision was made that the tool was not going to be designed for both constituency and dependency structures, out of a suspicion that too general a design would hamper the efficiency of dependency annotation. Thus, the visualization functions and the user interface are tuned for manipulating dependency structures.
Main functionality
In the DepAnn tool, the structure to be annotated is represented to the user in textual and graphical formats in order to offer the best option for each user's needs. The textual and graphical views are fully integrated, so that changes applied in the graphical view immediately affect the textual one and vice versa. The user interface is customizable to suit the task and the annotator's preferences. The user can add comments on annotations as reminders of problematic parts of the sentence structures. Completed trees can be marked as ready, indicating that no further inspection and modification are needed.
Outputs of several parsers and POS taggers can be applied in parallel, offering the annotator the possibility to compare the outputs in order to guide the annotation decisions. To be able to use the output of a parser in DepAnn, a converter must be implemented to transform the output from the parser- or tagger-specific format into the format used by DepAnn. TIGER-XML (Mengel and Lezius 2000) is used as the input format for the structures obtained from the automatic tools, as well as the output format for the annotated treebank. For internal data representation the TIGER-XML structures are transformed into Java objects. Figure 1.1 illustrates the input and output processes of DepAnn.
The annotation process using DepAnn starts with processing the treebank texts with one or more parsers and taggers. Next, a converter is applied to the outputs in order to transform the tool-specific format into TIGER-XML. After the conversion, the annotator can view the parsed structures and build the annotated structure to be added to the treebank. The user can select the parser output to be used for creating the initial trees. The main groups of functions are indicated in Figure 1.2 by boxes A...E. The text field in the area bordered with box A shows the sentence being annotated in raw text format. Area B is a toolbar with controls for treebank browsing (buttons for showing the next and the previous sentence and a slidebar for browsing), checking and saving the sentence, and modifying the tag sets. In C, the user can graphically manipulate the structure by changing the values on the nodes representing the words and dependency links and by removing, adding and rerouting the links between the nodes. Area D consists of the revision functions. The user can mark the sentence as ready, indicating that further revision is not needed. In addition, the user can use the comment field to write notes concerning the sentence structure. Box E frames the tables for text-based structure manipulation and viewing.
The parser and tagger outputs for aiding the annotation decisions are shown in a separate resizable, customizable dialog. For example, in a computer system with multiple monitors, the dialog can be placed on a separate desktop. In the current version, the user can select which parser's output is used as the initial tree for correction and modification.
We are working on an extension to the system in which the initial trees would be created by semi-automatically combining the parsers' and taggers' outputs with the aid of the annotator.
When the user decides to stop editing a sentence, an automatic consistency check is performed to validate the sentence structure, the annotation, and the encoding. First, a series of checks is run to verify that the sentence has a main verb and a root, that all the words have word form and lemma information and morphosyntactic tags, that the sentence is not fragmented, etc. Second, if the first series of checks was passed, the sentence is transformed into TIGER-XML and validated against the XML schema to find any errors in the encoding. The problems found are indicated to the user. The user can select which checks are run by modifying the system set-up.
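The kind of structural checks described above can be sketched as follows. DepAnn itself is implemented in Java; this Python fragment is only an illustration, and the token fields and the specific rules (e.g. treating a verb root as the main verb) are assumptions rather than the tool's actual logic.

```python
def check_sentence(tokens):
    """Return a list of problems found in an annotated sentence.

    `tokens` are dicts with assumed keys: form, lemma, pos, morph, head.
    These structural checks only mirror the ones described above; the real
    tool additionally validates the generated TIGER-XML against its schema.
    """
    problems = []
    roots = [t for t in tokens if t["head"] == 0]
    if not roots:
        problems.append("no root node")
    elif len(roots) > 1:
        problems.append("fragmented parse: more than one root")
    if roots and not any(t["pos"].startswith("V") for t in roots):
        problems.append("no main verb at the root")
    for i, t in enumerate(tokens, start=1):
        for key in ("form", "lemma", "pos", "morph"):
            if not t.get(key):
                problems.append(f"token {i}: missing {key}")
        if t["head"] != 0 and not (1 <= t["head"] <= len(tokens)):
            problems.append(f"token {i}: head index out of range")
    return problems
```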
Implementation details
The annotation tool is implemented in Java. As Java is platform-independent, the system can be used in any environment for which Java is available. The system consists of three main components: the interface to parsers and taggers, the annotation tool itself, and the output module. Two freely available open source packages, OpenJGraph (Salvo 2006) and TIGER API (Demir et al. 2006), were used for developing the system, although both had to be modified considerably to be suitable for use as a part of DepAnn. TIGER API, a Java API for TIGER-XML, is used for input and output processing. The graphical annotation manipulation functionality was built on top of OpenJGraph. The annotation tool uses Java Database Connectivity (JDBC) for storing the outputs from the parsing and tagging tools, as well as the user comments and information on ready sentences. Thus, the MySQL database currently in use can be replaced by any other JDBC-compatible database.
Conclusion
The semi-automatic annotation tool for dependency structures discussed in this paper provides graphical and text-based annotation functions, the possibility to use outputs from several parsers to aid the annotation decisions, tools for commenting on the annotated structures, automatic consistency checking, and support for the TIGER-XML format. In its first application, DepAnn will be used for creating a treebank for Finnish aimed at the evaluation of syntactic parsers. Outputs from two parsers/morphological analyzers, the Functional Dependency Grammar parser (FI-FDG) (Tapanainen and Järvinen 1997) and the Constraint Grammar parser (FINCG) (Karlsson 1990), are transformed into TIGER-XML and presented to the annotator as the basis for creating the correct structure. The tool is implemented in such a way that it is adjustable to other treebank projects' needs. As the annotation format is based on TIGER-XML, the tool is not restricted to a particular set of POS, morphological or dependency tags. The modules for processing the treebank output and input are separate from the graphical and textual annotation modules, so the tool could be modified to use any other annotation format. DepAnn will be made publicly available as an open source distribution.
As mentioned above, the reuse of tools and formats is one of the major issues in treebanking. Thus, a few words on the development costs of the annotation tool are in order. The work was conducted by a researcher with a degree in Software Engineering and a few years of practical experience in programming and software design. No exact data was recorded, but the amount of work to design and implement the system to its current state is around half a man-year. The work was considerably eased by using open source APIs for treebank manipulation and graph visualization. These observations underline the importance of reusing existing annotation schemes and software components for treebank development.
As discussed earlier, an improvement to the system that we are currently working on is the semi-automatic creation of the initial trees. The algorithm would automatically combine as many words and dependency links of the taggers' and the parsers' outputs as possible, and ask the annotator to make decisions on the rest. Such a method would improve the quality of the initial trees, thus lowering the number of modifications needed to arrive at the correct structure. Other future enhancements to the system could include even more strict and detailed checking algorithms for the annotated structures and an improved interface between DepAnn and the parsers, which would allow the annotator to interact with the parsers in the case of problematic sentences. This approach has been successfully applied by some annotation tools, such as Annotate (Plaehen and Brants 2000) and the lexical analysis and constituency marking tools of the Alpino Treebank (van der Beek, Bouma, Malouf, and van Noord 2002). Often several annotators work on the same sentences in order to ensure the consistency of the treebank. In such cases, it would be helpful if the tool allowed managing multiple annotations and performing inter-annotator agreement checks. Furthermore, the memory management of the tool could be improved in order to make it more efficient when working with large treebanks with tens of thousands of sentences.
Figure 1.1: The inputs and outputs of the tool.

Figure 1.2 illustrates the main frame of DepAnn's user interface.

Figure 1.2: The main frame of the DepAnn tool.
Acknowledgements

The author would like to thank the two anonymous reviewers for helpful comments on this paper and for some interesting suggestions for future work. The research reported in this paper has been supported by the European Union under a Marie Curie Host Fellowship for Early Stage Researcher Training at MULTILINGUA, University of Bergen, Norway, MirrorWolf project funded by the National Technology Agency of Finland (TEKES), and Automated Assessment Technologies for Free Text and Programming Assignments project funded by the Academy of Finland. The work was partly conducted while the author was working at the Human Language Technology Group at the Council for Scientific and Industrial Research (CSIR), Pretoria, South Africa and at the Faculty of Philosophy, University of Split, Croatia.
Abeillé, A. (2003). Introduction. In A. Abeillé (Ed.), Treebanks: Building and Using Syntactically Annotated Corpora. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Bick, E. (2003). Arboretum, a hybrid treebank for Danish. In Proceedings of the 2nd Workshop on Treebanks and Linguistic Theories, Växjö, Sweden.

Brants, S., Dipper, S., Hansen, S., Lezius, W., and Smith, G. (2002). The TIGER Treebank. In Proceedings of the Workshop on Treebanks and Linguistic Theories, Sozopol, Bulgaria.

Daum, M., Foth, K., and Menzel, W. (2004). Automatic transformation of phrase treebanks to dependency trees. In Proceedings of the 4th International Conference on Language Resources and Evaluation, Lisbon, Portugal.

Demir, O., Keffer, H., Němčik, V., and Poggel, S. B. (2006). TIGER API 1.8 - an interface to the TIGER corpus. http://www.tigerapi.org/ (Accessed April 21st, 2006).

Díaz de Ilarraza, A., Garmendia, A., and Oronoz, M. (2004). Abar-Hitz: An annotation tool for the Basque dependency treebank. In Proceedings of the 4th International Conference on Language Resources and Evaluation, Lisbon, Portugal.

Ide, N. and Romary, L. (2003). Encoding syntactic annotation. In A. Abeillé (Ed.), Treebanks: Building and Using Syntactically Annotated Corpora. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Kakkonen, T. (2005). Dependency treebanks: Methods, annotation schemes and tools. In Proceedings of the 15th Nordic Conference of Computational Linguistics, Joensuu, Finland, pp. 94-104.

Karlsson, F. (1990). Constraint grammar as a framework for parsing running text. In Proceedings of the 13th Conference on Computational Linguistics - Volume 3, Helsinki, Finland.

König, E., Wolfgang, L., and Voormann, H. (2003). TIGERSearch 2.1 user's manual. http://www.ims.uni-stuttgart.de/projekte/TIGER/TIGERSearch (Accessed April 21st, 2006).

Kromann, M. T. (2003). The Danish dependency treebank and the DTAG treebank tool. In Proceedings of the 2nd Workshop on Treebanks and Linguistic Theories, Växjö, Sweden.

Kromann, M. T. (2004). Nordic Treebank Network TIGER-XML: Proposals for extensions and conventions in TIGER-XML within the Nordic Treebank Network, September 1, 2004. http://www.id.cbs.dk/~mtk/ntn/tiger-xmlT.html (Accessed April 21st, 2006).

Lesmo, L., Lombardo, V., and Bosco, C. (2002). Treebank development: the TUT approach. In Proceedings of the International Conference on Natural Language Processing, Mumbai, India.

Marcus, M., Santorini, B., and Marcinkiewicz, M. A. (1993). Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics 19(2), 313-330.

Mengel, A. and Lezius, W. (2000). An XML-based representation format for syntactically annotated corpora. In Proceedings of the International Conference on Language Resources and Evaluation, Athens, Greece, pp. 121-126.

Plaehen, O. and Brants, T. (2000). Annotate - an efficient interactive annotation tool. In Proceedings of the 6th Applied Natural Language Processing Conference, Seattle, Washington, USA.

Rambow, O., Creswell, C., Szekely, R., Taber, H., and Walker, M. (2002). A dependency treebank for English. In Proceedings of the 3rd International Conference on Language Resources and Evaluation, Las Palmas, Gran Canaria, Spain.

Salvo, J. M. J. (2006). OpenJGraph - Java graph and graph drawing project. http://openjgraph.sourceforge.net/ (Accessed January 18th, 2006).

Sampson, G. (1995). English for the Computer: The SUSANNE Corpus and Analytic Scheme. Oxford, UK: Oxford University Press.

Schneider, G. (1998). A Linguistic Comparison of Constituency, Dependency and Link Grammar. Licentiate Thesis, University of Zürich, Zürich, Switzerland.

Tapanainen, P. and Järvinen, T. (1997). A non-projective dependency parser. In Proceedings of the 5th Conference on Applied Natural Language Processing, Washington, D.C., USA.

Tesniére, L. (1959). Éléments de Syntaxe Structurale. Paris, France: Klincksiek.

van der Beek, L., Bouma, G., Malouf, R., and van Noord, G. (2002). The Alpino dependency treebank. In M. Theune, A. Nijholt, and H. Hondorp (Eds.), Computational Linguistics in the Netherlands 2001. Selected Papers from the Twelfth CLIN Meeting. Amsterdam, The Netherlands: Rodopi.

White, C. (2000). Rapid Grammar Development and Parsing: Constraint Dependency Grammar with Abstract Role Values. PhD Thesis, Purdue University, West Lafayette, Indiana, USA.

Xia, F. and Palmer, M. (2000). Converting dependency structures to phrase structures. In Proceedings of the First International Conference on Human Language Technology Research, San Diego, California, USA.
| [] |
[
"Scaling laws in human speech, decreasing emergence of new words and a generalized model",
"Scaling laws in human speech, decreasing emergence of new words and a generalized model"
] | [
"Ruokuang Lin \nSchool of Electronic Science and Engineering\nInstitute for Biomedical Electronics Engineering\nMinistry of Education Key Laboratory of Modern Acoustics\nNanjing University\n210093NanjingChina\n",
"Chunhua Bian \nSchool of Electronic Science and Engineering\nInstitute for Biomedical Electronics Engineering\nMinistry of Education Key Laboratory of Modern Acoustics\nNanjing University\n210093NanjingChina\n\nDepartment of Physics\nBoston University\n02215BostonMassachusettsUSA\n\nDepartment of Neurology\nBeth Israel Deaconess Medical Center\nHarvard Medical School\n02215BostonMassachusettsUSA\n",
"Qianli D Y Ma \nDepartment of Physics\nBoston University\n02215BostonMassachusettsUSA\n\nCollege of Geographic and Biologic Information\nNanjing University of Posts and Telecommunications\n210003NanjingChina\n"
] | [
"School of Electronic Science and Engineering\nInstitute for Biomedical Electronics Engineering\nMinistry of Education Key Laboratory of Modern Acoustics\nNanjing University\n210093NanjingChina",
"School of Electronic Science and Engineering\nInstitute for Biomedical Electronics Engineering\nMinistry of Education Key Laboratory of Modern Acoustics\nNanjing University\n210093NanjingChina",
"Department of Physics\nBoston University\n02215BostonMassachusettsUSA",
"Department of Neurology\nBeth Israel Deaconess Medical Center\nHarvard Medical School\n02215BostonMassachusettsUSA",
"Department of Physics\nBoston University\n02215BostonMassachusettsUSA",
"College of Geographic and Biologic Information\nNanjing University of Posts and Telecommunications\n210003NanjingChina"
] | [] | Human language, as a typical complex system, its organization and evolution is an attractive topic for both physical and cultural researchers. In this paper, we present the first exhaustive analysis of the text organization of human speech. Two important results are that: (i) the construction and organization of spoken language can be characterized as Zipf's law and Heaps' law, as observed in written texts; (ii) word frequency vs. rank distribution and the growth of distinct words with the increase of text length shows significant differences between book and speech. In speech word frequency distribution are more concentrated on higher frequency words, and the emergence of new words decreases much rapidly when the content length grows. Based on these observations, a new generalized model is proposed to explain these complex dynamical behaviors and the differences between speech and book.Numerous statistic studies have been done to uncover the dynamical universal laws in complex system in physical, biological and social areas, such as the reaction dynamics within cells, financial market fluctuations, income distribution, biological species, word frequency, scientific publication, city size, etc.[1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19]. Zipf's law and Heaps' law are two typical representatives, the coexistence of which have been observed in various systems[20,21]. Zipf finds a certain scaling law in the rank of the word frequencies. Heaps' law reveals another aspect of scaling in natural language processing, according to which the vocabulary size grows in a sub-linear function with document size. It is also found that Zipf's law and Heaps' law holds for different languages with different characterization exponents which means more complicated statistical features[22][23][24][25][26][27]. Moreover, similar results have been recently reported for the corpus of web texts, including collaborative tagging, social annotation, internet search result and computer programs[28][29][30][31][32], which indicates universality of Zipf's law and Heaps' law beyond natural language systems.In this paper, we focus on the word frequency in spoken language. Using the speech transcriptions from the Speech British National Corpus on daily conversations of various subjects and ten classic books for comparison, we show (i) the word frequency distribution vs. ranking obeys the Zipf's law and growing of distinct words obeys Heaps' law; (ii) the coefficients of Zipf's and Heaps' laws have significant differences between written texts and speech transcriptions; and (iii) on the basis of the above study, a generalized model is proposed to simulate the growing dynamics and construction mechanism of spoken and written languages and the difference between speech and written texts. Empirical observations and model simulations agree well with each other.RESULTSExperiments. Two classes of data sets were analyzed in this article: (i) Ten speech transcriptions of the Speech British National Corpus [33] from Phonetics Laboratory, and (ii) ten classic books written by different authors in different era. The transcriptions of daily conversation from BNC database are selected to have comparable length as the books. As shown inTable 1, the total lengths of the text are 27341 to 268843 for books, and 25281 to | null | [
"https://arxiv.org/pdf/1412.4846v2.pdf"
] | 17,231,979 | 1412.4846 | 72b001045c741392414c57935b50cc08c1173dfe |
Scaling laws in human speech, decreasing emergence of new words and a generalized model
7 Jan 2015
Ruokuang Lin
School of Electronic Science and Engineering
Institute for Biomedical Electronics Engineering
Ministry of Education Key Laboratory of Modern Acoustics
Nanjing University
210093NanjingChina
Chunhua Bian
School of Electronic Science and Engineering
Institute for Biomedical Electronics Engineering
Ministry of Education Key Laboratory of Modern Acoustics
Nanjing University
210093NanjingChina
Department of Physics
Boston University
02215BostonMassachusettsUSA
Department of Neurology
Beth Israel Deaconess Medical Center
Harvard Medical School
02215BostonMassachusettsUSA
Qianli D Y Ma
Department of Physics
Boston University
02215BostonMassachusettsUSA
College of Geographic and Biologic Information
Nanjing University of Posts and Telecommunications
210003NanjingChina
Keywords: Speech transcription, Zipf's law, Heaps' law, preferential attachment, complex system
Human language is a typical complex system, and its organization and evolution is an attractive topic for both physical and cultural researchers. In this paper, we present the first exhaustive analysis of the text organization of human speech. Two important results are that: (i) the construction and organization of spoken language can be characterized by Zipf's law and Heaps' law, as observed in written texts; (ii) the word frequency vs. rank distribution and the growth of distinct words with increasing text length show significant differences between books and speech. In speech, the word frequency distribution is more concentrated on higher-frequency words, and the emergence of new words decreases much more rapidly as the content length grows. Based on these observations, a new generalized model is proposed to explain these complex dynamical behaviors and the differences between speech and books.

Numerous statistical studies have been done to uncover the dynamical universal laws in complex systems in physical, biological and social areas, such as the reaction dynamics within cells, financial market fluctuations, income distribution, biological species, word frequency, scientific publication, city size, etc. [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19]. Zipf's law and Heaps' law are two typical representatives, the coexistence of which has been observed in various systems [20,21]. Zipf found a certain scaling law in the rank of the word frequencies. Heaps' law reveals another aspect of scaling in natural language processing, according to which the vocabulary size grows as a sub-linear function of document size. It has also been found that Zipf's law and Heaps' law hold for different languages with different characteristic exponents, which implies more complicated statistical features [22][23][24][25][26][27]. Moreover, similar results have recently been reported for corpora of web texts, including collaborative tagging, social annotation, internet search results and computer programs [28][29][30][31][32], which indicates the universality of Zipf's law and Heaps' law beyond natural language systems.

In this paper, we focus on the word frequency in spoken language. Using the speech transcriptions from the Speech British National Corpus on daily conversations of various subjects and ten classic books for comparison, we show that (i) the word frequency distribution vs. ranking obeys Zipf's law and the growth of distinct words obeys Heaps' law; (ii) the coefficients of Zipf's and Heaps' laws show significant differences between written texts and speech transcriptions; and (iii) on the basis of the above study, a generalized model is proposed to simulate the growing dynamics and construction mechanism of spoken and written languages and the difference between speech and written texts. Empirical observations and model simulations agree well with each other.

RESULTS

Experiments. Two classes of data sets were analyzed in this article: (i) ten speech transcriptions of the Speech British National Corpus [33] from the Phonetics Laboratory, and (ii) ten classic books written by different authors in different eras. The transcriptions of daily conversation from the BNC database were selected to have comparable lengths to the books. As shown in Table 1, the total lengths of the text are 27341 to 268843 for books, and 25281 to
124802 for speeches. There are relatively fewer distinct words used in speeches, ranging from 2158 to 5815 in the selected data sets, compared with 2572 to 29020 in books (see details about the data in Methods).
Firstly, we analyzed the probability distribution of word frequency for each book and speech transcription. The probability distribution of word frequency can be described as a power law between the word frequency k and its probability density P(k),
P(k) ∼ k^(−β),    (1)
where β is the power-law scaling exponent. Fig. 1 shows the probability distribution of word frequency P(k) for the two sets of data; the result for books is shown in Fig. 1(a) and for speech in Fig. 1(b), and both follow a power law. The goodness of fit is 0.9952 and 0.9944 respectively, and the fitting region is k = 2 ∼ 100 (see details of the goodness of fit in Methods). All the validation results are listed in Table S1. We found a significant difference in the scaling exponents between books and speeches (book: 1.77 ± 0.11, speech: 1.67 ± 0.05, p < 0.05).
Zipf found a power-law relation between the word frequency Z(r) and its corresponding rank r, as

Z(r) ∼ r^(−α).    (2)

The word frequency distributions can be divided into two parts. In the first part, which corresponds to the high-frequency words, the relative word frequency Z(r) of books is less than that of speech. In the second part, which corresponds to the low-frequency words, the Z(r) of books is larger than that of speech. The decay of the second part for speech and books both follow a power law, with goodness of fit 0.9985 and 0.9992 respectively, and we found a significant difference in the decay exponents α between books and speeches (book: 1.12 ± 0.08, speech: 1.39 ± 0.06, p < 0.01, see Table S1).

The number of distinct words N(t) grows with the text length t following Heaps' law,

N(t) ∼ t^λ,    (3)

and can also be divided into two parts. In the first part, N(t) of the speeches is very close to that of the books, corresponding to the linearly increasing region. In the second part, the sub-linearly increasing region, we can see that N(t) of the books is bigger than that of the speeches, which indicates that the growth speed of new words in speech is lower than in books.
We calculated the slope λ of the second part of the t ∼ N(t) curve for the ten books and ten speech transcriptions as a local linear approximate fit in log-log scale, and the slope values for each subject are listed in Table 1. A t-test shows that there is a significant difference between the slopes of books and speeches (book: 0.73 ± 0.04, speech: 0.63 ± 0.03, p < 0.01). Both follow a power law, with goodness of fit 0.9988 and 0.9980 respectively, and the fitting region is t = 100 ∼ 20000.
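For reference, the three empirical curves discussed above can be computed from a tokenized text as in the short Python sketch below. The normalization of P(k) as the fraction of distinct words occurring exactly k times is one common convention and an assumption here; the paper's exact estimator may differ.

```python
from collections import Counter

def text_statistics(tokens):
    """Compute the three empirical curves from a tokenized text.

    Returns (P, Z, N): P maps a frequency k to the fraction of distinct words
    occurring exactly k times, Z is the frequency of the rank-r word
    (r = 1, 2, ... in decreasing order), and N[t-1] is the number of distinct
    words among the first t tokens (Heaps plot).
    """
    counts = Counter(tokens)
    n_types = len(counts)
    freq_of_freq = Counter(counts.values())
    P = {k: c / n_types for k, c in sorted(freq_of_freq.items())}
    Z = sorted(counts.values(), reverse=True)
    seen, N = set(), []
    for w in tokens:
        seen.add(w)
        N.append(len(seen))
    return P, Z, N
```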
TABLE I. The basic statistics of the ten speech transcriptions and the ten books. T is the total length of the text measured in number of words, and N_t is the total number of distinct words. λ is the slope of t ∼ N(t) for the ten speech transcriptions and the ten books (local linear approximate fitting in log-log scale, fitting region t = 100 ∼ 20000).

Model. We test whether the rich-get-richer mechanism, also named the preferential attachment mechanism [34][35][36][37], works for the spoken-language generating process. We denote by φ(k) the average probability that a word that has appeared k times will appear again (see Methods for how to measure φ(k)). As shown in Fig. 4, φ(k) for all the books and speeches increases proportionally with k, indicating a rich-get-richer effect like the preferential attachment in evolving scale-free networks. We propose a generalized model, as an extension of the Yule-Simon model [38][39][40][41][42][43], to further simulate and investigate the empirical observations. The process of language construction can be modeled as follows. At every time step, a word is appended to the text, either by generating a new word or by selecting one from the text already generated. We propose that the growing dynamics revealed by Heaps' law can be captured by the probability p of new word generation, which gradually changes with the text length. The formula of the probability p can be determined according to the specific application. In this investigation of language construction, we set it as

p = k_0 / t^(k_t).    (4)
With probability 1 − p, one word is copied from the text already generated, where the word to be chosen is determined by the rich-get-richer mechanism. Let n(i, t) be the number of appearances of the ith word at time step t; then at the next step t + 1, the ith word will be selected with probability
p(i, t + 1) = (1 − p) · n(i, t)^(k_p) / Σ_i n(i, t)^(k_p).    (5)
The parameter k_p modulates the strength of the preferential attachment. In this proposed model, the probability p of new word generation depends on the text length t and decreases monotonically as the text grows, inspired by the empirical observation of t ∼ N(t). Words are reused according to the rich-get-richer rule, which gives rise to Zipf's law. Fig. 5 reports the simulation results for one book and one speech transcription using the proposed model. All three scaling properties can be very well captured by the model.
For all the ten books and ten speech transcriptions, the parameters are as follows: k 0 , book:
2.93 ± 1.01, speech: 3.26 ± 0.72, p = 0.5; k t , book: 0.33 ± 0.05, speech: 0.41 ± 0.04, p < 0.01; k p , book: 1.10 ± 0.02, speech: 1.06 ± 0.01, p < 0.01 (see Table S2 and Fig. S1).
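As an illustration of the generative process just described, the sketch below (not the authors' implementation; the cap on p for small t and the random seed are our own choices, and Eq. (4) is used in the reconstructed form p = k_0 / t^{k_t}) simulates Eqs. (4) and (5) with the Book 01 parameters quoted for Fig. 5.

```python
# Minimal sketch of the extended Yule-Simon model: new words with probability p(t),
# otherwise repeat an existing word with probability proportional to n(i, t)^kp.
import numpy as np

def simulate_text(length, k0, kt, kp, seed=0):
    rng = np.random.default_rng(seed)
    counts = [1]                                   # occurrence counts n(i, t); start with one word
    text = [0]
    for t in range(2, length + 1):
        p_new = min(1.0, k0 / t ** kt)             # Eq. (4), capped at 1 for very small t
        if rng.random() < p_new:
            counts.append(1)                       # introduce a brand-new word
            text.append(len(counts) - 1)
        else:
            weights = np.asarray(counts, dtype=float) ** kp        # Eq. (5), rich-get-richer
            i = rng.choice(len(counts), p=weights / weights.sum())
            counts[i] += 1
            text.append(i)
    return text                                    # sequence of word indices

# Example run with the Book 01 parameters reported in the caption of Fig. 5.
tokens = simulate_text(length=20000, k0=2.34, kt=0.29, kp=1.14)
print("distinct words:", len(set(tokens)), "out of", len(tokens), "tokens")
```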
DISCUSSION
Previous statistical analyses about human languages mostly concentrated on written texts
where language consists of a huge number of words. In contrast, spoken language, which consists of fewer words, has received less attention. So the speaker always chooses a simpler way to use words.
The regularities reported here, from the point of view of the well-known Zipf's and Heaps' laws, can be reproduced by considering a nonlinear decrease in the emergence of new words with the length of the generated text, which can be formulated according to the specific growing dynamics under study. In addition, the differences we observed in the word frequency distribution and Zipf's law between speech and books indicate different strengths of preferential attachment, which is also considered in the presented model. Simulation results confirm that the scaling properties of the complex dynamics of language construction and organization can be captured very well by the proposed model, and that the differences between spoken and written language can also be accounted for by different parameter settings. We further hypothesize that the proposed model of construction and organization in complex systems with a preferential attachment mechanism can also be applied to other complex systems in physical, biological and social areas.

Goodness of Linear Fit Statistics in Log-log Scale

The original data P(k), Z(r) and N(t) are calculated in linear intervals; in log-log scale a lot of data cluster at large scales (see Fig. S1 for the data clustering at large scales). If the fitting process were performed on the original data, the data points at large scales would dominate the cost function and the measure of goodness of fit, causing a bias towards large scales when fitting data in log-log scale. Thus, before the fitting process, we resample the original data to be equally distributed in log scale, as
y'(i) = ( Σ_{j=b^i}^{b^{i+1}} y_orig(j) ) / (b^{i+1} − b^i),    i = 1, 2, ..., ⌊log_b N⌋,    (6)
where N is the number of the original data points, and b can be any real number greater than 1.
Then, we fit the resampled data in a least-squares sense in log-log scale, so as to minimize the following cost function
SSE = Σ_{i=1}^{n} [log(y'(i)) − log(y'_fit(i))]^2,
and the statistical measure of the goodness of fit in log-log scale is defined following the definition of R-square as
R^2 = 1 − Σ_{i=1}^{n} [log(y'(i)) − log(y'_fit(i))]^2 / Σ_{i=1}^{n} [log(y'(i)) − E(log(y'(i)))]^2,
where E(•) denotes the calculation of mean value, and n is the number of the resampled data.
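A small sketch of the resampling and goodness-of-fit procedure may help; it is not the authors' code, and the base b, the synthetic power law and the noise level are arbitrary illustrative choices.

```python
# Minimal sketch: log-scale resampling (Eq. (6)) and R^2 computed in log-log scale.
import numpy as np

def log_resample(y_orig, b=2.0):
    """Average y_orig over bins [b^i, b^(i+1)) so points are equally spaced in log scale."""
    n = len(y_orig)
    y_res, i = [], 1
    while b ** (i + 1) <= n:
        lo, hi = int(b ** i), int(b ** (i + 1))
        y_res.append(y_orig[lo:hi].mean())
        i += 1
    return np.array(y_res)

def loglog_r_square(y, y_fit):
    """R^2 on the log-transformed (resampled) data, following the definition above."""
    ly, lf = np.log(y), np.log(y_fit)
    sse = np.sum((ly - lf) ** 2)
    sst = np.sum((ly - ly.mean()) ** 2)
    return 1.0 - sse / sst

# Example: noisy power law y = x^{-1.2}, resampled and then fitted in log-log scale.
x = np.arange(1, 10001, dtype=float)
y = x ** -1.2 * np.exp(np.random.default_rng(1).normal(0, 0.1, x.size))
x_r, y_r = log_resample(x), log_resample(y)
slope, intercept = np.polyfit(np.log(x_r), np.log(y_r), 1)
print("fitted exponent:", slope,
      "R^2:", loglog_r_square(y_r, np.exp(intercept) * x_r ** slope))
```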
Preferential attachment. For each speech or book text, we divide it into two parts: Part I contains a fraction ρ of the words and Part II contains the remaining fraction 1 − ρ. For each word i in Part II, if i appeared k times in Part I, we add one to φ(k), whose initial value is zero. Accordingly, φ(k) is the number of words in Part II that appeared k times in Part I. Finally, we divide φ(k) by the number of distinct words that appeared k times in Part I. In this paper, we show the results for ρ = 0.5.
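The measurement of φ(k) can be sketched as follows (again, not the authors' code; we assume that "words in Part II" refers to distinct words, and the toy token list is purely illustrative).

```python
# Minimal sketch: measure the preferential-attachment curve phi(k) with rho = 0.5.
from collections import Counter

def phi_of_k(tokens, rho=0.5):
    split = int(len(tokens) * rho)
    part1, part2 = tokens[:split], tokens[split:]
    counts1 = Counter(part1)                       # k for each word in Part I
    reappear = Counter()                           # numerator of phi(k)
    for w in set(part2):                           # distinct words of Part II (assumption)
        k = counts1.get(w, 0)
        if k > 0:
            reappear[k] += 1
    distinct_with_k = Counter(counts1.values())    # number of distinct words with count k
    return {k: reappear[k] / distinct_with_k[k] for k in reappear}

# Example usage on any tokenized text (here just a toy list of words).
phi = phi_of_k("a b a c a b d a e a b c".split())
print(phi)
```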
FIG. 1. The probability distribution of word frequency P(k) vs. word frequency k in log-log scale for (a) books and (b) speech transcriptions. The fitting regions are k = 2 ∼ 100.
Fig. 2(a) presents the plot of the word frequency distribution Z(r) for one book and one speech transcription.
FIG. 2. (a) The Zipf plot of word frequency Z(r) for one book and one speech transcription, with fitting region r = 60 ∼ 1000. (b) Comparison of the growth of the number of distinct words N(t) versus the text length t between one book and one speech transcription, fitting region t = 100 ∼ 20000.
Fig. 2(b) reports the growth of the number of distinct words N(t) versus the text length t. Both books and speeches follow Heaps' law, N(t) ∼ t^{λ}.
Fig. 3 summarizes the mean and standard deviation of the scaling exponents β, α and λ for books and speeches.
FIG. 3. The mean and standard deviation of the scaling exponents for (a) the probability distribution of word frequency, (b) Zipf's law and (c) Heaps' law. Significant differences were found between speech and books.
FIG. 4. φ(k) versus k in log-log scale for the representative (a) book and (b) speech; the goodness of fit is 0.9477 and 0.9575 respectively.
The empirical results of the comparison between books and conversations indicate that: i) the speech transcriptions also obey Zipf's law and Heaps' law, but with different exponents compared with books; ii) as the content length of speech grows, the emergence of new words does not increase as much as in books, and Heaps' law also deviates from linear behavior; iii) in speech, the usage of words is much more concentrated on some words, which leads to a larger probability of high-frequency-rank words than in books. We can further explain the possible reasons. i) Book authors can create new words according to their own written style. New words may result from new techniques, new biological species, or new names. However, we generally seldom give birth to a new word during speech.
FIG. 5. (a), (b), (c) are the simulation results and the real data for books, and (d), (e), (f) are the simulation results and the real data for speech. Modelling parameters for Book 01 are set as k_0 = 2.34, k_t = 0.29 and k_p = 1.14; and for Speech KBG, k_0 = 3.31, k_t = 0.40 and k_p = 1.08.

ii) A novel needs personalized language to express the characteristics of the different persons in the story, using complex grammatical long sentences to express thoughts and changing the style of word expression with different scenarios. Take Hamlet as an example: Claudius's speech is rich with rhetorical figures, as is Hamlet's and, at times, Ophelia's, while the language of Horatio, the guards, and the grave diggers is simpler. A great speech, in contrast, puts the occasion, the audience, and the speaker together in an unforgettable way. Most estimates put the number of words per minute at around 80 ∼ 150.
METHODS
Data description. 10 books are analyzed in this article: No. 1, Alice's Adventures in Wonderland, written by Lewis Carroll; No. 2, The Adventures of Tom Sawyer, written by Mark Twain; No. 3, A Christmas Carol, written by Charles Dickens; No. 4, David Crockett, written by John S. C. Abbott; No. 5, An Enquiry Concerning Human Understanding, written by David Hume; No. 6, Hamlet, written by William Shakespeare; No. 7, The Hound of the Baskervilles, written by Sir Arthur Conan Doyle; No. 8, Moby-Dick: or, The Whale, written by Herman Melville; No. 9, The Origin of Species by Means of Natural Selection, written by Charles Darwin; No. 10, Ulysses, written by James Joyce. These books cover disparate topics and types and were written at widely different dates. The basic statistics of these books are presented in Table 1. This corpus of the ten Chinese books consists of about 1,002,504 total words and 100,335 distinct words. The transcriptions of the Speech British National Corpus from the Phonetics Laboratory are used in this article. The British National Corpus (BNC) is a 100-million-word collection of samples of written and spoken language from a wide range of sources, designed to represent a wide cross-section of British English from the later part of the 20th century, both spoken and written. The spoken part consists of orthographic transcriptions of unscripted informal conversations (recorded by volunteers selected from different ages, regions and social classes in a demographically balanced way) and spoken language collected in different contexts, ranging from formal business or government meetings to radio shows and phone-ins. We selected 10 transcriptions of daily conversation from the BNC database which have comparable length to the books. This corpus of the ten speech texts consists of about 747,454 total words and 43,468 distinct words.
C.H.B conceived the research. R.K.L and Q.L.M analysed the data, C.H.B and Q.L.M designed and simulated the model. All authors wrote and revised the manuscript.
ACKNOWLEDGEMENTS

We thank the support from the Fundamental Research Funds for the Central Universities.

COMPETING FINANCIAL INTERESTS

The authors declare no competing financial interests.
The language of genes. D B Searls, Nature. 420211217Searls, D. B. The language of genes. Nature 420, 211217 (2002).
Zipf's Law in Gene Expression. C Furusawa, K Kaneko, Phys. Rev. Lett. 9088102Furusawa, C. & Kaneko, K. Zipf's Law in Gene Expression. Phys. Rev. Lett. 90, 088102 (2003).
The structure of the protein universe and genome evolution. E V Koonin, Y I Wolf, G P Karev, Nature. 420218223Koonin, E. V., Wolf, Y. I. & Karev, G. P. The structure of the protein universe and genome evolution. Nature 420, 218223 (2002)
Species lifetime distribution for simple models of ecologies. S Pigolotti, A Flammini, M Marsili, A Martian, Proc. Natl. Acad. Sci. U.S.A. 1021574715751Pigolotti, S., Flammini, A., Marsili, M. & Martian, A. Species lifetime distribution for simple models of ecologies. Proc. Natl. Acad. Sci. U.S.A. 102, 1574715751 (2005).
The size distribution of business firms. H A Simon, C P Bonini, Am. Econ. Rev. 48607617Simon, H. A. & Bonini, C. P. The size distribution of business firms. Am. Econ. Rev. 48, 607617 (1958).
Semiotic dynamics in online social communities. C Cattuto, Eur. Phys. J. C. 463337Cattuto C. Semiotic dynamics in online social communities. Eur. Phys. J. C 46, 3337 (2006).
A theory of power-law distributions in financial market fluctuations. X Gabaix, P Gopikrishnan, V Plerou, H E Stanley, Nature. 423267270Gabaix, X., Gopikrishnan, P., Plerou, V. & Stanley, H. E., A theory of power-law distributions in financial market fluctuations. Nature 423, 267270 (2003).
Zipf rank approach and cross-country convergence of incomes. J Shao, P Ivanov, Ch, B Urosevic, H E Stanley, B Podobnik, Europhys. Lett. 9448001Shao, J., Ivanov, P. Ch., Urosevic, B., Stanley, H. E. & Podobnik B., Zipf rank approach and cross-country convergence of incomes. Europhys. Lett. 94, 48001 (2011).
The return of Zipf: Towards a further understanding of the rank-size distribution. S Brakman, H Garretsen, C Van Marrewijk, Van Den, M Berg, J. Reg. Sci. 39739767Brakman, S., Garretsen, H., van Marrewijk C. & van den Berg, M. The return of Zipf: Towards a further understanding of the rank-size distribution. J. Reg. Sci. 39, 739767 (1999).
Zipf's law for cities: an explanation. X Gabaix, Q J Econ. 114739767Gabaix, X. Zipf's law for cities: an explanation. Q J Econ 114, 739767(1999).
The evolution of language. M A Nowak, D C Krakauer, Proc. Natl. Acad. Sci. Natl. Acad. SciNowak, M. A. & Krakauer, D. C. The evolution of language. Proc. Natl. Acad. Sci.
. U S , 9680288033U.S.A. 96, 80288033 (1999).
Language as an evolving word web. S N Dorogovtsev, J F F Mendes, Proc. R. Soc. Lond. B. 26826032606Dorogovtsev, S. N. & Mendes, J. F. F. Language as an evolving word web. Proc. R. Soc. Lond. B 268, 26032606 (2001).
Computational and evolutionary aspects of language. M A Nowak, N L Komarova, P Niyogi, Nature. 417611617Nowak, M. A., Komarova, N. L. & Niyogi, P. Computational and evolutionary aspects of language. Nature 417, 611617 (2002).
The faculty of language: what is it, who has it, and how did it evolve?. M D Hauser, N Chomsky, W T Fitch, Science. 29815691579Hauser, M. D., Chomsky, N. & Fitch, W. T. The faculty of language: what is it, who has it, and how did it evolve? Science 298, 15691579 (2002).
Modelling the dynamics of language death. D M Abrams, S H Strogatz, Nature. 424900Abrams, D. M. & Strogatz, S. H. Modelling the dynamics of language death. Nature 424, 900 (2003).
Quantifying the evolutionary dynamics of language. E Lieberman, J B Michel, J Jackson, T Tang, M A Nowak, Nature. 449713716Lieberman, E., Michel, J. B., Jackson, J., Tang, T. & Nowak, M. A. Quantifying the evolutionary dynamics of language. Nature 449, 713716 (2007).
Language networks: Their structure, function, and evolution. R V Sole, B Corominas-Murtra, S Valverde, L Steels, Complexity. 152026Sole, R. V., Corominas-Murtra, B., Valverde, S. & Steels, L. Language networks: Their structure, function, and evolution. Complexity 15, 2026 (2010).
Statistical laws governing fluctuations in word use from word birth to word death. A M Petersen, J Tenenbaum, S Havlin, H E Stanley, Sci. Rep. 2313Petersen, A. M., Tenenbaum, J., Havlin, S. & Stanley, H. E. Statistical laws governing fluctuations in word use from word birth to word death. Sci. Rep. 2, 313 (2012).
Culturomics meets random fractal theory: insights into long-range correlations of social and natural phenomena over the past two centuries. J Gao, J Hu, X Mao, M Perc, Gao, J., Hu, J., Mao, X. & Perc, M. Culturomics meets random fractal theory: insights into long-range correlations of social and natural phenomena over the past two centuries. J.
. R. Soc. Interface. 919561964R. Soc. Interface 9, 19561964 (2012).
Behavior and the principal of least effort. G K Zipf, Addison-WealeyCambridge, MAZipf, G. K. Behavior and the principal of least effort (Addison-Wealey,Cambridge, MA, 1949).
Information retrieval-computational and theoretical aspects. H S Heaps, Academic PressHeaps, H. S. Information retrieval-computational and theoretical aspects (Academic Press, 1978).
Markov processes: linguistics and Zipf's law. I Kanter, D A Kessler, Phys. Rev. Lett. 7445594562Kanter, I. & Kessler, D. A. Markov processes: linguistics and Zipf's law. Phys. Rev. Lett. 74, 45594562 (1995).
Least effort and the origins of scaling in human language. R F I Cancho, R V Sole, Cancho, R. F. i. & Sole, R. V. Least effort and the origins of scaling in human language.
. Proc. Natl. Acad. Sci. U.S.A. 100788791Proc. Natl. Acad. Sci. U.S.A. 100, 788791 (2002).
Zipf and Heaps Laws coefficients depend on language. A Gelbukh, G Sidorov, Lect. Notes Comput. Sci. 332335Gelbukh, A. & Sidorov, G. Zipf and Heaps Laws coefficients depend on language. Lect. Notes Comput. Sci. 2004, 332335 (2001).
Modeling statistical properties of written text. M A Serrano, A Flammini, F Menczer, PLoS ONE. 45372Serrano, M. A., Flammini, A. & Menczer, F. Modeling statistical properties of written text. PLoS ONE 4, e5372 (2009).
True reason for Zipf's law in language. D Wang, M Li, Z Di, Physica A. 358545Wang, D., Li, M. & Di, Z. True reason for Zipf's law in language. Physica A 358, 545 (2005).
Deviation of Zipf's and Heaps' laws in human languages with limited dictionary sizes. L Lu, Z Zhang, T Zhou, Sci. Rep. 31082Lu, L., Zhang, Z. & Zhou, T. Deviation of Zipf's and Heaps' laws in human languages with limited dictionary sizes. Sci. Rep. 3, 1082 (2013).
C Cattuto, V Loreto, L Pietronero, Semiotic dynamics and collaborative tagging. Cattuto, C., Loreto, V. & Pietronero, L. Semiotic dynamics and collaborative tagging.
. Proc. Natl. Acad. Sci. U.S.A. 10414611464Proc. Natl. Acad. Sci. U.S.A. 104, 14611464 (2007).
Collective dynamics of social annotation. C Cattuto, A Barrat, A Baldassarri, G Schehr, V Loreto, Proc. Natl. Acad. Sci. U.S.A. 1061051110515Cattuto, C., Barrat, A., Baldassarri, A., Schehr, G. & Loreto, V. Collective dynamics of social annotation. Proc. Natl. Acad. Sci. U.S.A. 106, 1051110515 (2009).
Empirical analysis on a keyword-based semantic system. Z.-K Zhang, L Lu, J.-G Liu, T Zhou, Eur. Phys. J. B. 66557561Zhang, Z.-K., Lu, L., Liu, J.-G. & Zhou, T. Empirical analysis on a keyword-based semantic system. Eur. Phys. J. B 66, 557561 (2008).
Internet search result probabilities: Heaps' law and word associativity. J C Lansey, B Bukiet, J. Quant. Linguistics. 164066Lansey, J. C. & Bukiet, B. Internet search result probabilities: Heaps' law and word associativity. J. Quant. Linguistics 16, 4066 (2009).
Discovering power laws in computer programs. H.-Y Zhang, Inf. Process. Manage. 45477483Zhang, H.-Y. Discovering power laws in computer programs. Inf. Process. Manage. 45, 477483 (2009).
Audio BNC: the audio edition of the Speech British National Corpus. J Coleman, L Baghai-Ravary, J Pybus, S Grau, Phonetics Laboratory, University of OxfordColeman, J. ,Baghai-Ravary, L., Pybus, J., & Grau, S. Audio BNC: the audio edition of the Speech British National Corpus. (Phonetics Laboratory, University of Oxford 2012).
Accuracy and scaling phenomena in internet mapping. A Clauset, C Moore, Phys. Rev. Lett. 9418701Clauset, A. & Moore, C. Accuracy and scaling phenomena in internet mapping. Phys. Rev. Lett. 94, 018701 (2005).
Measuring preferential attachment for evolving networks. H Jeong, Z Neda, A.-L Barabasi, Europhys. Lett. 61567572Jeong, H., Neda, Z. & Barabasi, A.-L. Measuring preferential attachment for evolving networks. Europhys. Lett. 61, 567572 (2003).
Emergence of scaling in random networks. A.-L Barabasi, R Albert, Science. 286509512Barabasi, A.-L. & Albert, R. Emergence of scaling in random networks. Science 286, 509512. (1999).
Effect of the accelerating growth of communications networks on their structure. S N Dorogovtsev, J F Mendes, Phys. Rev. E. 6325101Dorogovtsev, S. N. & Mendes, J. F. F. Effect of the accelerating growth of communica- tions networks on their structure. Phys. Rev. E 63, 025101(R) (2001).
A mathematical theory of evolution, based on the conclusions of Dr. G U Yule, J. C. Yule, G. U. A mathematical theory of evolution, based on the conclusions of Dr. J. C.
. F R S Willis, Phil, Trans. R. Soc. B. 2132187Willis, F.R.S. Phil. Trans. R. Soc. B 213, 2187 (1925).
On a class of skew distribution functions. H A Simon, Biometrika. 42425440Simon, H. A., On a class of skew distribution functions. Biometrika 42, 425440 (1955).
A model of income distribution. D Champernowne, Econ J. 63318351Champernowne, D. A model of income distribution. Econ J. 63, 318351 (1953).
A stochastic model of superstardom: An application of the Yule distribution. K H Chung, R A K Cox, Rev Econ Stat. 76771775Chung K. H., & Cox R. A. K. A stochastic model of superstardom: An application of the Yule distribution. Rev Econ Stat 76, 771775 (1994).
A Yule-Simon process with memory. C Cattuto, V Loreto, V D P Servedio, Cattuto, C., Loreto, V. & Servedio, V. D. P. A Yule-Simon process with memory.
. Europhys. Lett. 762208214Europhys. Lett. 76(2), 208214 (2006).
| [] |
[
"Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective",
"Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective"
] | [
"Luis C Lamb \nUFRGS\nBrazil\n",
"Artur D'avila Garcez \nCity\nUniversity of London\n\n",
"Marco Gori marco.gori@unisi.it \nUniversity of Siena\nIT\n\nUniversité Côte d'Azur\n3IAFR\n",
"Marcelo O R Prates \nUFRGS\nBrazil\n",
"Pedro H C Avelar phcavelar@inf.ufrgs.br \nUFRGS\nBrazil\n\nUniversity of Siena\nIT\n",
"Moshe Y Vardi vardi@cs.rice.edu \nRice University\nHoustonUSA\n"
] | [
"UFRGS\nBrazil",
"City\nUniversity of London\n",
"University of Siena\nIT",
"Université Côte d'Azur\n3IAFR",
"UFRGS\nBrazil",
"UFRGS\nBrazil",
"University of Siena\nIT",
"Rice University\nHoustonUSA"
] | [] | Neural-symbolic computing has now become the subject of interest of both academic and industry research laboratories. Graph Neural Networks (GNNs) have been widely used in relational and symbolic domains, with widespread application of GNNs in combinatorial optimization, constraint satisfaction, relational reasoning and other scientific domains. The need for improved explainability, interpretability and trust of AI systems in general demands principled methodologies, as suggested by neural-symbolic computing. In this paper, we review the state-of-the-art on the use of GNNs as a model of neural-symbolic computing. This includes the application of GNNs in several domains as well as their relationship to current developments in neural-symbolic computing. | 10.24963/ijcai.2020/679 | [
"https://arxiv.org/pdf/2003.00330v7.pdf"
] | 211,677,661 | 2003.00330 | eb1a7d811ac4041963841cae7a0d49713dbce9aa |
Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective
12 Jun 2021
Luis C Lamb
UFRGS
Brazil
Artur D'avila Garcez
City
University of London
Marco Gori marco.gori@unisi.it
University of Siena
IT
Université Côte d'Azur
3IAFR
Marcelo O R Prates
UFRGS
Brazil
Pedro H C Avelar phcavelar@inf.ufrgs.br
UFRGS
Brazil
University of Siena
IT
Moshe Y Vardi vardi@cs.rice.edu
Rice University
HoustonUSA
Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective
12 Jun 2021
Neural-symbolic computing has now become the subject of interest of both academic and industry research laboratories. Graph Neural Networks (GNNs) have been widely used in relational and symbolic domains, with widespread application of GNNs in combinatorial optimization, constraint satisfaction, relational reasoning and other scientific domains. The need for improved explainability, interpretability and trust of AI systems in general demands principled methodologies, as suggested by neural-symbolic computing. In this paper, we review the state-of-the-art on the use of GNNs as a model of neural-symbolic computing. This includes the application of GNNs in several domains as well as their relationship to current developments in neural-symbolic computing.
Introduction
Over the last decade Artificial Intelligence in general, and deep learning in particular, have been the focus of intensive research endeavors, gathered media attention and led to debates on their impacts both in academia and industry [Marcus, 2020;Raghavan, 2019]. The recent AI Debate in Montreal with Yoshua Bengio and Gary Marcus [Marcus, 2020], and the AAAI-2020 fireside conversation with Nobel Laureate Daniel Kahneman and the 2018 Turing Award winners and deep learning pioneers Geoff Hinton, Yoshua Bengio and Yann LeCun have led to new perspectives on the future of AI. It has now been argued that if one aims to build richer AI systems, i.e. semantically sound, explainable, and reliable, one has to add a sound reasoning layer to deep learning [Marcus, 2020]. Kahneman has made this point clear when he stated at AAAI-2020 that "...so far as I'm concerned, System 1 certainly knows language... System 2... does involve certain manipulation of symbols." [Kahneman et al., 2020].
Kahneman's comments address recent parallels made by AI researchers between "Thinking, Fast and Slow" and the so-called "AI's systems 1 and 2", which could, in principle, be modelled by deep learning and symbolic reasoning, respectively. 1 In this paper, we present a survey and relate recent research results on: (1) Neural-Symbolic Computing, by summarizing the main approaches to rich knowledge representation and reasoning within deep learning, and (2) the approach pioneered by the authors and others of Graph Neural Networks (GNN) for learning and reasoning about problems that require relational structures or symbolic learning. Although recent papers have surveyed GNNs, including [Battaglia et al., 2018; Chami et al., 2020; Zhang et al., 2018], they have not focused on the relationship between GNNs and neural-symbolic computing (NSC). Other recent work also touches on particular topics related to some we discuss here, in particular meta-transfer learning. Recent surveys in neural-symbolic computing [d'Avila Garcez et al., 2015; Townsend et al., 2019] have not exploited the highly relevant applications of GNN in symbolic and relational learning, or the relationship between the two approaches.

Our Contribution As mentioned above, recent work has surveyed graph neural networks and neural-symbolic computing, but to the best of our knowledge, no survey has reviewed and analysed the recent results on the specific relationship between GNNs and NSC. We also outline promising directions for research and applications combining GNNs and NSC from the perspective of symbolic reasoning tasks. The above-referenced surveys on GNNs, although comprehensive, all describe other application domains. The remainder of the paper is organized as follows. In Section 2, we present an overview and taxonomy of neural-symbolic computing. In Section 3, we discuss the main GNN models and their relationship to neural-symbolic computing. We then outline the main GNN architectures and their use in relational and symbolic learning. Finally, we conclude and point out directions for further research. We shall assume familiarity with neural learning and symbolic AI.
At AAAI-2020 in New York, on February 10th, 2020, Henry Kautz introduced a taxonomy for neural-symbolic computing as part of a talk entitled The Third AI Summer.
Six types of neural-symbolic systems are outlined: 1. SYMBOLIC NEURO SYMBOLIC, 2. SYMBOLIC[NEURO], 3. NEURO;SYMBOLIC, 4. NEURO:SYMBOLIC → NEURO, 5. NEURO SYMBOLIC and 6. NEURO[SYMBOLIC].
The origin of GNNs [Scarselli et al., 2008] can be traced back to neural-symbolic computing (NSC) in that both sought to enrich the vector representations in the inputs of neural networks, first by accepting tree structures and then graphs more generally. In this sense, according to Kautz's taxonomy, GNNs are a TYPE 1 neural-symbolic system. GNNs [Battaglia et al., 2018] were recently combined with convolutional networks in novel ways which have produced impressive results on data efficiency. In parallel, NSC has focused on the learning of adequate embeddings for the purpose of symbolic computation. This branch of neural-symbolic computing, which includes Logic Tensor Networks [Serafini and d'Avila Garcez, 2016] and Tensor Product Representations [Huang et al., 2017], has been referred to as tensorization methods and draws similarities with [Diligenti et al., 2017], which uses fuzzy methods to represent first-order logic. These have been classified by Kautz as TYPE 5 neural-symbolic systems, as also discussed in what follows. A natural point of contact between GNNs and NSC is the provision of rich embeddings and attention mechanisms towards structured reasoning and efficient learning.
TYPE 1 neural-symbolic integration is standard deep learning, which some may argue is a stretch to refer to as neuralsymbolic, but which is included here to note that the input and output of a neural network can be made of symbols e.g. in the case of language translation or question answering applications. TYPE 2 are hybrid systems such as DeepMind's AlphaGo and other systems, where the core neural network is loosely-coupled with a symbolic problem solver such as Monte Carlo tree search. TYPE 3 is also a hybrid system whereby a neural network focusing on one task (e.g. object detection) interacts via input/output with a symbolic system specialising in a complementary task (e.g. query answering). Examples include the neuro-symbolic concept learner and deepProbLog [Manhaeve et al., 2018;Galassi et al., 2020].
In a TYPE 4 neural-symbolic system, symbolic knowledge is compiled into the training set of a neural network. Kautz offers [Lample and Charton, 2020] as an example. Here, we would also include other tightly-coupled neural-symbolic systems where various forms of symbolic knowledge, not restricted to if-then rules only, can be translated into the initial architecture and set of weights of a neural network [d'Avila Garcez et al., 2009], in some cases with guarantees of correctness. We should also mention [Arabshahi et al., 2018], which learn and reason over mathematical constructions, as well as [Arabshahi et al., 2019], which propose a learning architecture that extrapolates to much harder symbolic maths reasoning problems than what was seen during training. TYPE 5 are those tightly-coupled neural-symbolic systems where a symbolic logic rule is mapped onto a distributed representation (an embedding) and acts as a soft-constraint (a regularizer) on the network's loss function. Examples of these are [Huang et al., 2017] and [Serafini and d'Avila Garcez, 2016].
Finally, TYPE 6 systems should be capable, according to Kautz, of true symbolic reasoning inside a neural engine. It is what one could refer to as a fully-integrated system. Early work in neural-symbolic computing has achieved this: see [d'Avila Garcez et al., 2009] for a historical overview; and some TYPE 4 systems are also capable of it [d'Avila Garcez et al., 2009;d'Avila Garcez et al., 2015;Hitzler et al., 2004], but in a localist rather than a distributed architecture and using simpler forms of embedding than TYPE 5 systems. Kautz adds that TYPE 6 systems should be capable of combinatorial reasoning, suggesting using an attention schema to achieve it effectively. In fact, attention mechanisms can be used to solve graph problems, for example with pointer networks [Vinyals et al., 2015]. It should be noted that the same problems can be solved through other NSC architectures, such as GNNs [Prates et al., 2019]. This idea resonates with the recent proposal outlined by Bengio in the AI debate of December 2019.
In what concerns neural-symbolic computing theory, the study of TYPE 6 systems is highly relevant. In practical terms, a tension exists between effective learning and sound reasoning, which may prescribe the use of a more hybrid approach (TYPES 3 to 5) or variations thereof such as the use of attention with tensorization. Orthogonal to the above taxonomy, but mostly associated so far with TYPE 4, is the study of the limits of reasoning within neural networks w.r.t. full first-order, higher-order and non-classical logic theorem proving [d'Avila Garcez and Lamb, 2003;d'Avila Garcez et al., 2015]. In this paper, as we revisit the use of rich logic embeddings in TYPE 5 systems, notably Logic Tensor Networks [Serafini and d'Avila Garcez, 2016], alongside the use of attention mechanisms or convolutions in GNNs, we will seek to propose a research agenda and specific applications of symbolic reasoning and statistical learning towards the sound development of TYPE 6 systems.
Graph Neural Networks Meet Neural-Symbolic Computing
One of the key concepts in machine learning is that of priors or inductive biases - the set of assumptions that a learner uses to compute predictions on test data. In the context of deep learning (DL), the design of neural building blocks that enforce strong priors has been a major source of breakthroughs. For instance, the priors obtained through feedforward layers encourage the learner to combine features additively, while the ones obtained through dropout discourage it from overfitting and the ones obtained through multi-task learning encourage it to prefer sets of parameters that explain more than one task. One of the most influential neural building blocks, having helped pave the way for the DL revolution, is the convolutional layer [LeCun et al., 2015]. Convolutional architectures are successful for tasks defined over Euclidean signals because they enforce equivariance to spatial translation. This is a useful property to have when learning representations for objects regardless of their position in a scene.

Figure 1: Due to permutation invariance, literals ¬x5 and x3 can exchange places with no effect on the boolean function f(x) = (x1 ∨ ¬x5 ∨ x2 ∨ x3 ∨ ¬x4). There are 5! = 120 such permutations.
Analogously, recurrent layers enforce equivariance in time, which is useful for learning over sequential data. Recently, attention mechanisms, through the advent of transformer networks, have enabled advances in the state of the art in many sequential tasks, notably in natural language processing [Devlin et al., 2019; Goyal et al., 2019] and symbolic reasoning tasks such as solving math equations and integrals 2 [Lample and Charton, 2020]. Attention encourages the learner to combine representations additively, while also enforcing permutation invariance. All three architectures take advantage of sparse connectivity - another important design in DL which is key to enabling the training of larger models. Sparse connectivity and neural building blocks with strong priors usually go hand in hand, as the latter leverage symmetries in the input space to cut down parameters through invariance to different types of transformations. NSC architectures often combine the key design concepts from convolutional networks and attention-based architectures to enforce permutation invariance over the elements of a set or the nodes of a graph (Fig. 1). Some NSC architectures such as Pointer Networks [Vinyals et al., 2015] implement attention directly over a set of inputs X = {x_1, . . . , x_n} coupled with a decoder that outputs a sequence (i_1, i_2, . . . , i_m) ∈ [1, n]^m of "pointers" to the input elements (hence the name). Note that both formalizations are defined over set inputs rather than sequential ones.
Logic Tensor Networks
Tensorisation is a class of approaches that embeds first-order logic (FOL) symbols such as constants, facts and rules into real-valued tensors. Normally, constants are represented as one-hot vectors (first-order tensor). Predicates and functions are matrices (second-order tensor) or higher-order tensors.
In early work, embedding techniques were proposed to transform symbolic representations into vector spaces where reasoning can be done through matrix computation [Bordes et al., 2011; Serafini and d'Avila Garcez, 2016; Santoro et al., 2017]. Training embedding systems can be carried out as distance learning using backpropagation. Most research in this direction focuses on representing relational predicates in a neural network. This is known as "relational embedding" [Sutskever and Hinton, 2008]. For the representation of more complex logical structures, i.e. FOL formulas, a system named Logic Tensor Network (LTN) [Serafini and d'Avila Garcez, 2016] has been proposed by extending Neural Tensor Networks (NTN), a state-of-the-art relational embedding method. LTNs effectively implement learning using symbolic information as a prior, as pointed out by [van Harmelen and ten Teije, 2019]. Related ideas are discussed formally in the context of constraint-based learning and reasoning [d'Avila Garcez et al., 2019]. Recent research in first-order logic programs has successfully exploited advantages of distributed representations of logic symbols for efficient reasoning, inductive programming [Evans and Grefenstette, 2018] and differentiable theorem proving [Rocktäschel and Riedel, 2016].
Pointer Networks
The Pointer Network (PN) formalization [Vinyals et al., 2015] is a neural architecture meant for computing an m-sized sequence (i_1, i_2, . . . , i_m) ∈ [1, n]^m over the elements of an input set X = {x_1, . . . , x_n}. PN implements a simple modification over the traditional seq2seq model, augmenting it with a simplified variant of the attention mechanism whose outputs are interpreted as "pointers" to the input elements. Traditional seq2seq models implement an encoder-decoder architecture in which the elements of the input sequence are consumed in order and used to update the encoder's hidden state at each step. Finally, a decoder consumes the encoder's hidden state and is used to yield a sequence of outputs, one at a time.
It is known that seq2seq models tend to exhibit improved performance when augmented with an attention mechanism, a phenomenon noticeable from the perspective of Natural Language Processing [Devlin et al., 2019]. Traditional models however yield sequences of outputs over a fixed-length dictionary (for instance a dictionary of tokens for language models), which is not useful for tasks whose output is defined over the input set and hence require a variable-length dictionary. PN tackle this problem by encoding the n-sized input set P with a traditional encoding architecture and decoding a probability distribution p(C i |C 1 , . . . C i−1 , P) over the set {1, . . . , n} of indices at each step i by computing a softmax over an attention layer parameterized by matrices W 1 , W 2 and vector v feeding on the decoder state d i and the encoder states e i (1, . . . , n):
u^i_j = v^⊤ tanh(W_1 e_j + W_2 d_i),  j ∈ (1, . . . , n)
p(C_i | C_1, . . . , C_{i−1}, P) = softmax(u^i)    (1)
The output pointers can then be used to compute loss functions over combinatorial optimization problems. In the original paper the authors define a PN to solve the Traveling Salesperson Problem (TSP), in which a beam search procedure is used to select cities given the probability distributions computed at each step; finally, a loss function can be computed for the output tour by adding the corresponding city distances. Given their discrete nature, PNs are naturally suitable for many combinatorial problems (the original paper evaluates PN on Delaunay Triangulation, TSP and Convex Hull problems). Unfortunately, even though PNs can solve problems over sets, they cannot be directly applied to general (non-complete) graphs.
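A minimal sketch of the pointer attention in Eq. (1) is given below; it is not the original implementation, and the hidden size, the random parameters and the function names are illustrative assumptions.

```python
# Minimal sketch: pointer attention of Eq. (1) producing a distribution over input elements.
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def pointer_distribution(E, d, W1, W2, v):
    """E: (n, h) encoder states, d: (h,) decoder state -> (n,) pointer probabilities."""
    u = np.tanh(E @ W1.T + d @ W2.T) @ v          # u_j^i = v^T tanh(W1 e_j + W2 d_i)
    return softmax(u)

rng = np.random.default_rng(0)
n, h = 5, 8                                        # 5 input elements, hidden size 8
E, d = rng.normal(size=(n, h)), rng.normal(size=h)
W1, W2, v = rng.normal(size=(h, h)), rng.normal(size=(h, h)), rng.normal(size=h)
probs = pointer_distribution(E, d, W1, W2, v)
print(probs, probs.argmax())                       # the "pointer" is the argmax / sampled index
```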
Convolutions as Self-Attention
The core building block of models in the GNN family is the graph convolution operation, which is a neural building block that enables one to perform learning over graph inputs. Empowering DL architectures with the capacity of feeding on graph-based data is particularly suitable for neural-symbolic reasoning, as symbolic expressions are easily represented as graphs (Fig. 2). Furthermore, graph representations have useful properties such as permutation invariance and flexibility for generalization over the input size (models in the graph neural network family can be fed with graphs regardless of their size in terms of number of vertices). Graph convolutions can be seen as a variation of the well-known attention mechanism [Garcia and Bruna, 2018]. A graph convolution is essentially an attention layer with two key differences:
1. There is no dot-product for computing weights: encodings are simply added together with unit weights. 3 2. The sum is masked with an adjacency mask, or in other words the graph convolution generalizes attention for non-complete graphs.
All models in the GNN family learn continuous representations for graphs by embedding nodes into hyper-dimensional spaces, an insight motivated by graph embedding algorithms. A graph embedding corresponds to a function f : V → R n mapping from the set of vertices V of a graph G = (V, E) to n-dimensional vectors. In the context of GNNs, we are interested in learning the parameters θ of a function f :
G × θ → (V → R n ).
That is, a parameterized function f (G, θ) over the set of graphs G whose outputs are mappings V → R n from vertices to n-dimensional vectors. In other words, GNNs learn functions to encode vertices in a generalized way. Note that since the output from a GNN is itself a function, there are no limitations for the number of vertices in the input graph. This useful property stems from the modular architecture of GNNs, which will be discussed at length in the sequel. We argue that this should be interesting to explore in the context of neural-symbolic computing in the representation and manipulation of variables within neural networks.
Generally, instead of synthesizing a vertex embedding function from the ground up, GNNs choose an initial, simpler vertex embedding such as mapping each vertex to the same (learned) vector representation or sampling vectors from a multivariate normal distribution, and then learn to refine this representation by iteratively updating representations for all vertices. The refinement process, which consists of each vertex aggregating information from its direct neighbors to update its own embedding is at the core of how GNNs learn properties over graphs. Over many refinement steps, vertices can aggregate structural information about progressively larger reachable subsets of the input graph. However we rely on a well-suited transformation at each step to enable vertices to make use of this structural information to solve problems over graphs. The graph convolution layer, described next in Section 3.4, implements such transformation.
Graph Convolutional Networks
3 The Graph Attention Network (GAT), however, generalizes graph convolutions with dot-product attention [Veličković et al., 2018].

Figure 2: CNF formula F = (x1 ∨ ¬x2) ∧ (x3 ∨ x4 ∨ x5) represented as a graph: clauses and literals correspond to nodes, edges between clauses and literals are painted gray and edges between literals and their complements are painted black.

Graph convolutions are defined in analogy to convolutional layers over Euclidean data. Both architectures compute weighted sums over a neighborhood. For CNNs, this neighborhood is the well-known 9-connected or 25-connected neighborhood defined over pixels. One can think of the set of pixels of an image as a graph with a grid topology in which each vertex is associated with a vector representation corresponding to the Red/Green/Blue channels. The internal activations of a CNN can also be thought of as graphs with grid topologies, but the vector representations for each pixel are generally embedded in spaces of higher dimensionality (corresponding to the number of convolutional kernels learned at each layer). In this context, Graph Convolutional Networks (GCNs) [Kipf and Welling, 2017] can be thought of as a generalization of CNNs for non-grid topologies. Generalizing CNNs this way is tricky because one cannot rely anymore on learning 3 × 3 or 5 × 5 kernels, for two reasons:
1. In grid topologies pixels are embedded in 2-dimensional Euclidean space, which enables one to learn a specific weight for each neighbor on the basis of its relative position (left, right, central, top-right, etc.). This is not true for general graphs, and hence weights such as W_{1,0}, W_{1,1}, W_{0,1} do not always have a clear interpretation.
2. In grid topologies each vertex has a fixed number of neighbors, which enables weight sharing, but there is no such constraint for general graphs. Thus we cannot hope to learn a specific weight for each neighbor, as the required number of such weights will vary with the input graph.
GCNs tackle this problem the following way: Instead of learning kernels corresponding to matrices of weights, they learn transformations for vector representations (embeddings) of graph vertices. Concretely, given a graph G = (V, E) and a matrix x (k) ∈ R |V|×d of vertex representations (i.e. x i (k) is the vector representation of vertex i at the k-th layer), a GCN computes the representations x i (k+1) of vertex i in the next layer as:
x_i^{(k+1)} = σ( Σ_{j ∈ N(i) ∪ {i}} θ_k · x_j^{(k)} / ( √(deg(i)) √(deg(j)) ) )    (2)
In other words, we linearly transform the vector representation of each neighbor j by multiplying it with a learned matrix of weights θ k , normalizing it by the square roots of the degrees deg i, deg j of both vertices, aggregate all results additively and finally apply a non-linearity σ. Note that θ k denotes the learned weight matrix for GCN layer k: in general one will stack n different GCN layers together and hence learn the parameters of n such matrices. Also note that one iterates over an extended neighborhood N (i) ∪ {i}, which includes i itself. This is done to prevent "forgetting" the representation of the vertex being updated. Equation 2 can be summarized as:
x^{(k+1)} = D̃^{−1/2} Ã D̃^{−1/2} x^{(k)} θ^{(k)}, where Ã = A + I is the adjacency matrix A plus self-loops (I is the identity matrix) and D̃ is the degree matrix of Ã.
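The matrix form above can be sketched in a few lines of numpy; this is not the reference implementation, and the example graph, feature sizes and the choice of ReLU as the non-linearity σ are illustrative assumptions.

```python
# Minimal sketch of one GCN layer: ReLU( D~^{-1/2} A~ D~^{-1/2} X theta ).
import numpy as np

def gcn_layer(A, X, theta):
    """A: (n, n) adjacency, X: (n, d) node features, theta: (d, d') weights."""
    A_tilde = A + np.eye(A.shape[0])                 # add self-loops
    deg = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt        # symmetric normalisation
    return np.maximum(0.0, A_hat @ X @ theta)        # ReLU non-linearity

# Example: a path graph 0-1-2 with 4-dimensional features and an 8-dimensional output.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
theta = rng.normal(size=(4, 8))
print(gcn_layer(A, X, theta).shape)                  # (3, 8)
```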
The Graph Neural Network Model
Although GCNs are conceptually simpler, the GNN model predates them by a decade, having been originally proposed by [Scarselli et al., 2008]. The model is similar to GCNs, with two key differences:
(1) One does not stack multiple independent layers as in GCNs. A single parameterized function is iterated many times, in analogy to recurrent networks, until convergence.
(2) The transformations applied to neighbor vertex representations are not necessarily linear, and can be implemented by deep neural networks (e.g. by a multilayer perceptron).
Concretely, the graph neural network model defines parameterized functions h : N × N × N × R^d → R^d and g : N × R^d → R^o. The vertex representation x_i^{(t+1)} for vertex i at time (t + 1) is computed as:

x_i^{(t+1)} = Σ_{j∈N(i)} h(l_i, l_j, l_{ij}, x_j^{(t)})    (3)
where l_i, l_j and l_{ij} are respectively the labels for nodes i and j and edge ij, and R^d, R^o are respectively the space of vertex representations and the output space. The model is defined over labelled graphs, but can still be implemented for unlabelled ones by suppressing l_i, l_j, l_{ij} from the transition function. After a certain number of iterations one should expect that the vertex embeddings x_i^{(t+1)} are enriched with structural information about the input graph. At this point, the output function g can be used to compute an output for each vertex, given its final representation:
o i = g(l i , x i )
In other words, the output at the end of the process is a set of |V| vectors ∈ R^o. This is useful for node classification tasks, in which one can have o equal the number of node classes and enforce o_i to encode a probability distribution by incorporating a softmax layer into the output function g. If one would like to learn a function over the entire graph instead of over individual vertices, there are many possibilities, one of which is to compute the output on an aggregation over all final vertex representations: o = g( Σ_{v∈V} x_v )
Message-passing Neural Networks
Message-passing neural networks implement a slight modification over the original GNN model, which is to define a specialized update function u : R d × R d → R d to update the representation for vertex i given its current representation and an aggregation m i over transformations of neighbor vertex embeddings (which are referred to as "messages", hence message-passing neural networks), as an example:
x_i^{(t+1)} = u( x_i^{(t)}, Σ_{j∈N(i)} h(l_i, l_j, l_{ij}, x_j^{(t)}) )
Also, the update procedure is run over a fixed number of steps, and it is usual to implement u using some type of recurrent unit, such as Long Short-Term Memory (LSTM) cells [Hochreiter and Schmidhuber, 1997; Selsam et al., 2019] or Gated Recurrent Units.
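The following sketch illustrates one such message-passing step on an unlabelled graph; it is not a specific published model, and the choices of linear message/update maps and tanh are simplifying assumptions (real systems would use MLPs, LSTM or GRU cells as noted above).

```python
# Minimal sketch of one message-passing step: aggregate neighbour messages, then update.
import numpy as np

def message_passing_step(adj_list, X, W_msg, W_self, W_agg):
    """adj_list: neighbour index lists, X: (n, d) vertex states -> (n, d) updated states."""
    n, d = X.shape
    new_X = np.zeros_like(X)
    for i in range(n):
        msgs = X[adj_list[i]] @ W_msg if adj_list[i] else np.zeros((1, d))
        m_i = msgs.sum(axis=0)                              # aggregate neighbour messages
        new_X[i] = np.tanh(X[i] @ W_self + m_i @ W_agg)     # update function u
    return new_X

# Example: a triangle graph, 4-dimensional vertex states, two refinement steps.
adj_list = [[1, 2], [0, 2], [0, 1]]
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
W_msg, W_self, W_agg = (rng.normal(size=(4, 4)) for _ in range(3))
for _ in range(2):
    X = message_passing_step(adj_list, X, W_msg, W_self, W_agg)
print(X.shape)
```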
Graph Attention Networks
The Graph Attention Network (GAT) [Veličković et al., 2018] augments models in the graph neural network family with an attention mechanism enabling vertices to weigh neighbor representations during their aggregation. As with other types of attention, a parameterized function is used to compute the weights dynamically, which enables the model to learn to weigh representations wisely. The goal of the GAT is to compute a coefficient e_{ij} ∈ R for each neighbor j of a given vertex i, so that the aggregation in Equation 3 becomes:
x_i^{(t+1)} = Σ_{j∈N(i)} e_{ij} h(l_i, l_j, l_{ij}, x_j^{(t)})
To compute e_{ij}, the GAT introduces a weight matrix W ∈ R^{d×d}, used to multiply the vertex embeddings of i and j, which are concatenated and multiplied by a parameterized weight vector a. Finally, a non-linearity σ is applied to this computation and then a softmax over the set of neighbors N(i) is applied to the result, yielding: e_{ij} = softmax_j( σ( a · (W x_i || W x_j) ) ). GATs are known to outperform typical GCN architectures for graph classification tasks, as shown in the original paper [Veličković et al., 2018].
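The attention coefficients can be sketched as follows; this is not the reference GAT implementation, and the use of a LeakyReLU non-linearity, the dimensions and the random parameters are illustrative assumptions.

```python
# Minimal sketch: GAT-style attention weights over the neighbours of one vertex.
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_coefficients(x_i, neighbours, W, a):
    """Return attention weights over `neighbours` (a (k, d) array) for vertex state x_i."""
    scores = np.array([a @ np.concatenate([W @ x_i, W @ x_j]) for x_j in neighbours])
    scores = leaky_relu(scores)                    # non-linearity sigma
    scores = np.exp(scores - scores.max())
    return scores / scores.sum()                   # softmax over the neighbourhood

rng = np.random.default_rng(0)
d = 4
x_i = rng.normal(size=d)
neighbours = rng.normal(size=(3, d))               # states of 3 neighbours of vertex i
W, a = rng.normal(size=(d, d)), rng.normal(size=2 * d)
e = gat_coefficients(x_i, neighbours, W, a)
x_new = (e[:, None] * (neighbours @ W.T)).sum(axis=0)   # attention-weighted aggregation
print(e, x_new.shape)
```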
Perspectives and Applications of GNNs to Neural-Symbolic Computing
In this paper, we have seen that GNNs endowed with attention mechanisms are a promising direction of research towards the provision of rich reasoning and learning in TYPE 6 neural-symbolic systems. Future work includes, of course, application and systematic evaluation of relevant specific tasks and data sets. These include what John McCarthy described as drosophila tasks for Computer Science: basic problems that can illustrate the value of a computational model. Examples in the case of GNNs and NSC could be: (1) extrapolation of a learned classification of graphs as Hamiltonian to graphs of arbitrary size, (2) reasoning about a learned graph structure to generalise beyond the distribution of the training data, (3) reasoning about the partOf(X, Y) relation to make sense of handwritten MNIST digits and non-digits.
(4) using an adequate self-attention mechanism to make combinatorial reasoning computationally efficient. This last task relates to satisfiability, including work on using GNNs to solve the TSP problem. The other tasks are related to meta-transfer learning across domains, extrapolation and causality. In terms of domains of application, the following are relevant.
Relational Learning and Reasoning
GNN models have been successfully applied to a number of relational reasoning tasks. Despite the success of convolutional networks, visual scene understanding is still out of reach for pure CNN models, and hence is a fertile ground for GNN-based models. Hybrid CNN + GNN models in particular have been successful in these tasks, having been applied to understanding human-object interactions, localising objects, and challenging visual question answering problems [Santoro et al., 2017]. Relational reasoning has also been applied to physics, with models for extracting objects and relations in unsupervised fashion [van Steenkiste et al., 2018]. GNNs coupled with differentiable ODE solvers have been used to learn the Hamiltonian dynamics of physical systems given their interactions modelled as a dynamic graph [Greydanus et al., 2019]. The application of NSC models to life sciences is very promising, as graphs are natural representations for molecules, including proteins. In this context, [Stokes et al., 2020] have generated the first machine-learning-discovered antibiotic ("halicin") by training a GNN to predict the probability that a given input molecule has a growth inhibition effect on the bacterium E. coli and using it to rank randomly-generated molecules. Protein Structure Prediction, which is concerned with predicting the three-dimensional structure of proteins given their molecular description, is another promising problem for graph-based and NSC models such as DeepMind's AlphaFold and its variations [Wei, 2019].
In Natural Language Processing, tasks are usually defined over sequential data, but modelling textual data with graphs offers a number of advantages. Several approaches have defined graph neural networks over graphs of text co-occurrences, showing that these architectures improve upon the state of the art for seq2seq models [Yao et al., 2019]. GNN models have also been successfully applied to relational tasks over knowledge bases, such as link prediction [Schlichtkrull et al., 2018]. As previously mentioned, attention mechanisms, which can be seen as a variation of models in the GNN family, have enabled substantial improvements in several NLP tasks through transfer learning over pretrained transformer language models [Devlin et al., 2019]. The extent to which language models pretrained over huge amounts of data can perform language understanding, however, is substantially debated, as pointed out by both Marcus [Marcus, 2020] and Kahneman [Kahneman et al., 2020].
Graph-based neural network models have also found a fertile field of application in software engineering: due to the structured and unambiguous nature of code, it can be represented naturally with graphs that are derived unambiguously via parsing. Several works have then utilised GNNs to perform analysis over graph representations of programs and obtained significant results. [Palm et al., 2018] achieved convergent algorithms over relational problems. The expressiveness of GNNs has also been the focus of recent research [Sato, 2020]. Regarding NP-Hard problems, neural-symbolic models with an underlying GNN formalization have been proposed to train solvers for the decision variants of the SAT, TSP and graph colouring problems, respectively [Selsam et al., 2019; Prates et al., 2019; Lemos et al., 2019]. This allowed these models to be trained with a single bit of supervision on each instance, with [Selsam et al., 2019; Cameron et al., 2020] being able to extract assignments from the trained model, and [Prates et al., 2019] performing a binary search on the prediction probability to estimate the optimal route cost. [Toenshoff et al., 2019] built an end-to-end framework for dealing with (boolean) constraint satisfaction problems in general, extending the previous works and providing comparisons and performance increases, and [Abboud et al., 2020] have proposed a GNN-based architecture that learns to approximate DNF counting. There has also been work in generative models for combinatorial optimization, such as [You et al., 2019], which generates SAT instances using a graph-based approach.
Conclusions
We presented a review on the relationship between Graph Neural Network (GNN) models and architectures and Neural-Symbolic Computing (NSC). In order to do so, we presented the main recent research results that highlight the potential applications of these related fields both in foundational and applied AI and Computer Science problems. The interplay between the two fields is beneficial to several areas. These range from combinatorial optimization/constraint satisfaction to relational reasoning, which has been the subject of increasing industrial relevance in natural language processing, life sciences, and computer vision and image understanding [Raghavan, 2019; Marcus, 2020]. This is largely due to the fact that many learning tasks can be easily and naturally captured using graph representations, which can be seen as a generalization over the traditional sequential (RNN) and grid-based representations (CNN) in the family of deep learning building blocks. Finally, it is worth mentioning that the principled integration of both methodologies (GNN and NSC) offers a richer alternative to the construction of trustful, explainable and robust AI systems, which is clearly an invaluable research endeavor.
The functions h and g are named the transition function and the output function, respectively. In analogy to a graph convolution layer, the transition function defines a rule for updating vertex representations by aggregating transformations over representations of neighbor vertices.
More specifically, Microsoft's "Deep Program Understanding" research program has used a GNN variant called Gated Graph Sequence Neural Networks [Li et al., 2016] in a large number of applications, including spotting errors, suggesting variable names, code completion, as well as edit representation and automatically applying edits to programs [Yin et al., 2019].

Combinatorial Optimization and Constraint Satisfaction Problems

Many combinatorial optimization problems are relational in structure and thus are prime application targets to GNN-based models [Bengio et al., 2018]. For instance, [Khalil et al., 2017] uses a GNN-like model to embed graphs and use these embeddings in their heuristic search for the Minimum Vertex Cover (MVC), Maximum Cut and Traveling Salesperson (TSP) problems. Regarding end-to-end models, [Kool et al., 2019] trained a transformer-based GNN model to embed TSP answers and extract solutions with an attention-based decoder, while obtaining better performance than previous work. Another line of work used a GCN as a heuristic to a search algorithm, applying this method on four canonical NP-complete problems, namely Maximal Independent Set, MVC, Maximal Clique, and the Boolean Satisfiability Problem (SAT).
It is advisable to read [Lample and Charton, 2020] alongside this critique of its limitations[Davis, 2019]
Acknowledgements

This work is partly supported by CNPq and CAPES, Brazil - Finance Code 001. Moshe Vardi is supported in part by NSF grants IIS-1527668, CCF-1704883, IIS-1830549, DoD MURI grant N00014-20-1-2787, and an award from the Maryland Procurement Office.
References
[Abboud et al., 2020] Ralph Abboud, Ismail Ceylan, and Thomas Lukasiewicz. Learning to reason: Leveraging neural networks for approximate DNF counting. In AAAI, pages 1-6, 2020.
[Arabshahi et al., 2018] Forough Arabshahi, Sameer Singh, and Animashree Anandkumar. Combining symbolic expressions and black-box function evaluations in neural programs. In ICLR, 2018.
[Arabshahi et al., 2019] Forough Arabshahi, Zhichu Lu, Sameer Singh, and Animashree Anandkumar. Memory augmented recursive neural networks. CoRR, abs/1911.01545, 2019.
[Battaglia et al., 2018] Peter Battaglia, Jessica Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinícius Zambaldi, et al. Relational inductive biases, deep learning, and graph networks. CoRR, abs/1806.01261, 2018.
[Bengio et al., 2018] Yoshua Bengio, Andrea Lodi, and Antoine Prouvost. Machine learning for combinatorial optimization: a methodological tour d'horizon. CoRR, abs/1811.06128, 2018.
[Bordes et al., 2011] Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. Learning structured embeddings of knowledge bases. In AAAI, 2011.
[Brockschmidt et al., 2019] Marc Brockschmidt, Miltiadis Allamanis, Alexander Gaunt, and Oleksandr Polozov. Generative code modeling with graphs. In ICLR, 2019.
[Cameron et al., 2020] Chris Cameron, Rex Chen, Jason Hartford, and Kevin Leyton-Brown. Predicting propositional satisfiability via end-to-end learning. In AAAI, 2020.
[Chami et al., 2020] Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher Ré, and Kevin Murphy. Machine learning on graphs: A model and comprehensive taxonomy. CoRR, abs/2005.03675, 2020.
[d'Avila Garcez and Lamb, 2003] Artur d'Avila Garcez and Luís Lamb. Reasoning about time and knowledge in neural symbolic learning systems. In NIPS, pages 921-928, 2003.
[d'Avila Garcez et al., 2009] Artur d'Avila Garcez, Luís C. Lamb, and Dov M. Gabbay. Neural-Symbolic Cognitive Reasoning. Springer, 2009.
[d'Avila Garcez et al., 2015] Artur d'Avila Garcez, Tarek Besold, Luc de Raedt, Peter Földiák, Pascal Hitzler, Thomas Icard, Kai-Uwe Kühnberger, Luís C. Lamb, Risto Miikkulainen, and Daniel Silver. Neural-symbolic learning and reasoning: Contributions and challenges. In AAAI Spring Symposia, 2015.
[d'Avila Garcez et al., 2019] Artur d'Avila Garcez, Marco Gori, Luís C. Lamb, Luciano Serafini, Michael Spranger, and Son Tran. Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning. FLAP, 6(4):611-632, 2019.
[Davis, 2019] Ernest Davis. The use of deep learning for symbolic integration: A review of (Lample and Charton, 2019). CoRR, abs/1912.05752, 2019.
[Devlin et al., 2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pages 4171-4186, 2019.
[Diligenti et al., 2017] Michelangelo Diligenti, Marco Gori, and Claudio Saccà. Semantic-based regularization for learning and inference. Artif. Intell., 244:143-165, 2017.
[Evans and Grefenstette, 2018] Richard Evans and Edward Grefenstette. Learning explanatory rules from noisy data. JAIR, 61:1-64, 2018.
[Galassi et al., 2020] Andrea Galassi, Kristian Kersting, Marco Lippi, Xiaoting Shao, and Paolo Torroni. Neural-symbolic argumentation mining: An argument in favor of deep learning and reasoning. Front. Big Data, 2:52, 2020.
[Garcia and Bruna, 2018] Victor Garcia and Joan Bruna. Few-shot learning with graph neural networks. In ICLR, pages 1-13, 2018.
[Goyal et al., 2019] Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, and Bernhard Schölkopf. Recurrent independent mechanisms. CoRR, abs/1909.10893, 2019.
[Greydanus et al., 2019] Samuel Greydanus, Misko Dzamba, and Jason Yosinski. Hamiltonian neural networks. In NeurIPS, pages 5353-15363, 2019.
[Hitzler et al., 2004] Pascal Hitzler, Steffen Hölldobler, and Anthony K. Seda. Logic programs and connectionist networks. J. Appl. Log., 2(3):245-272, 2004.
[Hochreiter and Schmidhuber, 1997] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[Huang et al., 2017] Qiuyuan Huang, Paul Smolensky, Xiaodong He, Li Deng, and Dapeng Oliver Wu. A neural-symbolic approach to natural language tasks. CoRR, abs/1710.11475, 2017.
[Kahneman et al., 2020] Daniel Kahneman, Francesca Rossi, Geoffrey Hinton, Yoshua Bengio, and Yann LeCun. AAAI20 fireside chat with Daniel Kahneman. https://vimeo.com/390814190?ref=tw-share, 2020. Accessed 23/02/2020.
[Khalil et al., 2017] Elias Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. In NIPS, pages 6348-6358, 2017.
[Kipf and Welling, 2017] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
[Kool et al., 2019] Wouter Kool, Herke van Hoof, and Max Welling. Attention, learn to solve routing problems! In ICLR, 2019.
[Lample and Charton, 2020] Guillaume Lample and François Charton. Deep learning for symbolic mathematics. In ICLR, 2020.
[LeCun et al., 2015] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
[Lemos et al., 2019] Henrique Lemos, Marcelo Prates, Pedro Avelar, and Luís C. Lamb. Graph colouring meets deep learning: Effective graph neural network models for combinatorial problems. In ICTAI, pages 879-885, 2019.
[Li et al., 2016] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In ICLR, 2016.
[Li et al., 2018] Zhuwen Li, Qifeng Chen, and Vladlen Koltun. Combinatorial optimization with graph convolutional networks and guided tree search. In NeurIPS, 2018.
[Manhaeve et al., 2018] Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. DeepProbLog: Neural probabilistic logic programming. In NeurIPS, 2018.
[Mao et al., 2019] Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua Tenenbaum, and Jiajun Wu. The Neuro-Symbolic Concept Learner: Interpreting scenes, words, and sentences from natural supervision. In ICLR, 2019.
[Marcus, 2020] Gary Marcus. The next decade in AI: Four steps towards robust artificial intelligence. CoRR, abs/1801.00631, 2020.
[Palm et al., 2018] Rasmus Palm, Ulrich Paquet, and Ole Winther. Recurrent relational networks. In NeurIPS, pages 3372-3382, 2018.
[Prates et al., 2019] Marcelo Prates, Pedro Avelar, Henrique Lemos, Luís Lamb, and Moshe Vardi. Learning to Solve NP-Complete Problems: A Graph Neural Network for Decision TSP. In AAAI-2019, pages 4731-4738, 2019.
[Raghavan, 2019] Sriram Raghavan. 2020 AI predictions from IBM research. https://www.ibm.com/blogs/research/2019/12/2020-ai-predictions, 2019. Accessed 20/02/2020.
[Rocktäschel and Riedel, 2016] Tim Rocktäschel and Sebastian Riedel. Learning knowledge base inference with neural theorem provers. In AKBC@NAACL-HLT, 2016.
[Santoro et al., 2017] Adam Santoro, David Raposo, David Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Tim Lillicrap. A simple neural network module for relational reasoning. In NIPS, pages 4967-4976, 2017.
[Sato, 2020] Ryoma Sato. A survey on the expressive power of graph neural networks. CoRR, abs/2003.04078, 2020.
[Scarselli et al., 2008] Franco Scarselli, Marco Gori, Ah Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE T Neural Networ, 20(1):61-80, 2008.
[Schlichtkrull et al., 2018] Michael Schlichtkrull, Thomas Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. ESWC, pages 593-607, 2018.
[Selsam et al., 2019] Daniel Selsam, Matthew Lamm, Benedikt Bünz, Percy Liang, Leonardo de Moura, and David L. Dill. Learning a SAT solver from single-bit supervision. In ICLR, pages 1-11, 2019.
[Serafini and d'Avila Garcez, 2016] Luciano Serafini and Artur d'Avila Garcez. Logic tensor networks: Deep learning and logical reasoning from data and knowledge. CoRR, abs/1606.04422, 2016.
[Stokes et al., 2020] Jonathan M. Stokes, Kevin Yang, Kyle Swanson, Wengong Jin, et al. A deep learning approach to antibiotic discovery. Cell, 180, 2020.
[Sutskever and Hinton, 2008] Ilya Sutskever and Geoffrey Hinton. Using matrices to model symbolic relationship. In NIPS, pages 1593-1600, 2008.
[Toenshoff et al., 2019] Jan Toenshoff, Martin Ritzert, Hinrikus Wolf, and Martin Grohe. RUN-CSP: unsupervised learning of message passing networks for binary constraint satisfaction problems. CoRR, abs/1909.08387, 2019.
[Townsend et al., 2019] Joe Townsend, Thomas Chaton, and João Monteiro. Extracting relational explanations from deep neural networks: A survey from a neural-symbolic perspective. IEEE T Neur Net Learn, pages 1-15, 2019.
[van Harmelen and ten Teije, 2019] Frank van Harmelen and Annette ten Teije. A boxology of design patterns for hybrid learning and reasoning systems. J Web Eng, 18(1):97-124, 2019.
[van Steenkiste et al., 2018] Sjoerd van Steenkiste, Michael Chang, Klaus Greff, and Jürgen Schmidhuber. Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. In ICLR, 2018.
[Veličković et al., 2018] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In ICLR, 2018.
[Vinyals et al., 2015] Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPS, pages 2692-2700, 2015.
[Wei, 2019] Guo-Wei Wei. Protein structure prediction beyond AlphaFold. Nature Mach. Intell., 1:336-337, 2019.
[Wu et al., 2019] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A comprehensive survey on graph neural networks. CoRR, abs/1901.00596, 2019.
[Yao et al., 2019] Liang Yao, Chengsheng Mao, and Yuan Luo. Graph convolutional networks for text classification. In AAAI, pages 7370-7377, 2019.
[Yin et al., 2019] Pengcheng Yin, Graham Neubig, Miltiadis Allamanis, Alexander Gaunt, and Marc Brockschmidt. Learning to represent edits. In ICLR, 2019.
[You et al., 2019] Jiaxuan You, Haoze Wu, Clark Barrett, Raghuram Ramanujan, and Jure Leskovec. G2SAT: learning to generate SAT formulas. In NeurIPS, pages 10552-10563, 2019.
[Zhang et al., 2018] Ziwei Zhang, Peng Cui, and Wenwu Zhu. Deep learning on graphs: A survey. CoRR, abs/1812.04202, 2018.
| [] |
[
"Semantic Types, Lexical Sorts and Classifiers",
"Semantic Types, Lexical Sorts and Classifiers"
] | [
"Bruno Mery \nUniversité de Bordeaux\nIRIT-CNRS\nLABRI-CNRS\n\n",
"Christian Retoré \nUniversité de Bordeaux\nIRIT-CNRS\nLABRI-CNRS\n\n"
] | [
"Université de Bordeaux\nIRIT-CNRS\nLABRI-CNRS\n",
"Université de Bordeaux\nIRIT-CNRS\nLABRI-CNRS\n"
] | [] | We propose a cognitively and linguistically motivated set of sorts for lexical semantics in a compositional setting: the classifiers in languages that do have such pronouns. These sorts are needed to include lexical considerations in a semantical analyser such as Boxer or Grail. Indeed, all proposed lexical extensions of usual Montague semantics to model restriction of selection, felicitous and infelicitous copredication require a rich and refined type system whose base types are the lexical sorts, the basis of the many-sorted logic in which semantical representations of sentences are stated. However, none of those approaches define precisely the actual base types or sorts to be used in the lexicon. In this article, we shall discuss some of the options commonly adopted by researchers in formal lexical semantics, and defend the view that classifiers in the languages which have such pronouns are an appealing solution, both linguistically and cognitively motivated.⋆ This research has benefitted from grants and inputs by ANR Polynomie, Project Itipy (Région Aquitaine), and CoLAn. We are indebted to the reviewers for their input. | null | [
"https://arxiv.org/pdf/1312.3168v1.pdf"
] | 13,975,943 | 1312.3168 | eacf4637a22922a84a8827a19c9a6a5b33298ce4 |
Semantic Types, Lexical Sorts and Classifiers
11 Dec 2013
Bruno Mery
Université de Bordeaux
IRIT-CNRS
LABRI-CNRS
Christian Retoré
Université de Bordeaux
IRIT-CNRS
LABRI-CNRS
Semantic Types, Lexical Sorts and Classifiers
11 Dec 2013
We propose a cognitively and linguistically motivated set of sorts for lexical semantics in a compositional setting: the classifiers in languages that do have such pronouns. These sorts are needed to include lexical considerations in a semantical analyser such as Boxer or Grail. Indeed, all proposed lexical extensions of usual Montague semantics to model restriction of selection, felicitous and infelicitous copredication require a rich and refined type system whose base types are the lexical sorts, the basis of the many-sorted logic in which semantical representations of sentences are stated. However, none of those approaches define precisely the actual base types or sorts to be used in the lexicon. In this article, we shall discuss some of the options commonly adopted by researchers in formal lexical semantics, and defend the view that classifiers in the languages which have such pronouns are an appealing solution, both linguistically and cognitively motivated.⋆ This research has benefitted from grants and inputs by ANR Polynomie, Project Itipy (Région Aquitaine), and CoLAn. We are indebted to the reviewers for their input.
Introduction
One of the most difficult aspects of the automated processing of human language is the phenomenon of polysemy, the ability of words to be used with different meanings in different contexts. Relatively recent studies, such as Pustejovsky (1995), have held the view that polysemy is a feature that enables creativity in linguistic acts, and that the meaning of words might be deduced by the application of generative mechanisms from their contexts, via processes refining semantic composition. Instead of thinking of all words denoting individual objects as sharing the same semantic type (of entities), advanced lexical semantics could classify them into lexical sorts according to their contextual behaviour, and a process of type-checking could infer the correct meaning from any combination of predicate and object.
For the computational linguist, the problem of lexical semantics thus becomes twofold:
1. How does the semantic composition have to be modified?
2. How should the base types, the lexical sorts, be defined?
The first point has been the subject of many different and related proposals, including the authors' own framework. This paper is concerned with the second part of the problem, and proposes a linguistically-motivated solution.
Including Lexical Considerations into Syntactical and Semantical Parsers
There are some wide-coverage analysers that produce complete semantic analyses expressed as logical formulae, like Boxer by Johan Bos (English) and Grail by Richard Moot (spoken Dutch, written French). In both cases, the grammar, that is, a lexicon mapping each word to several semantic categories, is statistically acquired from annotated corpora. It thus has up to one hundred categories per word, hence the parser first computes the most likely sequences of categories and parses the n best. See Bos (2008); Moot (2010b).
In order to compute semantic representations, both use categorial grammars (multimodal or combinatory CG), and this is not a coincidence. Indeed, categorial grammars allow easy transformation from syntactic categories to semantic types and from syntactic analyses to semantic analyses.
Both analysers, as well as many other practical and theoretical frameworks, rely on principles of semantical composition along with the tradition of Montague Grammar, specified in Montague (1974) and refined many times since.
Montague Grammar assumes that words correspond to terms of the simply-typed λ-calculus, with applications and abstractions given by the syntactic structure of the utterance, sentence or discourse. Those terms are constructed in a type system that uses two base types, t for truth-valued formulae and e for entities. In that way, all single entities share the same sort, e.
Some frameworks and analysers also add the base type s, for indices of possible worlds, and the abstract sort v for events. However, linguistic entities still share the single sort e.
Considerations of lexical semantics provide compelling arguments for different base types. Specifically, the single sort e for entities can be split into several sorts, refining the type system. Consider:
(1) a. The hound barked.
b. * The vase barked.
c. ? The sergeant barked.
Restrictions of selection (which noun phrases can, according to dictionaries, be arguments of specific verbs) dictate that (1a) is correct, (1b) is difficult to admit without a clear context, and (1c) is acceptable, but indicates a common metaphorical usage of bark, implying that the person referred to has certain dog-like qualities.
If the distinction is to be made by an analyser at the stage of semantic composition, using a single sort e for all entities does not make it possible to distinguish between these syntactically similar sentences. Using different sorts for animate and inanimate entities (as commonly used in dictionary definitions) will licence (1a) and reject (1b) 1 .
With additional distinctions between, in this case, dogs and humans, and a flexible typing system that detects type clashes and licenses certain modifications to the typing of lexical entities, the metaphorical usage of the verb in (1c) can be detected and identified.
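To make the contrast concrete, here is a small Python sketch (our own illustration, not the implementation of any of the analysers discussed; the sort names Dog, Artifact and Human and the tables below are assumptions chosen only to mirror examples (1a-c)). With a single sort e every application type-checks; with finer sorts the checker licences (1a), rejects (1b) as a clash, and flags (1c) as a metaphorical reading.

```python
# Hypothetical illustration of selectional restrictions via lexical sorts.
# Sort names and lexical entries are invented to mirror examples (1a-c).

SORTS = {"hound": "Dog", "vase": "Artifact", "sergeant": "Human"}

# 'bark' selects for Dog subjects; Human subjects are licensed metaphorically.
BARK_SELECTS = {"direct": {"Dog"}, "metaphorical": {"Human"}}

def check_bark(subject):
    sort = SORTS[subject]              # with a single sort e, no check is possible
    if sort in BARK_SELECTS["direct"]:
        return "felicitous"
    if sort in BARK_SELECTS["metaphorical"]:
        return "metaphorical reading"
    return "type clash"                # e.g. 'The vase barked.'

for noun in ["hound", "vase", "sergeant"]:
    print(noun, "->", check_bark(noun))
# hound -> felicitous, vase -> type clash, sergeant -> metaphorical reading
```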
Lexical semantics also helps with the common problem of word sense disambiguation. A common use of words pertaining to organisations such as banks, schools, or newspapers is to represent some unnamed person that is responsible for the conduct of that organisation. Consider:
(2) The bank has covered for the extra expenses.
(2) means that someone has taken the liberty mentioned. Distinguishing between the normal use of the word (as an organisation) and this specific use (as an agent within that organisation) is only possible if the semantic system has a means to set them apart, and a way to accomplish this is to have Organisations and Agents as two different sorts of entities in the type system.
The main issue is that this rich ontological type system has not been detailed, and is very much not trivial to construct, let alone that the general composition rules are missing from the original formulation.
Rich Types and Lexical Modifiers
The authors have defined a system for the inclusion of lexical semantics data (see Mery et al. (2007), Bassac et al. (2010), Mery (2011) and Mery & Retoré (2013)), and some of those results have been implemented in a semantics analyser. Instead of the single sort e, we make use of many different sorts for entities that can distinguish between different linguistic behaviours.
Formally, this framework uses a version of the type logic with n sorts, TY n , detailed in Muskens (1996b). Without detailing functionalities outside the scope of this contribution, those n sorts are used to type the different classes of entities of the lexicon. When a type clash is detected, the analyser searches for available modifiers provided by the logical terms that would allow the analysis to proceed, and makes a note of the lexical operations used in order to compute the actual meaning of the sentence. For instance, the following sentences refer to different facets of the entity bank (all pertaining to the finance-related concept), identifiable by the predicates used: to such sentences, and strategies to deal with those and compute a correct semantics with the same compositional analysis. In order to recognise that such a special treatment is needed, however, the system still needs to detect that the use is non-standard; it is as simple as detecting a type clash
(3) a. The bank is closed today.
b. The bank is at the next corner.
c. The bank has gone mad.
(3a) refers to one of the most common uses of the word, as an Organisation, its base type. The type system maintains inferences for commonly used modifications; a very common one is to refer to a physical location where the organisation is embodied, and thus the analyser would shift the type of the term to Location in (3b). In (3c), the predicate should apply to a person, and thus the type system would look for a way to associate a person to the organisation referred to. Our framework makes use of abstraction over types (and second-order λ-calculus) in order to keep track of the lexical types involved, and of constraints and modifications over those types. With hand-typed lexical entries and sorts defined over a restricted domain, this approach has been implemented and tested. However, we do not have a type system covering an entire language.
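As a schematic rendering only (our own Python simplification, not the authors' λ-term machinery; the predicate names and the modifiers loc_of and agent_of are invented for the illustration), a lexical entry carrying optional modifiers towards other sorts can resolve the three uses of bank in (3a-c):

```python
# Schematic rendering of an entry with facet-shifting modifiers, following the
# idea of examples (3a-c); sort, predicate and modifier names are illustrative.

LEXICON = {
    "bank": {
        "sort": "Organisation",
        "modifiers": {"Location": "loc_of", "Person": "agent_of"},
    },
}

PREDICATES = {  # each predicate records the sort of argument it selects
    "is_closed_today": "Organisation",
    "is_at_next_corner": "Location",
    "has_gone_mad": "Person",
}

def apply_pred(pred, word):
    entry = LEXICON[word]
    wanted = PREDICATES[pred]
    if entry["sort"] == wanted:
        return f"{pred}({word})"
    shift = entry["modifiers"].get(wanted)
    if shift:                            # insert the coercion and record it
        return f"{pred}({shift}({word}))"
    return "type clash"

for p in PREDICATES:
    print(apply_pred(p, "bank"))
# is_closed_today(bank), is_at_next_corner(loc_of(bank)), has_gone_mad(agent_of(bank))
```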
As an abridged example of the analyser, consider the sample lexical entry below:
Lexical item        Main λ-term             Modifiers
Birmingham          Birmingham^T            Id_T : T → T,  t_2 : T → P,  t_3 : T → Pl
is a huge place     huge_place : Pl → t
voted               voted : P → t

where the base types are defined as follows: T (town), P (people), Pl (place).

The sentence:
(4) Birmingham is a huge place

results in a type mismatch (the predicate is of type Pl → t, the argument of type T):

huge_place^{Pl→t} (Birmingham^T)

The lexical modifier t_3^{T→Pl}, which turns a town (T) into a place (Pl), is inserted, resulting in:

huge_place^{Pl→t} (t_3^{T→Pl} Birmingham^T)
Considering:
(5) Birmingham is a huge place and voted (Labour).
In order to parse the co-predication correctly, we use a polymorphic conjunction &_Π. After application and reduction, this yields the following predicate:

Λξ λx^ξ λf^{ξ→α} λg^{ξ→β}. (and^{t→t→t} (huge_place (f x)) (voted (g x)))

Applying the argument of type T and the correct modifiers t_2 and t_3, we finally obtain:

(and (huge_place^{Pl→t} (t_3^{T→Pl} Birmingham^T)) (voted^{P→t} (t_2^{T→P} Birmingham^T)))
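The derivation can be emulated with a short script; this is only a sketch of the mechanism (the real analyser manipulates typed λ-terms with type abstraction, not strings and dictionaries), reusing the entry for Birmingham and the modifiers t_2 and t_3 given above.

```python
# Sketch of the coercion mechanism behind examples (4) and (5).
# Types and modifiers mirror the lexical entry for "Birmingham" given above.

ENTRY = {"term": "Birmingham", "type": "T",
         "modifiers": {"T": "Id_T", "P": "t2", "Pl": "t3"}}

PREDS = {"huge_place": "Pl", "voted": "P"}   # argument type of each predicate

def coerce(entry, wanted):
    """Return a term of the wanted type, inserting a lexical modifier if needed."""
    if entry["type"] == wanted:
        return entry["term"]
    mod = entry["modifiers"].get(wanted)
    if mod is None:
        raise TypeError(f"no modifier {entry['type']} -> {wanted}")
    return f"{mod}({entry['term']})"

def predicate(pred, entry):
    return f"{pred}({coerce(entry, PREDS[pred])})"

# (4) Birmingham is a huge place -> the modifier t3 : T -> Pl is inserted
print(predicate("huge_place", ENTRY))          # huge_place(t3(Birmingham))

# (5) co-predication: the polymorphic conjunction lets each conjunct pick
# its own coercion of the *same* argument
print("and(", predicate("huge_place", ENTRY), ",", predicate("voted", ENTRY), ")")
# and( huge_place(t3(Birmingham)) , voted(t2(Birmingham)) )
```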
The Difference between our Proposal and related Formulations
There are several related proposals devoted to type-driven lexical disambiguation that share many characteristics, including works by Pustejovsky, Asher, Luo and Bekki, started in Pustejovsky & Asher (2000), elaborated in Asher & Pustejovsky (2005), extensively developed in Asher (2011), and the subject of continuing work in Xue & Luo (2012) and Bekki & Asher (2012).
We are indebted to the authors of this proposal and many others. However, our formulation differs from the others in a significant way.
Ontological Types and Meaning Shifts: In Asher (2011) and other proposals, the base types are envisioned as an ontological hierarchy that derives a language-independent system of transfers of meaning. The different possible senses associated to a word are largely dependent on conceptual relations made available by its type.
Lexical-based Transformations: In our model, while base types distinguish between different sorts and drive the disambiguation process, the availability of transformations from one sort to another is defined at the lexical level, and depends on the language. It is thus possible to define idiosyncrasies and keep a closer rein on complex linguistic phenomena. This does not exclude having some type-level transformations for practical purposes, specifically for the factorisation of common meaning shifts (e.g. transformations that apply to all vehicles also apply to cars).
Results on a restricted Domain
As observed by a reviewer, our model does not need a wide-coverage generalist semantic lexicon to be tested, and we actually made some experiments for a particular question (in fulfilment of a regional project, Itipy): the reconstruction of itineraries from a historical (XVII-XX century, mainly XIX) corpus of travel stories through the Pyrenees of 576,334 words. See Lefeuvre et al. (2012a,b) for details.
For such a task the grammar ought to be a wide-coverage one, including a basic compositional semantics without sorts nor any lexical information. We do have such a grammar, which has been automatically extracted from annotated corpora: it is a wide-coverage multimodal categorial grammar, that is, a lexicalised grammar with an easy interface with compositional semantics à la Montague. In the absence of manually typed semantic information, the grammar only includes an automatically constructed semantic lexicon with semantic terms that only depict the argument structure; e.g., give has λs^e λo^e λd^e. give(s, o, d) as its semantics. The actual implementation detailed in Moot (2010b,a) uses λ-DRSs of Discourse Representation Theory (Kamp & Reyle, 1993; Muskens, 1996a) rather than plain λ-terms in order to handle discursive phenomena.
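For illustration only (a minimal sketch of ours, not the Grail implementation), such an argument-structure-only entry can be rendered as curried functions that merely build the predicate-argument structure:

```python
# Minimal illustration of an argument-structure-only entry in the automatically
# constructed semantic lexicon (no sorts, no lexical refinement): the verb only
# records how its arguments slot into a predicate-argument structure.
give = lambda s: lambda o: lambda d: f"give({s}, {o}, {d})"

print(give("john")("a_book")("mary"))   # give(john, a_book, mary)
```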
As the task is to provide a semantic representation of the paths traversed or described by the authors, we focused on spatial and temporal semantics. Temporal semantics is handled by operators à la Verkuyl, which have little to do with lexical semantics, so we shall not discuss them in the present paper. But the semantics of space is modelled by the very framework described in the present paper.
As expected, the sorts or base types are easier to find for a specific domain or task. For space and motion verbs we obviously have two sorts, namely paths and regions, the latter being subdivided into villages, mountains, and larger areas like mountain chains. Paths did not need to be further divided, since by the time the stories in our corpus were written people only walked on paths (that could be called trails nowadays). Nowadays, for analysing travel stories, one would possibly also consider motorways, roads, trails, etc.
The principal coercion we study in this setting for the analysis of itineraries is the phenomenon known as fictive motion (Talmy, 1999). One can say "the path descends for two hours". In order to interpret such a sentence, one needs to consider someone who would follow the path, although there might be no one actually following the path, and it is often difficult to tell apart whether the narrator does follow the path or not. Such constructions with verbs like "descendre, entrer, serpenter, ..." are quite common in our corpus, as the examples below show:

(6) (...) cette route qui monte sans cesse pendant deux lieues (...)
'(...) this road which climbs incessantly for two miles (...)'

(7) (...) où les routes de Lux et de Cauterets se séparent. Celle de Lux entre dans une gorge qui vous mène au fond d'un précipice et traverse le gave de Pau.
'(...) where the roads to Lux and to Cauterets branch off. The one to Lux enters a gorge which leads you to the bottom of a precipice and traverses the Gave de Pau.'

Our syntactic and semantic parser successfully analyses such examples, by considering a coercion that turns an immobile object like a road into an object of type path that can be followed. A coercion introduced by a motion verb that allows fictive motion, e.g. "descendre" (descend), constructs a formula (a DRS) that says that if an individual follows the path then he goes down. The formula introduces such an individual, bound by an existential quantifier, and it is part of discourse analysis to find out whether it is a virtual traveller or whether the character of the travel story actually followed the path (Moot et al., 2011a,b). With a handwritten lexicon designed for a more precise analysis of spatial semantics, our framework worked successfully, i.e., it automatically obtained the proper readings (and rejected the infelicitous ones when motion events are applied to improper spatial entities).
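As an illustration only (our own schematic Python code; the sort and predicate names are assumptions, and the actual system builds DRSs rather than strings), the fictive-motion reading amounts to coercing a static spatial entity into a path and introducing an existentially bound virtual traveller:

```python
# Schematic construction of a fictive-motion reading, in the spirit of the
# coercion described above; predicate and sort names are illustrative only.

def fictive_motion(noun, noun_sort, motion_verb):
    if noun_sort == "Path":
        path = noun
    elif noun_sort in {"Road", "Region"}:        # coercion: view the object as a path
        path = f"path_of({noun})"
    else:
        raise TypeError(f"{motion_verb} cannot apply to sort {noun_sort}")
    # Introduce a virtual traveller x, bound existentially; discourse analysis
    # must later decide whether x is the narrator or merely virtual.
    return f"exists x. follow(x, {path}) -> {motion_verb}(x)"

print(fictive_motion("the_road", "Road", "descend"))
# exists x. follow(x, path_of(the_road)) -> descend(x)
```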
The Granularity of the Type System
The obstacle to our framework, and to other related proposals, is thus the building of the system of sorts for entities. There is no real consensus on the criteria to be followed. We chose to dismiss the claims that such an endeavour is simply impossible, that compositional semantics should stick to the single-sort Montagovian e, and that any refinements should wait for a phase of pragmatics or interpretation left as an exercise to the reader, claims made in very blunt terms by Fodor & Lepore (1998) and more reasonably by Blutner (2002), and refuted in Pustejovsky (1998) and Wilks (2001). We assume that a rich lexicon and a refined type system are helpful for a number of theoretical and practical applications.
However, in those cases, the type system is more often than not simply assumed. James Pustejovsky has described how it should behave in a number of details, in publications such as Pustejovsky (2001). It has never been detailed beyond the top level and some examples; as it was outlined, the system was a hierarchical ontology comprising most concepts expressed in natural language, with at least hundreds of nodes. The other proposals range between a dozen high-level sorts (animated, physical, abstract...) and every common noun of every language (Xue & Luo (2012)), and even every possible formula with a single free variable (as formulae are derived from types, that last definition is circular). Some others, such as Cooper (2007), propose using a record type system that does away neatly with the granularity problem, as record types are redefined dynamically 2 ; or even deliberately vague approaches, arguing that a definite answer to that question would be self-defeating.
Practical Issues with the Controversy
While leaving the issue open is philosophically appealing, as the possibility of a definition of an actual, single metalinguistic ontology contradicts existential principles, there is a very compelling reason to pursue the matter: providing an actual implementation of a compositional lexical semantic analysis. Partial implementations, including ours illustrated in section 2, exist, but without a comprehensive and well-defined type system, they are largely prototypal and rely on a few hand-written types. They do prove the viability of the analysis and the interest for word sense disambiguation, but they cannot, up to now, provide a really useful analysis outside the scope of very specific domains. Large-scale generic NLP applications remain out of reach. Manual or semi-automated annotations are difficult, as they have either to be restricted to a very specific domain where it is possible to define base types comprehensively, or to be few in number and thus vague and error-prone. Choices have to be made, not in order to define the essence of lexical meaning, but simply to provide testable, falsifiable models and software that can be refined for actually useful applications.
This does not mean that a definite set of sorts can or should be devised once and for all, but a linguistically-motivated system, adaptable and mutable, would be an important step forward.
Type Granularity and the Classifier Systems
Sorts should represent the different classes of objects available to a competent speaker of the language. That two words of the same syntactic category have different sorts should mark a strong difference of semantic behaviour.
Our type system should be useable, with a computationally reasonable number of sorts. It should nevertheless be complex enough to model the lexical differences we are looking for.
In short, the set of sorts used as base types should be small in cardinality, with respect to the lexicon; large in the scope of lexical differences covered, if not complete; linguistically and cognitively motivated; adaptable, and immune to idiosyncrasy.
There have been many studies of some linguistic features that can prove interesting candidates for such a set, including grammatical attributes (gender, noun classes...) and meta-linguistic classes proposed by Goddard & Wierzbicka (2007). We have chosen to illustrate some of the properties of the classifier systems, a class of pronominal features common to several language families including many Asian languages and every Sign Language.
The Case of the Classifier Systems
A large class of languages employs a certain category of syntactic items known as classifiers. They are used routinely for quantificational operations, most commonly for counting individuals and measuring mass nouns. Classifiers are also widely used in Sign Languages (several variations) for analogous purposes.
Classifiers are interesting, as they are used to denote categories of physical objects or abstract concepts and approximate a linguistic classification of entities. The fact that they arise spontaneously in different and wide-reaching language families, their variety and their coverage make them good candidates for base types. Classifiers are often present in many Asian languages (Chinese, Japanese, Korean, Vietnamese, Malay, Burmese, Thai, Hmong, Bengali, Munda), in some Amerindian and West African languages and in all Sign Languages. They are almost absent from Indo-European languages; in English, a trace of a classifier is "head" in the expression "forty heads of cattle", where one can thereafter use "head(s)" to refer to some of them.
They are used as pronouns for a class of nouns which is both linguistically and ontologically motivated. They differ from noun classes in the sense that there are many more classifiers (200-400) than noun classes (≤20), the latter being used for inflectional morphology and agreement. Several classifiers may be used for a single noun, depending on the relevant reading. Classifiers are especially developed and refined for physical objects and can often stand alone with the meaning of a generic object of their class; some nouns do not have a classifier, in which case the noun itself may be used as a classifier.
The notions conveyed by classifiers differ somewhat from language to language. For instance, in Chinese, classifiers can be used to count individuals, measures, both, or neither (see Li XuPing (2011) for details), the latter case being used to denote a similarity with the referred class. There are some linguistic and cultural idiosyncrasies. However, the main features of the system are common to all languages.
Classifiers in French Sign Language
Classifiers in sign languages (see Zwitserlood (2012)) are used in the language as distinct pronouns, each of them applying to cognitively related nouns, in the sense that their shapes evoke the visual shape of these entities or the way they are used or handled. There are many of them for material objects, human beings and animals, while ideas and abstract objects are gathered into wider classes. Classifiers in sign languages are hand shapes, which are used to express physical properties, size, position, and also the way the classified object moves. Here are a few examples, from French Sign Language (LSF):
Hand shape                 Classifier of ...
horizontal M hand shape    flat object, car, bus, train (not bike)
vertical M hand shape      bike, horse, fish
Y hand shape               plane
C hand shape               small round or cylindrical object
forefinger up              person
fist                       head of a person
4 hand shape               a line of people
three crouched fingers     small animal

The classifier used for a given object depends on what is said about the noun / entity represented by the classifier. For instance, a line of n people waiting to be served at the bakery may be represented by n forefingers, in case, for example, these n people are individualised and one wants to say they were discussing, or with the 4 hand shape if one wants to say they were waiting, they were numerous, etc.
Some linguists, such as Cuxac (2000), call them pro-forms rather than classifiers. Pro-forms are analogous to pro-nouns: they stand for the form (shape) of the object; they refer to an object via its shape or part of its shape, i.e. they depend on the aspect that is being referred to, just like the restrictions of selection in lexical semantics. Polysemic mechanisms also apply to pro-forms, as different pro-forms can be used to refer to different facets of the same lexeme: e.g., a car might be referred to using a C shape (cylinder) pro-form to indicate that it is thought of as a container, or using an M shape (flat, horizontal hand, palm down) to indicate a moving vehicle.
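A minimal sketch (ours; the aspect labels are assumptions) of this context-dependent choice of pro-form, which mirrors the way a predicate selects a facet of a word:

```python
# Illustrative sketch: choosing an LSF pro-form for a noun according to the
# aspect that the predication targets, mirroring restriction of selection.
PRO_FORMS = {"car": {"container": "C shape", "moving vehicle": "horizontal M shape"}}

def pro_form(noun, aspect):
    forms = PRO_FORMS[noun]
    if aspect not in forms:
        raise ValueError(f"no pro-form presents {noun} under the aspect '{aspect}'")
    return forms[aspect]

print(pro_form("car", "container"))        # C shape (e.g. loading the car)
print(pro_form("car", "moving vehicle"))   # horizontal M shape (e.g. the car drives off)
```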
Classifiers of sign languages are also used to identify how many objects one speaks about.
Classifiers in Japanese
In Japanese, the classifiers are used as counters, in a syntactic category formally known as "numerical auxiliaries". They are always used in conjunction with a numeral, or a pronoun referring to a numeral:

(8) 'How many men are there?'
In (8), Nin is the classifier for people. The rest of the sentence makes clear that we are referring to a specific subclass, men. Japanese classifiers organise a hierarchy of sorts among the lexical entities. Children or people unfamiliar with the language can get by with a dozen common classifiers, mostly used as generic classes. Competent speakers of the language are expected to use the correct classifiers in a list comprising about a hundred entries. There are also a few hundred classifiers used only in specific situations such as restricted trades or professions, or ritualistic settings. Finally, classifiers can be generated from common nouns as a creative, non-lexical use of the word.
Examples of classifiers in that respect include:
Generic classifiers
- Tsu: empty semantic content, used to mean any object. Commonly translated as "thing".
- Nin: people (human).
- Order (Ban), frequency (Kai), amount of time in minutes, hours, days, etc.
- Hai: measure. Used to mean "x units of" anything that is a mass concept and is presented in a container (bottles of water, bowls of rice, cups of tea, etc.).

Common classifiers
- Mai: flat or slim objects, including paper, stamps, some articles of clothing, etc.
- Dai: vehicles, machines, appliances.
- Ko: small things (such as dice, keys, pins) or unspecified things (their classifier is not known to the speaker or does not exist).
- Hon: long and thin objects, such as pens, bottles, but also rivers, telephone calls (if they take a long time), etc.

Specialised classifiers
- Bi: fritter and small shrimps (for fishmongers).
- Koma: frames (for comic strip editors).
A complete discussion of the classifier system of Japanese or any other language falls outside the scope of this publication. What we want to illustrate is that it provides a linguistically sound classification of entities, applicable to any entity in the language (anything that can be referred to by a pronoun), and derived from cognitive categories reflected by the etymology of the individual classifiers. In some cases, the classifiers are similar to words used in languages that do not have a complete classifier system, such as the English head for units of cattle (the counter Tô for cattle and large animals is the character denoting "head"). In others, the metaphorical reasoning behind the lexical category is apparent (Hon, the character for "book" and "root", is used to count long things, including objects that are physically long, rivers and coasts that have a similar shape on a map, and abstract things that take a long time such as calls, movies, tennis matches...).
The classifier system is very obviously the result of language evolution. In each language concerned, many classifiers have a different history (linguists have argued that the classifier system in Japanese, as well as in Korean and other languages of the Asia-Pacific region, has been heavily influenced by Chinese, see T'sou (2001) for details). However, the grammatical need to have a categorisation of entities in order for nouns to be countable or measurable has produced classes that share similar characteristics, suggesting that they are derived from natural observation of their salient features. In other words, even if classifiers are not commonly used in linguistics to denote anything other than numerical auxiliaries, we think they provide good candidates for a type system of the granularity we are interested in.
Moreover, classifiers can have a behaviour similar to lexical sorts in formal lexical semantics. Entities with the same denotation can have different classifiers if they are used in different contexts. Nin (people) can be used to count persons in many cases, but Mei (names) will have to be used in cases where the situation calls for dignity and formality. Hai (full container) can be used to measure countable nouns, but also boats in a dismissive way (as a populist might refer to "a shipload of migrants"). Inapplicable classifiers can be used for metaphoric usages, puns, or obscure references to the particular etymology of a word or character. The overly obsequious humility of a character might be indicated by his use of the counter for small animals (rather than people) for himself; for other persons, this is considered a grave insult (often translated as "I am an unworthy insect" or "You are a mere ant to me").
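To suggest how such an inventory could seed a type system, here is a small hypothetical mapping in Python (the classifier glosses follow the list above, while the nouns and the register-sensitive choice are our own examples, not data from the paper):

```python
# Hypothetical sketch: deriving candidate lexical sorts from Japanese
# classifiers, with context-sensitive classifier (hence sort) selection.

CLASSIFIER_SORT = {
    "nin": "Person", "mei": "Person(formal)", "dai": "Machine",
    "hon": "LongThin", "mai": "FlatThin", "hai": "Measure", "tsu": "Thing",
}

NOUN_CLASSIFIERS = {          # several classifiers may fit one noun
    "student": ["nin", "mei"],
    "car":     ["dai"],
    "river":   ["hon"],
    "ticket":  ["mai"],
}

def base_sorts(noun):
    """Candidate sorts for a noun, read off from its admissible classifiers."""
    return [CLASSIFIER_SORT[c] for c in NOUN_CLASSIFIERS[noun]]

def pick_classifier(noun, register="neutral"):
    options = NOUN_CLASSIFIERS[noun]
    if register == "formal" and "mei" in options:   # formality prefers 'mei'
        return "mei"
    return options[0]

print(base_sorts("student"))          # ['Person', 'Person(formal)']
print(pick_classifier("student", "formal"))   # mei
print(base_sorts("river"))            # ['LongThin']
```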
Classifiers as Base Types: Linguistic or Cognitive Choice?
What is pleasant in the choice of classifiers as base types is that they are natural both from a cognitive and from a linguistic viewpoint. They definitely are linguistic objects, since they are part of the language, being independent morphemes (words or signs). However, these morphemes represent nouns, or, more precisely, refer to the relevant aspect of the noun for a particular predicate (adjective or verb); this is the reason why several classifiers are possible for a given object. Thus they also gather objects (rather than words) that resemble each other as far as a given predicate is applied to them, and this other aspect is more cognitive than linguistic.
Clearly, the precise classifier system depends on the language, but such systems obey some common general properties, which suggests that the classifier system is cognitively motivated. An intriguing common property is that physical entities that a speaker interacts with have a very precise system of classifiers, with sub-classifiers (i.e., a classifier being more specific than another), thus providing a kind of ontology in the language. For example, human beings and animals have classifiers, and there is a richer variety of classifiers for animals usual and closer to the human species: for instance, there is a specific classifier in French Sign Language for small animals (rabbits, rats, ...). Although it could seem natural for sign languages, which are visual and gestural, that physical entities have very refined classifier systems, since signs recall the visual aspects of objects and the way we handle them, it is surprising that the Asian classifier systems are actually as rich for physical objects as the one of French Sign Language. From what we read, it seems that all classifier systems represent physical objects fairly precisely.
For this reason we think that the classifier system is halfway between a cognitively motivated set of sorts and a linguistic system. It is thus a good answer to our initial practical question: what should the base types be in compositional semantics if one wishes to include some lexical semantics (e.g. to limit ambiguities) in a semantic parser?
We propose building, for use by the existing analysers for syntax and semantics, a system of sorts based on the observed classifier systems and adapted to the target languages (English, French, ...). The common use of the classifier systems indicates that they have a reasonable granularity. The classifier systems also have some limited redundancy and specialisation, which is included in our system as lexical modifiers indicating hyponymy and hyperonymy relations between sorts.
Integrating Base Types in our Lexicon
Our system requires base types in order to describe lexical sorts, that is, classes of entities that behave differently from one another as semantic units. These sorts are used to categorise nouns that refer to individuals, and form the base types of our hierarchy; predicates, action nouns, adverbs and adjectives are defined by complex or functional types built from those sorts.
We have seen that classifiers have many desirable qualities in the description of such classes, specifically as they apply to individuals. The coverage provided is extensive, and the classification is linguistically motivated; some classifiers might have an archaic origin, or other peculiar features that make them strongly idiosyncratic, but the strength of our system lies in the accurate representation of those idiosyncrasies, and we think classifiers provide a sound entry point for the classification necessary in our lexicon.
Our type-theoretical model of lexical semantics is already implemented in analysers for syntax and semantics based on refinements of Montague Grammar and categorial grammars, and has proven useful for the study of several specific linguistic issues, using restricted, hand-typed lexica. A first system being tested uses different sorts for regions, paths and times, as well as a fictive traveller, to analyse itineraries in a specific corpus of travel stories, as illustrated in section 2. The devising of a complete type system for each of the target languages, and thus the definition of a wide-coverage classification of entities into sorts, is a necessity for the next step: the completion of the lexicon and its semantics.
The base types, and the semantics for the transformations necessary for our approach, can be obtained by the following methods or a combination thereof:
1. by statistical means (this is, however, a very difficult issue even with a very simple type system; see Zettlemoyer & Collins (2009) for a discussion);
2. by hand (this is possible for restricted domains);
3. by derivation from other linguistic data.
For that last method, we believe that the classifier systems used in various languages present the properties we would expect from such a type system. We propose to use the classifier systems as a template for classifying sorts in the target language, and are currently designing tests in order to confirm that such categories are identified as such by speakers of the language. For those languages that do not have classifiers, we are considering the adaptation of a classifier system of a language that does. Finally, if the kind of semantic analysis we want to perform is oriented towards some sorts, it is possible to use both classifiers and specific sorts.
1 This does not imply that sentences such as (1b) should never receive any semantic analysis. There are some contexts (such as fairy tales or fantasy) that can give meaning to such sentences, and strategies to deal with those and compute a correct semantics with the same compositional analysis. In order to recognise that such a special treatment is needed, however, the system still needs to detect that the use is non-standard; it is as simple as detecting a type clash.
2 However, the inclusive definition of the record type system places it beyond classical type theory, which necessitates further adaptation in the logical framework.
References
Asher, N. (2011). Lexical Meaning in Context: a Web of Words. Cambridge University Press.
Asher, N., & Pustejovsky, J. (2005). Word Meaning and Commonsense Metaphysics. Semantics Archive.
Bassac, C., Mery, B., & Retoré, C. (2010). Towards a Type-Theoretical Account of Lexical Semantics. Journal of Language, Logic, and Information, 19(2).
Bekki, D., & Asher, N. (2012). Logical polysemy and subtyping. In Y. Motomura, A. Butler, & D. Bekki (Eds.) JSAI-isAI Workshops, vol. 7856 of Lecture Notes in Computer Science, (pp. 17-24). Springer.
Blutner, R. (2002). Lexical Semantics and Pragmatics. Linguistische Berichte.
Bos, J. (2008). Wide-coverage semantic analysis with boxer. In J. Bos, & R. Delmonte (Eds.) Semantics in Text Processing. STEP 2008 Conference Proceedings, Research in Computational Semantics, (pp. 277-286). College Publications.
Cooper, R. (2007). Copredication, dynamic generalized quantification and lexical innovation by coercion. In Fourth International Workshop on Generative Approaches to the Lexicon.
Cuxac, C. (2000). La Langue des Signes Française. Les voies de l'iconicité. Ophrys.
Fodor, J. A., & Lepore, E. (1998). The emptiness of the lexicon: Reflections on James Pustejovsky's The Generative Lexicon. Linguistic Inquiry, 29(2).
Goddard, C., & Wierzbicka, A. (2007). Semantic primes and cultural scripts in language learning and intercultural communication. In F. Sharifian, & G. B. Palmer (Eds.) Applied Cultural Linguistics: Implications for second language learning and intercultural communication, (pp. 105-124). John Benjamins.
Kamp, H., & Reyle, U. (1993). From Discourse to Logic. Dordrecht: D. Reidel.
Lefeuvre, A., Moot, R., & Retoré, C. (2012a). Traitement automatique d'un corpus de récits de voyages pyrénéens : analyse syntaxique, sémantique et pragmatique dans le cadre de la théorie des types. In Congrès mondial de linguistique française.
Lefeuvre, A., Moot, R., Retoré, C., & Sandillon-Rezer, N.-F. (2012b). Traitement automatique sur corpus de récits de voyages pyrénéens : une analyse syntaxique, sémantique et temporelle. In Traitement Automatique du Langage Naturel, TALN'2012, vol. 2, (pp. 43-56). URL http://aclweb.org/anthology/F/F12/
Li XuPing (2011). On the semantics of classifiers in Chinese. Ph.D. thesis, Bar-Ilan University.
Mery, B. (2011). Modélisation de la Sémantique Lexicale dans le cadre de la Théorie des Types. Ph.D. thesis, Université de Bordeaux.
Mery, B., Bassac, C., & Retoré, C. (2007). A Montague-based model of generative lexical semantics. In R. Muskens (Ed.) New Directions in Type Theoretic Grammars. ESSLLI, Foundation of Logic, Language and Information.
Mery, B., & Retoré, C. (2013). Recent advances in the logical representation of lexical semantics. In NCLS - Workshop on Natural Language and Computer Science, LiCS 2013. Tulane University, New Orleans.
Montague, R. (1974). The proper treatment of quantification in ordinary English. In R. H. Thomson (Ed.) Formal Philosophy, (pp. 188-221). New Haven, Connecticut: Yale University Press.
Moot, R. (2010a). Semi-automated extraction of a wide-coverage type-logical grammar for French. In Proceedings of Traitement Automatique des Langues Naturelles (TALN). Montreal.
Moot, R. (2010b). Wide-coverage French syntax and semantics using Grail. In Proceedings of Traitement Automatique des Langues Naturelles (TALN). Montreal.
Moot, R., Prévot, L., & Retoré, C. (2011a). A discursive analysis of itineraries in an historical and regional corpus of travels. In Constraints in Discourse. Ayay-roches-rouges, France. URL http://hal.archives-ouvertes.fr/hal-00607691/en/
Moot, R., Prévot, L., & Retoré, C. (2011b). Un calcul de termes typés pour la pragmatique lexicale - chemins et voyageurs fictifs dans un corpus de récits de voyages. In Traitement Automatique du Langage Naturel, TALN 2011, (pp. 161-166). Montpellier, France. URL http://hal.archives-ouvertes.fr/hal-00607690/en/
Muskens, R. (1996a). Combining Montague Semantics and Discourse Representation. Linguistics and Philosophy, 19, 143-186.
Muskens, R. (1996b). Meaning and Partiality. In R. Cooper, & M. de Rijke (Eds.) Studies in Logic, Language and Information. CSLI.
Pustejovsky, J. (1995). The Generative Lexicon. MIT Press.
Pustejovsky, J. (1998). Generativity and Explanation in Semantics: a reply to Fodor and Lepore. Linguistic Inquiry, 29, 289-311.
Pustejovsky, J. (2001). Type construction and the logic of concepts. URL citeseer.ist.psu.edu/pustejovsky01type.html
Pustejovsky, J., & Asher, N. (2000). The Metaphysics of Words in Context. Objectual attitudes, Linguistics and Philosophy, 23, 141-183.
Talmy, L. (1999). Fictive motion in language and "ception". In P. Bloom, M. A. Peterson, L. Nadel, & M. F. Garrett (Eds.) Language and Space, (pp. 211-276). MIT Press.
T'sou, B. K. (2001). Language contact and linguistic innovation. In M. Lackner, I. Amelung, & J. Kurtz (Eds.) New Terms for New Ideas. Western Knowledge and Lexical Change in Late Imperial China, (pp. 35-56). Koninklijke Brill.
Wilks, Y. (2001). The "Fodor"-FODOR Fallacy bites back. In P. Bouillon, & F. Busa (Eds.) The Language of Word Meaning, Studies in Natural Language Processing. Cambridge University Press.
Xue, T., & Luo, Z. (2012). Dot-types and their implementation. In Béchet and Dikovsky (2012), pages 234-249.
Zettlemoyer, L. S., & Collins, M. (2009). Learning context-dependent mappings from sentences to logical form. In ACL-2009.
Zwitserlood, I. (2012). Classifiers. In R. Pfau, M. Steinbach, & B. Woll (Eds.) Sign Languages: an International Handbook, (pp. 158-186). Mouton de Gruyter.
| [] |
[
"On Sampling-Based Training Criteria for Neural Language Modeling",
"On Sampling-Based Training Criteria for Neural Language Modeling"
] | [
"Yingbo Gao \nComputer Science Department\nHuman Language Technology and Pattern Recognition Group\nRWTH Aachen University\n52074AachenGermany\n\nAppTek GmbH\n52062AachenGermany\n",
"David Thulke \nComputer Science Department\nHuman Language Technology and Pattern Recognition Group\nRWTH Aachen University\n52074AachenGermany\n\nAppTek GmbH\n52062AachenGermany\n",
"Alexander Gerstenberger alexander.gerstenberger|khoa.tran@rwth-aachen.de \nComputer Science Department\nHuman Language Technology and Pattern Recognition Group\nRWTH Aachen University\n52074AachenGermany\n",
"Khoa Viet Tran \nComputer Science Department\nHuman Language Technology and Pattern Recognition Group\nRWTH Aachen University\n52074AachenGermany\n",
"Ralf Schlüter \nComputer Science Department\nHuman Language Technology and Pattern Recognition Group\nRWTH Aachen University\n52074AachenGermany\n\nAppTek GmbH\n52062AachenGermany\n",
"Hermann Ney \nComputer Science Department\nHuman Language Technology and Pattern Recognition Group\nRWTH Aachen University\n52074AachenGermany\n\nAppTek GmbH\n52062AachenGermany\n"
] | [
"Computer Science Department\nHuman Language Technology and Pattern Recognition Group\nRWTH Aachen University\n52074AachenGermany",
"AppTek GmbH\n52062AachenGermany",
"Computer Science Department\nHuman Language Technology and Pattern Recognition Group\nRWTH Aachen University\n52074AachenGermany",
"AppTek GmbH\n52062AachenGermany",
"Computer Science Department\nHuman Language Technology and Pattern Recognition Group\nRWTH Aachen University\n52074AachenGermany",
"Computer Science Department\nHuman Language Technology and Pattern Recognition Group\nRWTH Aachen University\n52074AachenGermany",
"Computer Science Department\nHuman Language Technology and Pattern Recognition Group\nRWTH Aachen University\n52074AachenGermany",
"AppTek GmbH\n52062AachenGermany",
"Computer Science Department\nHuman Language Technology and Pattern Recognition Group\nRWTH Aachen University\n52074AachenGermany",
"AppTek GmbH\n52062AachenGermany"
] | [] | As the vocabulary size of modern word-based language models becomes ever larger, many sampling-based training criteria are proposed and investigated. The essence of these sampling methods is that the softmax-related traversal over the entire vocabulary can be simplified, giving speedups compared to the baseline. A problem we notice about the current landscape of such sampling methods is the lack of a systematic comparison and some myths about preferring one over another. In this work, we consider Monte Carlo sampling, importance sampling, a novel method we call compensated partial summation, and noise contrastive estimation. Linking back to the three traditional criteria, namely mean squared error, binary cross-entropy, and crossentropy, we derive the theoretical solutions to the training problems. Contrary to some common belief, we show that all these sampling methods can perform equally well, as long as we correct for the intended class posterior probabilities. Experimental results in language modeling and automatic speech recognition on Switchboard and LibriSpeech support our claim, with all sampling-based methods showing similar perplexities and word error rates while giving the expected speedups. | 10.21437/interspeech.2021-1067 | [
"https://arxiv.org/pdf/2104.10507v2.pdf"
] | 233,324,471 | 2104.10507 | 1672bd443cde01614d06aa700728ebab24f97961 |
On Sampling-Based Training Criteria for Neural Language Modeling
17 Jun 2021
Yingbo Gao
Computer Science Department
Human Language Technology and Pattern Recognition Group
RWTH Aachen University
52074AachenGermany
AppTek GmbH
52062AachenGermany
David Thulke
Computer Science Department
Human Language Technology and Pattern Recognition Group
RWTH Aachen University
52074AachenGermany
AppTek GmbH
52062AachenGermany
Alexander Gerstenberger alexander.gerstenberger|khoa.tran@rwth-aachen.de
Computer Science Department
Human Language Technology and Pattern Recognition Group
RWTH Aachen University
52074AachenGermany
Khoa Viet Tran
Computer Science Department
Human Language Technology and Pattern Recognition Group
RWTH Aachen University
52074AachenGermany
Ralf Schlüter
Computer Science Department
Human Language Technology and Pattern Recognition Group
RWTH Aachen University
52074AachenGermany
AppTek GmbH
52062AachenGermany
Hermann Ney
Computer Science Department
Human Language Technology and Pattern Recognition Group
RWTH Aachen University
52074AachenGermany
AppTek GmbH
52062AachenGermany
On Sampling-Based Training Criteria for Neural Language Modeling
17 Jun 2021
Index Terms: sampling, training criterion, NCE, LM, ASR
As the vocabulary size of modern word-based language models becomes ever larger, many sampling-based training criteria are proposed and investigated. The essence of these sampling methods is that the softmax-related traversal over the entire vocabulary can be simplified, giving speedups compared to the baseline. A problem we notice about the current landscape of such sampling methods is the lack of a systematic comparison and some myths about preferring one over another. In this work, we consider Monte Carlo sampling, importance sampling, a novel method we call compensated partial summation, and noise contrastive estimation. Linking back to the three traditional criteria, namely mean squared error, binary cross-entropy, and crossentropy, we derive the theoretical solutions to the training problems. Contrary to some common belief, we show that all these sampling methods can perform equally well, as long as we correct for the intended class posterior probabilities. Experimental results in language modeling and automatic speech recognition on Switchboard and LibriSpeech support our claim, with all sampling-based methods showing similar perplexities and word error rates while giving the expected speedups.
Introduction
Enjoying the benefit of large amounts of text-only training data, language models (LMs) remain an important part of the modern automatic speech recognition (ASR) pipeline [1,2,3]. However, the large quantity of available data is a double-edged sword, posing real challenges in training. For example, the popular BERT model [4] and the recent GPT-2 and GPT-3 models [5,6] have millions of parameters and are trained on billions of tokens. For BERT and GPT, both systems use byte pair encoding [7] to mitigate the problem of potentially very large vocabularies. However, for ASR, it is not uncommon for LMs to operate on the word level with a vocabulary size in the order of several hundred thousand [8,9].
In order to train the neural LMs more efficiently, many speedup methods are proposed. To name a few: hierarchical softmax is used in [10], which changes the flat classification over all words to a series of binary decisions to arrive at the correct word; negative sampling (NS) method in [11] sums over a few sampled words instead of the full vocabulary; the noise contrastive estimation (NCE) method is proposed in [12] and adapted later in [13] for efficient estimation of LMs. Note that this is in no way an exhaustive enumeration of the ideas and methods, because there exist other works that introduce interesting concepts to address the problem, e.g. the Monte Carlo (MC) sampling and importance sampling (IS) discussed in [14]. For an overview of approximations to softmax, we refer the readers to a comprehensive blog post by Sebastian Ruder [15].
Among these methods, we find an interesting appreciation for NCE. In [16], the authors discuss the self normalization properties of models trained with NCE. In another line of work that maximizes mutual information for representation learning [17,18,19,20], the NCE concept is frequently used. In a preprint note by Chris Dyer [21], a short conclusion is drawn "Thus, if your goal is language modeling, you should use NCE; if your goal is word representation learning, you should consider both NCE and negative sampling." As a result, if one overlooks the math and takes it for granted, methods like NS may not sound very attractive for language modeling.
In this paper, we start from three fundamental criteria, namely mean squared error (MSE), binary cross-entropy (BCE), and cross-entropy (CE). We explicitly write out the sampling-based versions of them. Then, we derive the theoretical optimums of the model, and show that although for some sampling-based criteria we may not directly obtain the original class posterior probabilities from the model outputs, because there is a one-to-one mapping between the optimum and the posterior, we can correct the model outputs and obtain the desired probabilities. By doing so, the model also has self-normalization properties and gives reasonable performances. Note that we are not here to argue that any of the cited work is fundamentally wrong, our goal is simply to raise awareness that models trained with sampling-based criteria other than NCE can also perform well, given enough care.
Our contribution can be summarized into two points:
• First, we examine various sampling methods systematically, linking back to the traditional criteria MSE, BCE and CE, and derive optimums for them.
• Second, we show from both a theoretical and a practical perspective that all these sampling methods under consideration work, giving similar perplexities (PPLs), word error rates (WERs) and speedups.
Related Work
Neural LMs are commonly used in second-pass rescoring [1,2,22,23] or first-pass decoding [3] in ASR systems. While for conventional research-oriented datasets like Switchboard the word-level vocabulary size is several tens of thousands, for larger systems, especially commercially available systems, the vocabulary size can often go up to several hundred thousand. Numerous methods to speed up the training (and potentially testing) of word-level LMs have been proposed in the past decades [10,11,12,13,14,15]. These methods either exploit the statistical structure or perform sampling in the large target vocabulary. Among the sampling methods, NCE enjoys special appreciation [17,18,19,20,21], and its self-normalization property is also examined [16]. For self-normalization and variance regularization, it is shown that explicit additional losses can also be added to the cross-entropy training criterion [24]. Traditionally, MSE, BCE, and CE are three training criteria that give the correct class posterior probabilities [25,26]. In this paper, we make a connection between sampling-based criteria and these traditional ones and show that, as long as one corrects for the intended class posterior probabilities, these sampling methods all show similar PPLs and WERs, while giving the expected speedups.
Sampling-Based Training Criteria
In this section, we formally define the training criteria. To clarify the notation, in the following sections, n is a running index over the N word positions, and c, c′, and c̃ are running indices in the target vocabulary C, which is supposedly very large. The context or history for next-word prediction is denoted with x, and the Kronecker delta δ is used to decide the identity of model prediction and ground truth. We use θ for model parameters, q for model outputs, p for class posterior probabilities, and q̂ to represent the derived optimum. When model outputs are explicitly normalized, q(c|x) is used; otherwise, q(x, c) is used. F represents the training criterion to maximize. Lastly, the distribution from which we sample is denoted with D, and k is a running index over the K samples. Due to limited pages, we only show the criterion definition and the optimum, and comment on the important steps of the derivations in this paper.
Traditional Criteria
Mean Squared Error (MSE)
MSE is a classic training criterion commonly used for regression problems. Intuitively, it corresponds to "counting the errors", but in the continuous sense.
F_{\mathrm{MSE}}(\theta) := -\frac{1}{N} \sum_{n=1}^{N} \sum_{c=1}^{C} \big( q_\theta(x_n, c) - \delta(c_n, c) \big)^2 \quad (1)

\Longrightarrow \hat{q}_\theta(x, c) = p(c|x) \quad (2)
To obtain the optimum, one could rewrite the summation over N into a summation over x, expand the squared term, and single out the terms related to q_θ. By definition, q can be unbounded, but it is common to parametrize q to be positive. In our preliminary experiments, further constraining q to be between zero and one with a sigmoid slightly boosts the performance. However, although we spend a great amount of effort to tune the MSE-based models 1 , the PPLs are still much worse than those of BCE and CE; therefore we only describe the MSE criterion here for the sake of completeness and do not mention it later.
Binary Cross Entropy (BCE)
BCE is another traditional training criterion. The motivation of BCE can be summarized as to "encourage the correct predictions and discourage the wrong predictions".
F_{\mathrm{BCE}}(\theta) := \frac{1}{N} \sum_{n=1}^{N} \Big[ \log q_\theta(x_n, c_n) + \sum_{c' \neq c_n}^{C} \log \big( 1 - q_\theta(x_n, c') \big) \Big] \quad (3)

\Longrightarrow \hat{q}_\theta(x, c) = p(c|x) \quad (4)
Here, divergence inequality can be used for derivation. Note that q is required to be bounded in (0, 1) and it is commonly done via a sigmoid operation.
Cross Entropy (CE)
CE is arguably the most commonly used criterion nowadays and finds its roots in information theory and probabilistic theory. Intuitively, CE "encourages the model probability on the target word to be more exactly correct".
F_{\mathrm{CE}}(\theta) := \frac{1}{N} \sum_{n=1}^{N} \log \frac{\exp q_\theta(x_n, c_n)}{\sum_{c'=1}^{C} \exp q_\theta(x_n, c')} \quad (5)

\Longrightarrow \frac{\exp \hat{q}_\theta(x, c)}{\sum_{c'=1}^{C} \exp \hat{q}_\theta(x, c')} = p(c|x) \quad (6)
Again, applying divergence inequality here would give us the optimum. We explicitly write out the softmax operation in this case to highlight the summation in the big vocabulary C in the denominator. In this case, q is unbounded and the softmax guarantees the normalized property.
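For contrast with the sampling-based criteria introduced next, the following minimal sketch (our own illustration, not the paper's RETURNN setup; the sizes are arbitrary) shows the full-softmax CE loss whose denominator requires exactly the summation over the entire vocabulary that the sampling methods avoid.

```python
import torch
import torch.nn.functional as F

vocab_size, batch = 200_000, 32
logits = torch.randn(batch, vocab_size)           # q_theta(x_n, c) for every word c in the vocabulary
targets = torch.randint(0, vocab_size, (batch,))  # ground-truth words c_n
loss = F.cross_entropy(logits, targets)           # the log-softmax normalizes over all 200k classes
```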
Sampling-Based Criteria
According to the previous discussion about MSE, BCE, and CE, we see a summation over C in all three cases. This summation can be viewed as an expectation of some quantity Q_c under the uniform distribution 1/C: \sum_{c}^{C} Q_c = C \sum_{c}^{C} \frac{1}{C} Q_c = C \, \mathbb{E}[Q_c]. Approximating this expectation is the core concept of the sampling methods.
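As a small numerical illustration of this idea (our own sketch, not part of the paper; vocabulary size, sample size and noise distribution are made up), the full-vocabulary sum can be estimated by Monte Carlo sampling under the uniform distribution or by importance sampling under a noise distribution D:

```python
import numpy as np

rng = np.random.default_rng(0)
C, K = 200_000, 8192
Q = rng.random(C)                       # some per-class quantity Q_c
D = np.ones(C) / C                      # noise distribution (uniform here only for simplicity)

full_sum = Q.sum()                      # exact sum over the whole vocabulary

# Monte Carlo under the uniform distribution: sum_c Q_c = C * E_uniform[Q_c]
mc_estimate = C * Q[rng.integers(0, C, size=K)].mean()

# Importance sampling under D: sum_c Q_c = E_D[Q_c / D(c)]
idx = rng.choice(C, size=K, p=D)
is_estimate = (Q[idx] / D[idx]).mean()

print(full_sum, mc_estimate, is_estimate)   # both estimates approach the exact sum as K grows
```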
Monte Carlo Sampling (MCS)
MCS approximates an expectation by a sample mean. Specifically, instead of summing over C, we sum over K random samples, where \tilde{c}_k \sim D. NS in [11] is a prominent example of MCS.
F_{\mathrm{BCE\text{-}MCS}}(\theta) := \frac{1}{N} \sum_{n=1}^{N} \Big[ \log q_\theta(x_n, c_n) + \sum_{k=1}^{K} \log \big( 1 - q_\theta(x_n, \tilde{c}_k) \big) \Big] \quad (7)

\Longrightarrow \hat{q}_\theta(x, c) \approx \Big( 1 + \frac{K D(c)}{p(c|x)} \Big)^{-1} \quad (8)

F_{\mathrm{CE\text{-}MCS}}(\theta) := \frac{1}{N} \sum_{n=1}^{N} \Big[ q_\theta(x_n, c_n) - \log \sum_{k=1}^{K} \exp q_\theta(x_n, \tilde{c}_k) \Big] \quad (9)

\Longrightarrow \frac{D(c) \exp \hat{q}_\theta(x, c)}{\sum_{c'=1}^{C} D(c') \exp \hat{q}_\theta(x, c')} \approx p(c|x) \quad (10)
Importance Sampling (IS)
IS rewrites the expectation by introducing a distribution other than the uniform distribution. In our case, this new distribution is the noise distribution D.
F_{\mathrm{BCE\text{-}IS}}(\theta) := \frac{1}{N} \sum_{n=1}^{N} \Big[ \log q_\theta(x_n, c_n) + \sum_{k=1}^{K} \frac{\log \big( 1 - q_\theta(x_n, \tilde{c}_k) \big)}{K D(\tilde{c}_k)} \Big] \quad (11)

\Longrightarrow \hat{q}_\theta(x, c) \approx \Big( 1 + \frac{1}{p(c|x)} \Big)^{-1} \quad (12)

F_{\mathrm{CE\text{-}IS}}(\theta) := \frac{1}{N} \sum_{n=1}^{N} \Big[ q_\theta(x_n, c_n) - \log \sum_{k=1}^{K} \frac{\exp q_\theta(x_n, \tilde{c}_k)}{K D(\tilde{c}_k)} \Big] \quad (13)

\Longrightarrow \frac{\exp \hat{q}_\theta(x, c)}{\sum_{c'=1}^{C} \exp \hat{q}_\theta(x, c')} \approx p(c|x) \quad (14)
Compensated Partial Summation (CPS)
CPS comes from a very simple motivation. If we replace the summation of C terms with a sub-summation of K terms, maybe we should compensate the partial summation by C/K. Essentially, this is still a Monte Carlo method with some correction term α = C/K.
F_{\mathrm{BCE\text{-}CPS}}(\theta) := \frac{1}{N} \sum_{n=1}^{N} \Big[ \log q_\theta(x_n, c_n) + \alpha \sum_{k=1}^{K} \log \big( 1 - q_\theta(x_n, \tilde{c}_k) \big) \Big] \quad (15)

\Longrightarrow \hat{q}_\theta(x, c) \approx \Big( 1 + \frac{\alpha K D(c)}{p(c|x)} \Big)^{-1} \quad (16)

F_{\mathrm{CE\text{-}CPS}}(\theta) := \frac{1}{N} \sum_{n=1}^{N} \Big[ q_\theta(x_n, c_n) - \log \Big( \alpha \sum_{k=1}^{K} \exp q_\theta(x_n, \tilde{c}_k) \Big) \Big] \quad (17)

\Longrightarrow \frac{D(c) \exp \hat{q}_\theta(x, c)}{\sum_{c'=1}^{C} D(c') \exp \hat{q}_\theta(x, c')} \approx p(c|x) \quad (18)
Noise Contrastive Estimation (NCE)
NCE changes the task of next-word prediction to a binary classification task of telling true samples from noisy ones. Originally, NCE is proposed in the context of BCE. Due to the property that the class posterior probabilities can be obtained directly from the model output, NCE is one of the most popular methods seen in literature [16,17,18,19,20,21,24].
F_{\mathrm{BCE\text{-}NCE}}(\theta) := \frac{1}{N} \sum_{n=1}^{N} \Big[ \log \frac{q_\theta(x_n, c_n)}{q_\theta(x_n, c_n) + K D(c_n)} + \sum_{k=1}^{K} \log \Big( 1 - \frac{q_\theta(x_n, \tilde{c}_k)}{q_\theta(x_n, \tilde{c}_k) + K D(\tilde{c}_k)} \Big) \Big] \quad (19)

\Longrightarrow \hat{q}_\theta(x, c) \approx p(c|x) \quad (20)

F_{\mathrm{CE\text{-}NCE}}(\theta) := \frac{1}{N} \sum_{n=1}^{N} \Big[ \frac{q_\theta(x_n, c_n)}{q_\theta(x_n, c_n) + K D(c_n)} - \log \sum_{k=1}^{K} \exp \Big( \frac{q_\theta(x_n, \tilde{c}_k)}{q_\theta(x_n, \tilde{c}_k) + K D(\tilde{c}_k)} \Big) \Big] \quad (21)

\Longrightarrow \frac{D(c) \exp \Big( \frac{\hat{q}_\theta(x, c)}{\hat{q}_\theta(x, c) + K D(c)} \Big)}{\sum_{c'} D(c') \exp \Big( \frac{\hat{q}_\theta(x, c')}{\hat{q}_\theta(x, c') + K D(c')} \Big)} \approx p(c|x) \quad (22)
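To make the BCE-style sampled criteria concrete, the following minimal PyTorch sketch (our own illustration, not the paper's implementation) computes the BCE-MCS and BCE-NCE losses from the score of the target word and the scores of K noise words; for NCE it uses the identity q/(q + K D(c)) = sigmoid(score − log(K D(c))) with q = exp(score).

```python
import torch
import torch.nn.functional as F

def bce_mcs_loss(target_score, noise_scores):
    """F_BCE-MCS with q = sigmoid(score); target_score: [B], noise_scores: [B, K]."""
    pos = F.logsigmoid(target_score)               # log q_theta(x_n, c_n)
    neg = F.logsigmoid(-noise_scores).sum(dim=-1)  # sum_k log(1 - q_theta(x_n, c~_k))
    return -(pos + neg).mean()                     # negate: the criterion is maximized

def bce_nce_loss(target_score, noise_scores, log_kd_target, log_kd_noise):
    """F_BCE-NCE with q = exp(score); log_kd_* = log K + log D(c) for target / noise words."""
    pos = F.logsigmoid(target_score - log_kd_target)
    neg = F.logsigmoid(-(noise_scores - log_kd_noise)).sum(dim=-1)
    return -(pos + neg).mean()
```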
Notes on Derivation and Implications
The term KD(c) often shows up in our derivation due to rewriting the summation in K into a summation in C:
\sum_{n}^{N} \sum_{k}^{K} Q_{n,k} \approx \sum_{n}^{N} \sum_{c}^{C} K D(c) \, Q_{n,c}.
Approximately, the number of terms that show up in the two summations should be equal for a given n, when K is large enough. This trick is used in many of the derivations. Note that the model outputs q have different constraints under different criteria, so the logits we plug into the criteria are activated accordingly, e.g. by applying a sigmoid if q is bounded between zero and one.
From the derived optimums, if the model output q(x, c) is not strictly the intended class posterior probability p(c|x) (unlike in the case of three traditional criteria and BCE-NCE), we confirm the statement in [21] that such models are not directly applicable for language modeling. However, for all cases, it is clear that there is a one-to-one mapping between q and p. Therefore, given each model output q(x, c), we can calculate the desired p(c|x) (optionally querying the noise distribution D), and use this quantity for rescoring in ASR.
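As an illustration of this correction step, the sketch below (our own, assuming q, K and D(c) are available at rescoring time) implements the one-to-one q → p mappings implied by the optima above; the CE-style output is returned unnormalized, which suffices for rescoring without explicit normalization.

```python
import math

def corrected_log_posterior(criterion, q, K=None, D_c=None):
    """Return (unnormalized) log p(c|x) from a model output q for word c; D_c = D(c)."""
    if criterion in ("BCE", "BCE-NCE"):   # model output is already the class posterior
        return math.log(q)
    if criterion == "BCE-IS":             # q = (1 + 1/p)^-1       =>  p = q / (1 - q)
        return math.log(q) - math.log(1.0 - q)
    if criterion == "BCE-MCS":            # q = (1 + K D(c)/p)^-1  =>  p = K D(c) q / (1 - q)
        return math.log(K) + math.log(D_c) + math.log(q) - math.log(1.0 - q)
    if criterion == "CE-MCS":             # p(c|x) proportional to D(c) exp(q); unnormalized here
        return math.log(D_c) + q
    raise ValueError(f"no correction rule implemented for {criterion}")
```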
Experiments
Experimental Setup
For the experimental validation, we train word-based LMs on LibriSpeech and SwitchBoard and evaluate the resulting models in well-tuned ASR systems with second-pass lattice rescoring. The vocabulary contains about 200k words for LibriSpeech and about 30k words for SwitchBoard, respectively.
For all training criteria, we make use of the same model architecture 2 . For LibriSpeech, our LMs make use of the Transformer [28] architecture, which is motivated by recent state-of-the-art results outperforming LSTM-based LMs on this task by a large margin [9,29]. We use 42 layers, 512 input embedding dimension, 2048 feed-forward dimension, 8 attention heads, and 512 residual and key/query/value dimension. We do not use a positional encoding, following the architecture presented in [29]. For SwitchBoard, we use LSTM LMs with 2 layers and 2048 hidden units, and 128 embedding dimension.
For sampling-based approaches, we sample 8192 samples from log-uniform noise distributions, which performs well in our preliminary experiments. The noise samples are shared over the whole batch for computational efficiency in all cases [30,31]. The parameters are learned using stochastic gradient descent with a learning rate of 1. The global gradient norms are clipped to 1. We implement our models using the open-source toolkit RETURNN 3 [32], based on TensorFlow [33].
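As an illustration of this setup (a sketch with made-up shapes, assuming TensorFlow 2.x; this is not the actual RETURNN configuration), one shared set of noise samples can be drawn from a log-uniform distribution as follows:

```python
import tensorflow as tf

K, vocab_size = 8192, 200_000
true_classes = tf.constant([[17], [42], [23]], dtype=tf.int64)  # target word ids, one per position

sampled, true_exp, sampled_exp = tf.random.log_uniform_candidate_sampler(
    true_classes=true_classes,  # also reports expected counts for the true classes
    num_true=1,
    num_sampled=K,              # one shared set of K noise words for the whole batch
    unique=True,
    range_max=vocab_size,
)
# `sampled` has shape [K]; `sampled_exp` approximates K * D(c) for each sampled word,
# which is the quantity appearing in the sampled criteria above.
```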
The acoustic models are based on hybrid hidden Markov models. For detailed information on training setups, including discriminative training approaches, adaptation, and model architectures we refer the reader to [34] for the SwitchBoard and [9] for the LibriSpeech systems. For LibriSpeech the lattices are generated using a well-tuned LSTM LM in the first pass [3]. For SwitchBoard, the lattices are generated using a 4-gram Kneser-Ney LM [34]. We interpolate the LSTM LMs with count-based LMs for evaluation. We report on clean and other test sets for LibriSpeech and on SwitchBoard (SWB) and CallHome (CH) Hub5'00 test sets for SwitchBoard. During LM training we use a separate validation dataset. PPLs and WERs are always obtained with proper normalization over the full vocabulary unless noted otherwise. The reported average training time per batch is obtained on NVIDIA GeForce GTX 1080 Ti GPUs, with a batch size of 64 for LibriSpeech and 32 for Switchboard respectively.
Main Results
Below we present the main results. Table 1 gives the PPLs and WERs of our baseline models [3,34]. In our opinion, these are competitive systems and serve as reasonable baselines for our purpose of judging different sampling methods.

In Table 2, we present the PPLs and WERs of the sampling-based models on the LibriSpeech dataset. Since we sample 8k samples out of a 200k vocabulary, for all sampling methods under consideration, there is a significant relative training time speedup of over 40%, which is expected. Because of limited computational resources and our purpose not being to obtain the best trade-off between speed and quality, we do not sweep over different sampling sizes. Note that we do not include CE-NCE results due to some numerical problems. In terms of PPLs and WERs, we see a consistent improvement over the baseline, and we attribute this to using the Transformer and not the LSTM architecture [9,29]. Comparing different sampling methods, we see a small variation in PPLs and even less in WERs. This justifies our statement that all these sampling methods work equally well when the model outputs are corrected accordingly.

In Table 3, similar results are presented for the SwitchBoard dataset. In this case, all sampling-based methods give a consistent relative training time speedup of over 20%. Considering that here 8k samples are drawn out of a 30k vocabulary, instead of 8k being drawn from a 200k vocabulary as for LibriSpeech, the smaller relative speedup is expected. PPL-wise, significant improvements over the count-based baseline LM are achieved, which is also confirmed numerous times in different works on different datasets. The CE baseline is slightly better than the sampling-based LMs because our training recipe is well-tuned for the CE baseline. Comparing different sampling-based criteria, NCE and IS are slightly better, but the advantage is not large enough to justify one method being strictly superior to another. In terms of WERs, all sampling methods perform similarly.

While the PPLs and WERs reported above are properly normalized and give the expected training time speedups, what makes these sampling methods especially attractive is the use case without explicit normalization. To this end, we look at the BCE sampling variants, directly rescore with the model outputs of these LMs, and report the WERs on Switchboard. As seen in Table 4, BCE-IS performs on par with BCE-NCE, as well as the CE baseline shown in Table 3, which shows its competitiveness and self-normalization properties. We notice that when plugging in the log-uniform distribution for D, the BCE-MCS and BCE-CPS performance without normalization degrades to around 11.8% WER, which is similar to the count-based model. We attribute this to having to query the unreliable noise distribution D during search (which is not the case for BCE-IS and BCE-NCE) and therefore use a smoothed empirical unigram distribution for D instead.
Conclusion
For language modeling with large vocabularies, we consider different sampling-based training criteria. We start from three traditional criteria and formulate sampling-based versions of them. We derive optimums and argue that when model outputs are corrected for the intended class posteriors, these methods perform equally well compared to the popular noise contrastive estimation. Experimental evidence of perplexity and word error rate results on LibriSpeech and SwitchBoard support our claim. For direct rescoring without explicit normalization, we show the self-normalization properties of such sampling-based models.
Acknowledgements
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 694537, project "SEQCLAS"). The work reflects only the authors' views and the European Research Council Executive Agency (ERCEA) is not responsible for any use that may be made of the information it contains. Simulations were performed with computing resources granted by RWTH Aachen University under project rwth0582. We thank Christoph Lüscher for providing the LibriSpeech acoustic model and Eugen Beck for the lattices, and Markus Kitza for the SwitchBoard lattices.
Table 1: Baseline PPLs and WERs.

dataset       model    PPL    WER
LibriSpeech   LSTM     64.3   clean 2.6 / other 5.8
SwitchBoard   4-gram   74.6   ALL 11.8 / SWB 8.1 / CH 15.4
Table 2: Sampling-Based Transformer LMs on LibriSpeech.

criterion   sampling   train time (ms/batch)   PPL    WER clean   WER other
BCE         -          0.358                   58.5   2.5         5.4
BCE         MCS        0.213                   58.0   2.6         5.4
BCE         IS         0.206                   58.4   2.6         5.5
BCE         CPS        0.205                   58.4   2.5         5.4
BCE         NCE        0.206                   57.9   2.5         5.4
CE          -          0.302                   57.7   2.5         5.4
CE          MCS        0.206                   57.9   2.5         5.4
CE          IS         0.201                   58.7   2.5         5.4
CE          CPS        0.203                   62.2   2.5         5.4
Table 3: Sampling-Based LSTM LMs on Switchboard.

criterion   sampling   train time (ms/batch)   PPL    WER ALL   WER SWB   WER CH
BCE         -          0.107                   52.3   10.3      6.9       13.7
BCE         MCS        0.077                   52.6   10.3      7.0       13.6
BCE         IS         0.079                   51.5   10.3      7.0       13.7
BCE         CPS        0.079                   52.4   10.3      7.1       13.6
BCE         NCE        0.078                   51.4   10.3      7.0       13.6
CE          -          0.100                   49.9   10.1      6.8       13.4
CE          MCS        0.078                   52.4   10.4      7.0       13.7
CE          IS         0.076                   52.3   10.2      6.9       13.6
CE          CPS        0.077                   52.3   10.3      7.0       13.6
Table 4: WERs without Explicit Normalization on SwitchBoard.

WER   BCE-MCS   BCE-IS   BCE-CPS   BCE-NCE
ALL   11.0      10.2     11.3      10.2
SWB   7.3       6.9      7.5       6.9
CH    14.7      13.6     15.1      13.6
For example, initializing the model with parameters from a converged NCE-trained model, changing optimizers and grid-searching over the learning rates. We also find that down-scaling the penalization MSE has on the rival classes improves the PPL a little, but similar experiments with BCE or CE do not give any improvements. Although the authors of [27] claim that MSE should be preferred, we argue that for a large number of target classes, it is hard for MSE to converge to the optimum with our current training recipes. Our empirical experience also agrees with that in [25].
Ideally, one should tune the hyperparameters for each training criterion individually. Given the computational and time constraints, we believe that this is still a reasonable approach.
3 Example config is available at https://git.io/Jnq3Q.
Efficient lattice rescoring using recurrent neural network language models. X Liu, Y Wang, X Chen, M J Gales, P C Woodland, 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEEX. Liu, Y. Wang, X. Chen, M. J. Gales, and P. C. Woodland, "Ef- ficient lattice rescoring using recurrent neural network language models," in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014, pp. 4908- 4912.
Lstm, gru, highway and a bit of attention: an empirical overview for language modeling in speech recognition. K Irie, Z Tüske, T Alkhouli, R Schlüter, H Ney, InterspeechK. Irie, Z. Tüske, T. Alkhouli, R. Schlüter, H. Ney et al., "Lstm, gru, highway and a bit of attention: an empirical overview for language modeling in speech recognition," in Interspeech, 2016, pp. 3519-3523.
Lstm language models for lvcsr in first-pass decoding and lattice-rescoring. E Beck, W Zhou, R Schlüter, H Ney, arXiv:1907.01030arXiv preprintE. Beck, W. Zhou, R. Schlüter, and H. Ney, "Lstm language mod- els for lvcsr in first-pass decoding and lattice-rescoring," arXiv preprint arXiv:1907.01030, 2019.
BERT: Pre-training of deep bidirectional transformers for language understanding. J Devlin, M.-W Chang, K Lee, K Toutanova, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaAssociation for Computational Linguistics1J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Minneapolis, Minnesota: Association for Computational Linguistics, Jun. 2019, pp. 4171-4186. [Online].
Language models are unsupervised multitask learners. A Radford, J Wu, R Child, D Luan, D Amodei, I Sutskever, OpenAI blog. 189A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, "Language models are unsupervised multitask learn- ers," OpenAI blog, vol. 1, no. 8, p. 9, 2019.
Language models are few-shot learners. T B Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, P Shyam, G Sastry, A , arXiv:2005.14165arXiv preprintT. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," arXiv preprint arXiv:2005.14165, 2020.
Neural machine translation of rare words with subword units. R Sennrich, B Haddow, A Birch, arXiv:1508.07909arXiv preprintR. Sennrich, B. Haddow, and A. Birch, "Neural machine translation of rare words with subword units," arXiv preprint arXiv:1508.07909, 2015.
Transformerbased acoustic modeling for hybrid speech recognition. Y Wang, A Mohamed, D Le, C Liu, A Xiao, J Mahadeokar, H Huang, A Tjandra, X Zhang, F Zhang, ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEEICASSPY. Wang, A. Mohamed, D. Le, C. Liu, A. Xiao, J. Mahadeokar, H. Huang, A. Tjandra, X. Zhang, F. Zhang et al., "Transformer- based acoustic modeling for hybrid speech recognition," in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 6874- 6878.
Rwth asr systems for librispeech: Hybrid vs attention-w/o data augmentation. C Lüscher, E Beck, K Irie, M Kitza, W Michel, A Zeyer, R Schlüter, H Ney, arXiv:1905.03072arXiv preprintC. Lüscher, E. Beck, K. Irie, M. Kitza, W. Michel, A. Zeyer, R. Schlüter, and H. Ney, "Rwth asr systems for librispeech: Hybrid vs attention-w/o data augmentation," arXiv preprint arXiv:1905.03072, 2019.
Efficient estimation of word representations in vector space. T Mikolov, K Chen, G Corrado, J Dean, arXiv:1301.3781arXiv preprintT. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient esti- mation of word representations in vector space," arXiv preprint arXiv:1301.3781, 2013.
Distributed representations of words and phrases and their compositionality. T Mikolov, I Sutskever, K Chen, G S Corrado, J Dean, Advances in Neural Information Processing Systems. C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. WeinbergerCurran Associates, Inc26T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, "Distributed representations of words and phrases and their compositionality," in Advances in Neural Information Processing Systems, C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, Eds., vol. 26. Curran Associates, Inc., 2013. [Online]. Available: https://proceedings.neurips.cc/paper/ 2013/file/9aa42b31882ec039965f3c4923ce901b-Paper.pdf
Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. M Gutmann, A Hyvärinen, Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. JMLR Workshop and Conference Proceedings. the Thirteenth International Conference on Artificial Intelligence and Statistics. JMLR Workshop and Conference ProceedingsM. Gutmann and A. Hyvärinen, "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models," in Proceedings of the Thirteenth International Conference on Artifi- cial Intelligence and Statistics. JMLR Workshop and Conference Proceedings, 2010, pp. 297-304.
A fast and simple algorithm for training neural probabilistic language models. A Mnih, Y W Teh, arXiv:1206.6426arXiv preprintA. Mnih and Y. W. Teh, "A fast and simple algorithm for training neural probabilistic language models," arXiv preprint arXiv:1206.6426, 2012.
Quick training of probabilistic neural nets by importance sampling. Y Bengio, J.-S Senécal, AISTATS. Y. Bengio, J.-S. Senécal et al., "Quick training of probabilistic neural nets by importance sampling." in AISTATS, 2003, pp. 1-9.
On word embeddings -Part 2: Approximating the Softmax. S Ruder, S. Ruder, "On word embeddings -Part 2: Approximating the Soft- max," http://ruder.io/word-embeddings-softmax, 2016.
Self-normalization properties of language modeling. J Goldberger, O Melamud, arXiv:1806.00913arXiv preprintJ. Goldberger and O. Melamud, "Self-normalization properties of language modeling," arXiv preprint arXiv:1806.00913, 2018.
Representation learning with contrastive predictive coding. A V Oord, Y Li, O Vinyals, arXiv:1807.03748arXiv preprintA. v. d. Oord, Y. Li, and O. Vinyals, "Representation learning with contrastive predictive coding," arXiv preprint arXiv:1807.03748, 2018.
On variational bounds of mutual information. B Poole, S Ozair, A Van Den, A Oord, G Alemi, Tucker, International Conference on Machine Learning. PMLRB. Poole, S. Ozair, A. Van Den Oord, A. Alemi, and G. Tucker, "On variational bounds of mutual information," in International Conference on Machine Learning. PMLR, 2019, pp. 5171-5180.
On mutual information maximization for representation learning. M Tschannen, J Djolonga, P K Rubenstein, S Gelly, M Lucic, arXiv:1907.13625arXiv preprintM. Tschannen, J. Djolonga, P. K. Rubenstein, S. Gelly, and M. Lucic, "On mutual information maximization for representa- tion learning," arXiv preprint arXiv:1907.13625, 2019.
A mutual information maximization perspective of language representation learning. L Kong, C D . M. D'autume, W Ling, L Yu, Z Dai, D Yogatama, arXiv:1910.08350arXiv preprintL. Kong, C. d. M. d'Autume, W. Ling, L. Yu, Z. Dai, and D. Yo- gatama, "A mutual information maximization perspective of lan- guage representation learning," arXiv preprint arXiv:1910.08350, 2019.
Notes on noise contrastive estimation and negative sampling. C Dyer, arXiv:1410.8251arXiv preprintC. Dyer, "Notes on noise contrastive estimation and negative sam- pling," arXiv preprint arXiv:1410.8251, 2014.
Lattice rescoring strategies for long short term memory language models in speech recognition. S Kumar, M Nirschl, D Holtmann-Rice, H Liao, A T Suresh, F Yu, 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). S. Kumar, M. Nirschl, D. Holtmann-Rice, H. Liao, A. T. Suresh, and F. Yu, "Lattice rescoring strategies for long short term memory language models in speech recognition," in 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2017, pp. 165-172.
A parallelizable lattice rescoring strategy with neural language models. K Li, D Povey, S Khudanpur, arXiv:2103.05081arXiv preprintK. Li, D. Povey, and S. Khudanpur, "A parallelizable lattice rescoring strategy with neural language models," arXiv preprint arXiv:2103.05081, 2021.
Efficient training and evaluation of recurrent neural network language models for automatic speech recognition. X Chen, X Liu, Y Wang, M J Gales, P C Woodland, IEEE/ACM Transactions on Audio, Speech, and Language Processing. 2411X. Chen, X. Liu, Y. Wang, M. J. Gales, and P. C. Woodland, "Effi- cient training and evaluation of recurrent neural network language models for automatic speech recognition," IEEE/ACM Transac- tions on Audio, Speech, and Language Processing, vol. 24, no. 11, pp. 2146-2157, 2016.
Cross-entropy vs. squared error training: a theoretical and experimental comparison. P Golik, P Doetsch, H Ney, Interspeech. 13P. Golik, P. Doetsch, and H. Ney, "Cross-entropy vs. squared er- ror training: a theoretical and experimental comparison." in Inter- speech, vol. 13, 2013, pp. 1756-1760.
Data-driven deep modeling and training for automatic speech recognition. P Golik, Aachen, GermanyRWTH Aachen University, Computer Science Department, RWTH Aachen UniversityPh.D. dissertationP. Golik, "Data-driven deep modeling and training for automatic speech recognition," Ph.D. dissertation, RWTH Aachen Univer- sity, Computer Science Department, RWTH Aachen University, Aachen, Germany, Aug. 2020.
Evaluation of neural architectures trained with square loss vs cross-entropy in classification tasks. L Hui, M Belkin, arXiv:2006.07322arXiv preprintL. Hui and M. Belkin, "Evaluation of neural architectures trained with square loss vs cross-entropy in classification tasks," arXiv preprint arXiv:2006.07322, 2020.
Attention is all you need. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, L U Kaiser, I Polosukhin, Advances in Neural Information Processing Systems. I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. GarnettCurran Associates, Inc30A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u. Kaiser, and I. Polosukhin, "Atten- tion is all you need," in Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds., vol. 30. Curran Associates, Inc., 2017. [Online].
Language modeling with deep transformers. K Irie, A Zeyer, R Schlüter, H Ney, Interspeech, Graz, Austria. K. Irie, A. Zeyer, R. Schlüter, and H. Ney, "Language modeling with deep transformers," in Interspeech, Graz, Austria, Sep. 2019, pp. 3905-3909, ISCA Best Student Paper Award.
Simple, fast noisecontrastive estimation for large rnn vocabularies. B Zoph, A Vaswani, J May, K Knight, Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesB. Zoph, A. Vaswani, J. May, and K. Knight, "Simple, fast noise- contrastive estimation for large rnn vocabularies," in Proceedings of the 2016 Conference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Tech- nologies, 2016, pp. 1217-1222.
Exploring the limits of language modeling. R Jozefowicz, O Vinyals, M Schuster, N Shazeer, Y Wu, arXiv:1602.02410arXiv preprintR. Jozefowicz, O. Vinyals, M. Schuster, N. Shazeer, and Y. Wu, "Exploring the limits of language modeling," arXiv preprint arXiv:1602.02410, 2016.
RETURNN as a generic flexible neural toolkit with application to translation and speech recognition. A Zeyer, T Alkhouli, H Ney, Proceedings of ACL 2018, System Demonstrations. ACL 2018, System DemonstrationsMelbourne, AustraliaAssociation for Computational LinguisticsA. Zeyer, T. Alkhouli, and H. Ney, "RETURNN as a generic flexible neural toolkit with application to translation and speech recognition," in Proceedings of ACL 2018, System Demonstrations. Melbourne, Australia: Association for Compu- tational Linguistics, Jul. 2018, pp. 128-133. [Online]. Available: https://www.aclweb.org/anthology/P18-4022
Tensorflow: A system for large-scale machine learning. M Abadi, P Barham, J Chen, Z Chen, A Davis, J Dean, M D , Proc. USENIX Sympo. on Operating Systems Design and Impl. (OSDI 16). USENIX Sympo. on Operating Systems Design and Impl. (OSDI 16)Savannah, GA, USAM. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, and M. D. et al., "Tensorflow: A system for large-scale machine learn- ing," in Proc. USENIX Sympo. on Operating Systems Design and Impl. (OSDI 16), Savannah, GA, USA, Nov. 2016, pp. 265-283.
Cumulative adaptation for blstm acoustic models. M Kitza, P Golik, R Schlüter, H Ney, Interspeech. M. Kitza, P. Golik, R. Schlüter, and H. Ney, "Cumulative adap- tation for blstm acoustic models," in Interspeech, Graz, Austria, Sep. 2019, pp. 754-758.
| [] |
[
"Template-based Approach to Zero-shot Intent Recognition",
"Template-based Approach to Zero-shot Intent Recognition",
"Template-based Approach to Zero-shot Intent Recognition",
"Template-based Approach to Zero-shot Intent Recognition"
] | [
"Dmitry Lamanov \nHuawei Noah's Ark lab\n\n",
"Pavel Burnyshev \nHuawei Noah's Ark lab\n\n",
"Ekaterina Artemova \nHuawei Noah's Ark lab\n\n\nHSE University\n\n",
"Valentin Malykh \nHuawei Noah's Ark lab\n\n",
"Andrey Bout \nHuawei Noah's Ark lab\n\n",
"Irina Piontkovskaya piontkovskaya.irina@huawei.com \nHuawei Noah's Ark lab\n\n",
"Dmitry Lamanov \nHuawei Noah's Ark lab\n\n",
"Pavel Burnyshev \nHuawei Noah's Ark lab\n\n",
"Ekaterina Artemova \nHuawei Noah's Ark lab\n\n\nHSE University\n\n",
"Valentin Malykh \nHuawei Noah's Ark lab\n\n",
"Andrey Bout \nHuawei Noah's Ark lab\n\n",
"Irina Piontkovskaya piontkovskaya.irina@huawei.com \nHuawei Noah's Ark lab\n\n"
] | [
"Huawei Noah's Ark lab\n",
"Huawei Noah's Ark lab\n",
"Huawei Noah's Ark lab\n",
"HSE University\n",
"Huawei Noah's Ark lab\n",
"Huawei Noah's Ark lab\n",
"Huawei Noah's Ark lab\n",
"Huawei Noah's Ark lab\n",
"Huawei Noah's Ark lab\n",
"Huawei Noah's Ark lab\n",
"HSE University\n",
"Huawei Noah's Ark lab\n",
"Huawei Noah's Ark lab\n",
"Huawei Noah's Ark lab\n"
] | [] | The recent advances in transfer learning techniques and pre-training of large contextualized encoders foster innovation in real-life applications, including dialog assistants. Practical needs of intent recognition require effective data usage and the ability to constantly update supported intents, adopting new ones, and abandoning outdated ones. In particular, the generalized zero-shot paradigm, in which the model is trained on the seen intents and tested on both seen and unseen intents, is taking on new importance. In this paper, we explore the generalized zero-shot setup for intent recognition. Following best practices for zero-shot text classification, we treat the task with a sentence pair modeling approach. We outperform previous state-of-the-art f1-measure by up to 16% for unseen intents, using intent labels and user utterances and without accessing external sources (such as knowledge bases). Further enhancement includes lexicalization of intent labels, which improves performance by up to 7%. By using task transferring from other sentence pair tasks, such as Natural Language Inference, we gain additional improvements. | 10.48550/arxiv.2206.10914 | [
"https://export.arxiv.org/pdf/2206.10914v1.pdf"
] | 249,926,703 | 2206.10914 | 53d80a2dad295e2ccb237d21fc3704bde17ce967 |
Template-based Approach to Zero-shot Intent Recognition
Dmitry Lamanov
Huawei Noah's Ark lab
Pavel Burnyshev
Huawei Noah's Ark lab
Ekaterina Artemova
Huawei Noah's Ark lab
HSE University
Valentin Malykh
Huawei Noah's Ark lab
Andrey Bout
Huawei Noah's Ark lab
Irina Piontkovskaya piontkovskaya.irina@huawei.com
Huawei Noah's Ark lab
Template-based Approach to Zero-shot Intent Recognition
The recent advances in transfer learning techniques and pre-training of large contextualized encoders foster innovation in real-life applications, including dialog assistants. Practical needs of intent recognition require effective data usage and the ability to constantly update supported intents, adopting new ones, and abandoning outdated ones. In particular, the generalized zero-shot paradigm, in which the model is trained on the seen intents and tested on both seen and unseen intents, is taking on new importance. In this paper, we explore the generalized zero-shot setup for intent recognition. Following best practices for zero-shot text classification, we treat the task with a sentence pair modeling approach. We outperform previous state-of-the-art f1-measure by up to 16% for unseen intents, using intent labels and user utterances and without accessing external sources (such as knowledge bases). Further enhancement includes lexicalization of intent labels, which improves performance by up to 7%. By using task transferring from other sentence pair tasks, such as Natural Language Inference, we gain additional improvements.
Introduction
User intent recognition is one of the key components of dialog assistants. With the advent of deep learning models, deep classifiers have been used throughout to recognize user intents. A common setup for the task (Chen et al., 2019;Wu et al., 2020;Casanueva et al., 2020) involves an omnipresent pre-trained language model (Devlin et al., 2018;Liu et al., 2019b;Sanh et al., 2019), equipped with a classification head, learned to predict intents. However, if the dialog assistant is extended with new skills or applications, new intents may appear. In this case, the intent recognition model needs to be re-trained. In turn, re-training the model requires annotated data, the scope of which is inherently limited. Hence, handling unseen events defies the common setup and poses new challenges. To this end, generalized zero-shot (GZS) learning scenario (Xian et al., 2018), in which the model is presented at the training phase with seen intents and at the inference phase with both seen and unseen intents, becomes more compelling and relevant for real-life setups. The main challenge lies in developing a model capable of processing seen and unseen intents at comparable performance levels.
Recent frameworks for GZS intent recognition are designed as complex multi-stage pipelines, which involve: detecting unseen intents (Yan et al., 2020), learning intent prototypes (Si et al., 2021), leveraging common sense knowledge graphs (Siddique et al., 2021). Such architecture choices may appear untrustworthy: using learnable unseen detectors leads to cascading failures; relying on external knowledge makes the framework hardly adjustable to low-resource domains and languages. Finally, interactions between different framework's components may be not transparent, so it becomes difficult to trace back the prediction and guarantee the interpretability of results.
At the same time, recent works in general-domain GZL classification are centered on the newly established approach of Yin et al. (2019), who formulate the task as a textual entailment problem. The class's description is treated as a hypothesis and the text as a premise. The GZL classification becomes a binary problem: to predict whether the premise entails the hypothesis or not. Entailment-based approaches have been successfully used for information extraction (Haneczok et al., 2021; Lyu et al.; Sainz and Rigau, 2021) and for dataless classification (Ma et al., 2021). However, the entailment-based setup has not been properly explored for GZS intent recognition to the best of our knowledge.
This paper aims to fill in the gap and extensively evaluate entailment-based approaches for GZS intent recognition. Given a meaningful intent label, such as reset_settings, and an input utterance, such as I want my original settings back, the classifier is trained to predict if the utterance should be assigned with the presented intent or not. To this end, we make use of pre-trained language models, which encode a two-fold input (intent label and an utterance) simultaneously and fuse it at intermediate layers with the help of the attention mechanism.
We adopt three dialog datasets for GZS intent recognition and show that sentence pair modeling outperforms competing approaches and establishes new state-of-the-art results. Next, we implement multiple techniques, yielding an even higher increase in performance. Noticing that in all datasets considered, most intent labels are either noun or verb phrases, we implement a small set of lexicalizing templates that turn intent labels into plausible sentences. For example, an intent label reset_settings is re-written as The user wants to reset settings. Such lexicalized intent labels appear less surprising to the language model than intact intent labels. Hence, lexicalization of intent labels helps the language model to learn correlations between inputs efficiently. Other improvements are based on standard engineering techniques, such as hard example mining and task transferring.
Last but not least, we explore two setups in which even less data is provided, by restricting access to various parts of the annotated data. First, if absolutely no data is available, we explore strategies for transferring from models pre-trained with natural language inference data. Second, in the dataless setup, where only seen intent labels are granted and there are no annotated utterances, we seek to generate synthetic data from them by using off-the-shelf models for paraphrasing. We show that the sentence pair modeling approach to GZS intent recognition delivers adequate results, even when trained with synthetic utterances, but fails to transfer from other datasets.
The key contributions of the paper are as follows:
1. we discover that the sentence pair modeling approach to GZS intent recognition establishes new state-of-the-art results;
2. we show that lexicalization of intent labels yields further significant improvements;
3. we use task transferring and training in the dataless regime, and conduct error analysis to investigate the strengths and weaknesses of the sentence pair modeling approach.
Related Work
Our work is related to two lines of research: zero-shot learning with natural language descriptions and intent recognition. We focus on adopting existing ideas for zero-shot text classification to intent recognition. Zero-shot learning has shown tremendous progress in NLP in recent years. The scope of the tasks studied in the GZS setup ranges from text classification (Yin et al., 2019) to event extraction (Haneczok et al., 2021; Lyu et al.), named entity recognition (Li et al., 2020) and entity linking (Logeswaran et al., 2019). A number of datasets for benchmarking zero-shot methods have been developed. To name a few, Yin et al. (2019)
Intent recognition Supervised intent recognition requires training a classifier with a softmax layer on top. Off-the-shelf pre-trained language models or sentence encoders are used to embed an input utterance, fed further to the classifier (Casanueva et al., 2020). Augmentation techniques help to increase the amount of training data and increase performance (Xia et al., 2020). Practical needs require the classifier to support emerging intents. Re-training a traditional classifier may turn out resource-greedy and costly. This motivates work in (generalized) zero-shot intent recognition, i.e. handling seen and unseen intents simultaneously. Early approaches to GZS intent recog-nition adopted capsule networks to learn lowdimensional representations of intents. IntentCap-sNet (Xia et al., 2018) is built upon three capsule modules, organized hierarchically: the lower module extracts semantic features from input utterances. Two upper modules execute recognition of seen and unseen intents independently from each other. ReCapsNet (Liu et al., 2019a) is built upon a transformation schema, which detects unseen events and makes predictions based on unseen intents' similarity to the seen ones. SEG (Yan et al., 2020) utilizes Gaussian mixture models to learn intent representations by maximising margins between them. One of the concurrent approaches, CTIR (Si et al., 2021) (Class-Transductive Intent Representations) learns intent representations from intent labels to model inter-intent connections. CTIR is not a stand-alone solution but rather integrates existing models, such as BERT, CNN, or CapsNet. The framework expands the prediction space at the training stage to be able to include unseen classes, with the unseen label names serving as pseudo-utterances. The current state-of-the-art performance belongs to RIDE (Siddique et al., 2021), an intent detection model that leverages common knowledge from ConceptNet. RIDE captures semantic relationships between utterances and intent labels considering concepts in an utterance linked to those in an intent label.
3 Sentence pair modelling for intent recognition
Problem formulation
Let X be the set of utterances, S = {y_1, ..., y_k} be the set of seen intents and U = {y_{k+1}, ..., y_n} be the set of unseen intents. The training data consists of annotated utterances {(x_i, y_j)}. At test time, the model is presented with a new utterance. In the GZS setup, the model chooses an intent from both seen and unseen sets, y_j ∈ S ∪ U.
Our approach
A contextualized encoder is trained to make a binary prediction: whether the utterance x_i is assigned with the intent y_j or not. The model encodes the intent description and the utterance, concatenated by the separation token [SEP]. The representation of the [CLS] token is fed into a classification head, which makes the desired prediction P(1|y_j, x_i). This approach follows the standard sentence pair (SP) modeling setup. Given an intent y_j, the model is trained to make a positive prediction for an in-class utterance x_i^+ and a negative prediction for an out-of-class utterance x_i^-, sampled from another intent. At training time, the model is trained with seen intents only, y_j ∈ S.
At test time, given an utterance x_i^test, we loop over all intents y_j ∈ S ∪ U and record the probability of the positive class. Finally, we assign to the utterance x_i^test the intent y* that yields the maximum probability of the positive class:
y^* = \arg\max_{y_j \in S \cup U} P(1 \mid y_j, x_i^{\mathrm{test}})
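A minimal sketch of this inference loop (our own illustration; it assumes a sentence-pair classifier already fine-tuned for the binary decision, and the checkpoint path is a placeholder):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("path/to/finetuned-sp-model")  # placeholder
model.eval()

def predict_intent(utterance, intents):
    # Score every seen and unseen intent description against the utterance, take the argmax.
    enc = tokenizer(list(intents), [utterance] * len(intents),
                    padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = model(**enc).logits.softmax(dim=-1)[:, 1]   # P(1 | y_j, x)
    return intents[int(probs.argmax())]

print(predict_intent("I want my original settings back",
                     ["the user wants to reset settings", "the user wants to book a hotel"]))
```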
Contextualized encoders. We use RoBERTa base (Liu et al., 2019b) as the main and default contextualized encoder in our experiments, as it shows superior performance to BERT (Devlin et al., 2018) in many downstream applications. RoBERTa's distilled version, Distil-RoBERTa (Sanh et al., 2019) is used to evaluate lighter, less computationally expensive models. Also, we use a pre-trained task-oriented dialogue model, TOD-BERT (Wu et al., 2020) to evaluate whether domain models should be preferred.
We used models released by the HuggingFace library (Wolf et al., 2020): roberta-base, distilroberta-base and TODBERT/TOD-BERT-JNT-V1.
Negative sampling strategies include (i) sampling negative utterances for a fixed intent, denoted as (y_j, x_i^+), (y_j, x_i^-); and (ii) sampling negative intents for a fixed utterance, denoted as (y_j^+, x_i), (y_j^-, x_i). Both strategies support sampling with hard examples. In the first case (i), we treat an utterance x_i^- as a hard negative for intent y_j if there exists an in-class utterance x_i^+ such that the similarity between x_i^+ and x_i^- is higher than a predefined threshold. To compute semantic similarity, we make use of SentenceBERT (Reimers and Gurevych, 2019) cosine similarity. For a given positive in-class utterance, we selected the top-100 most similar negative out-of-class utterances based on the values of cosine similarity. In the second case (ii), we use the same approach to sample hard negative intents y_j^-, given an utterance x_i assigned with the positive intent y_j^+. Again, we compute semantic similarity between intent labels and sample an intent y_j^- with probability based on its similarity score with the intent y_j^+. To justify the need to sample hard negative examples, we experiment with random sampling, choosing randomly (iii) negative utterances or (iv) negative intents.
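A minimal sketch of the hard-negative utterance mining described above (our own illustration; it assumes the sentence-transformers package, and the concrete encoder checkpoint is our choice, not necessarily the one used in the experiments):

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-mpnet-base-v2")   # any SentenceBERT-style encoder

def hard_negative_utterances(positive_utts, negative_utts, top_k=100):
    """For each in-class utterance, return the top-k most similar out-of-class utterances."""
    pos_emb = encoder.encode(positive_utts, convert_to_tensor=True)
    neg_emb = encoder.encode(negative_utts, convert_to_tensor=True)
    sims = util.cos_sim(pos_emb, neg_emb)             # [num_pos, num_neg] cosine similarities
    top = sims.topk(k=min(top_k, len(negative_utts)), dim=-1).indices
    return [[negative_utts[j] for j in row] for row in top.tolist()]
```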
Lexicalization of intent labels utilizes simple grammar templates to convert intent labels into natural-sounding sentences. For this aim, we utilize two types of templates: (i) declarative templates ("the user wants to") and (ii) question templates ("does the user want to"). Most intent labels take the form of a verb phrase (VERB NOUN+), such as book_hotel, or a noun phrase (NOUN+), such as flight_status. We develop a set of rules that parses an intent label, detects whether it is a verb phrase or a noun phrase 1, and lexicalizes it using one of the templates via the following expression: template + VERB + a/an + NOUN+. If the intent label is recognized as a noun phrase, the VERB slot is filled with an auxiliary verb, "get". This way, we obtain such sentences as the user wants to book a hotel and does the user want to get a flight status. The templates implemented are shown in Table 1.
Lexicalization templates were constructed from the most frequent utterance prefixes, computed over all datasets. This way, lexicalized intents sound natural and are close to real utterances. We use declarative and question templates because the datasets consist of these utterance types. We experimented with a larger number of lexicalization templates, but as there was no significant difference in performance, we limited ourselves to two templates of each kind for the sake of brevity.
Task transferring. Task transferring from other tasks to GZS intent recognition allows us to estimate whether (i) pre-trained task-specific models can be used without any additional fine-tuning, reducing the need for annotated data, and (ii) pre-training on other tasks followed by further fine-tuning is beneficial for the final performance.
There are multiple tasks and fine-tuned contextualized encoders which we may exploit for task transferring experiments. For the sake of time and resources, we did not fine-tune any models on our own, but rather adopted a few suitable models from the HuggingFace library, which were fine-tuned on the Multi-Genre Natural Language Inference (MultiNLI) dataset (Williams et al., 2018): BERT-NLI (textattack/bert-base-uncased-MNLI), BART-NLI (bart-large-mnli), and RoBERTa-NLI (textattack/roberta-base-MNLI).
Dataless classification
We experiment with a dataless classification scenario, in which we train the models on synthetic data. To this end, we use three pre-trained paraphrasing models to paraphrase lexicalized intent labels. For example, the intent label get alarms is first lexicalized as "tell the user how to get alarms" and then paraphrased as "What's the best way to get an alarm?". Next, we merge all sentences paraphrased with the different models into a single training set. Finally, we train the GZS model with the lexicalized intent labels and their paraphrased versions, without using any annotated utterances.
The T5-based (Raffel et al., 2020) and Pegasus-based (Zhang et al., 2020) paraphrasers (ramsrigouthamg/t5_paraphraser, Vamsi/T5_Paraphrase_Paws, tuner007/pegasus_paraphrase) were adopted from the HuggingFace library and were used with default parameters and a beam size of 25.
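As an illustration, the snippet below paraphrases a lexicalized intent label with one of the checkpoints listed above; generation settings other than the beam size are assumptions.

```python
# Illustrative generation of synthetic utterances by paraphrasing a lexicalized
# intent label with one of the checkpoints listed above; settings other than
# the beam size are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("tuner007/pegasus_paraphrase")
par = AutoModelForSeq2SeqLM.from_pretrained("tuner007/pegasus_paraphrase")

def paraphrase(sentence, num_return_sequences=5, num_beams=25):
    batch = tok([sentence], truncation=True, padding="longest", return_tensors="pt")
    outputs = par.generate(**batch, num_beams=num_beams,
                           num_return_sequences=num_return_sequences, max_length=60)
    return tok.batch_decode(outputs, skip_special_tokens=True)

# Synthetic training utterances for a seen intent, used in place of annotated data.
print(paraphrase("the user wants to get alarms"))
```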
Datasets
SGD (Schema-Guided Dialog) (Rastogi et al., 2020) contains dialogues from 16 domains with 46 intents and provides an explicit train/dev/test split aimed at the GZSL setup. Three domains are available only in the test set. This is the only dataset providing short intent descriptions, which we use instead of intent labels. To pre-process the SGD dataset, we keep utterances where users express an intent, selecting utterances in one of two cases: (i) the first utterance in the dialogue and (ii) an utterance that changes the dialogue state and expresses a new intent. We use the pre-processed utterances from the original train/dev/test sets for the GZS setup directly, without any additional splitting.

MultiWoZ 2.2 (Multi-domain Wizard of Oz) (Budzianowski et al., 2018) is treated the same way as SGD: we keep utterances that express an intent, obtaining 27.5K utterances spanning 11 intents from 7 different domains. We use 8 (out of 11) randomly selected intents as seen; for training, we use 30% of the utterances from seen intents. All utterances expressing unseen intents are used for testing. Test utterances for seen intents are sampled in a stratified way, based on their support in the original dataset.
CLINC (Larson et al., 2019) contains 23,700 utterances, of which 22,500 cover 150 in-scope intents grouped into ten domains. We follow the standard practice of randomly selecting 3/4 of the in-scope intents as seen (112 out of 150) and 1/4 as unseen (38 out of 150). The random split was made the same way as for MultiWoZ.
Experiments
Baselines. We use SEG, RIDE, and CTIR as baselines, as they show the current best published results on the three chosen datasets. For SEG (https://github.com/fanolabs/0shot-classification), we were unfortunately unable to run the released code and adopted the published results from the paper. For the RIDE model (https://github.com/RIDE-SIGIR/GZS), we use the base model with a Positive-Unlabeled classifier, as it gives a significant improvement on the SGD and MultiWoZ datasets. We use Zero-Shot DNN and CapsNets along with CTIR (https://github.com/PhoebusSi/CTIR), since these two encoders perform best on unseen intents (Si et al., 2021).
Evaluation metrics commonly used for the task are accuracy (Acc) and F1. The F1 values are per-class averages weighted by their respective support. Following previous works, we report results on seen and unseen intents separately. Evaluation on the overall test set is presented in the Appendix.
We report averaged results along with standard deviation for ten runs of each experiment.
Results of the experiments are presented in Table 2 (see the Appendix for standard deviation estimates). Our approach, SP RoBERTa, when used with intent labels and utterances only, shows significant improvement over the state of the art on all three datasets, both on seen and unseen intents, by accuracy and F1 measures. The only exception is the unseen intents of CLINC, where our approach underperforms in terms of accuracy on unseen intent recognition compared to RIDE. At the same time, RIDE shows a lower recall score in this setup, so our method is more stable and performs well even when the number of classes is high. Similarly to other methods, our method recognizes seen intents better than unseen ones, reaching around 90% accuracy and F1 on the former. Next, with the help of lexicalized intent labels, our approach yields an even more significant improvement on all datasets. The gap between our approach and the baselines becomes wider, reaching 14% accuracy on SGD's unseen intents and coming closer to perfect detection on seen intents across all datasets. The difference between our base approach, SP RoBERTa, and its modification relying on intent lexicalization exceeds 7% on unseen intents for the SGD dataset and reaches 3% on MultiWoZ. Notably, SP RoBERTa does not overfit on seen intents and achieves a consistent increase on both unseen and seen intents compared to previous works.
Ablation study. We perform ablation studies for two parts of the SP RoBERTa approach and present the results for unseen intents in Table 3. In all ablation experiments we use the SP approach with intent labels, to diminish the effect of lexicalization. First, we evaluate the choice of the contextualized encoder, which is at the core of our approach (see the top part of Table 3). We choose between BERT_base, RoBERTa_base, its distilled version DistilRoBERTa, and TOD-BERT. BERT_base provides poorer performance when compared to RoBERTa_base, which may be attributed to the different pre-training setup. At the same time, TOD-BERT's scores are comparable with those of RoBERTa on two datasets, thus diminishing the importance of domain adaptation. A higher standard deviation, observed for the MultiWoZ dataset, makes the results less reliable. The performance of DistilRoBERTa is almost on par with its teacher, RoBERTa, indicating that our approach can be used with a less computationally expensive model almost without sacrificing quality.
Second, we experiment with the choice of negative sampling strategy (see the middle part of Table 3), in which we can sample either random or hard negative examples for both intents and utterances. The overall trend shows that sampling hard examples improves over random sampling (by up to 6% accuracy for the SGD dataset).

Choice of lexicalization templates

Table 4 demonstrates the performance of SP RoBERTa with respect to the choice of lexicalization templates. Regardless of which template is used, the results outperform SP RoBERTa with plain intent labels. The choice of lexicalization template only slightly affects performance: the gap between the best- and worst-performing template across all datasets is about 2%. The only exception is q_2, which hurts the performance metrics on two datasets. In total, this indicates that our approach benefits from using any of the lexicalization templates, while the exact choice of template is less important. What is more, there is no evidence that declarative templates should be preferred to questions or vice versa.
Further adjustment of the intent lexicalization templates and their derivation from the datasets is left for future research. Other promising directions include using multiple lexicalized intent labels jointly, which provides opportunities for off-the-shelf augmentation at both test and training time.
Task transferring results are presented in the bottom part of Table 3. First, we experiment with zero-shot task transferring, using RoBERTa-NLI to make predictions only, without any additional fine-tuning on the intent recognition datasets. This experiment leads to almost random results, except for the SGD dataset, where the model reaches about 30% correct predictions.
However, models pre-trained on MNLI and further fine-tuned for intent recognition gain a significant improvement of up to 7%. The improvement is even more notable in the performance of BART-NLI, which obtains the highest results, probably because of the model's size.
Dataless classification results are shown in Table 5. This experiment compares training on two datasets: (i) intent labels and original utterances, and (ii) intent labels and synthetic utterances obtained by paraphrasing lexicalized intent labels. In the latter case, the only available data is the set of seen intent labels, used as input to SP RoBERTa and for further paraphrasing. Surprisingly, the performance declines only moderately: the metrics drop by up to 30% for seen intents and up to 10% for unseen intents. This indicates that (a) the model learns more from the original data due to its higher diversity and variety, and (b) paraphrasing models can re-create some of the correlations from which the model learns.
The series of experiments on transfer learning and dataless classification targets real-life scenarios in which different amounts of annotated data are available. First, in zero-shot transfer learning, we do not access the training datasets at all (Table 2, Zero-shot RoBERTa NLI). Second, in the dataless setup, we access only the seen intent labels, which we utilize both as class labels and as a source for creating synthetic utterances (Table 5). Third, our main experiments assume that both seen intents and utterances are available (Table 2, SP RoBERTa). In the second scenario, we are able to obtain scores reasonably close to the best-performing model. We believe that efficient use of intent labels in general, and for generating synthetic data in particular, is an important direction for future research.
Analysis
Error analysis shows that SP RoBERTa tends to confuse intents which (i) are assigned semantically similar labels or (ii) share a word.
For example, an unseen intent get_train_tickets gets confused with the seen intent find_trains. Similarly, pairs of seen intents play_media and play_song or find_home_by_area and search_house are hard to distinguish.
We checked whether errors in intent recognition are caused by surface or syntactic features of the utterances. The following observations hold for the SGD dataset. Utterances that take the form of a question are more likely to be classified correctly: 93% of questions are assigned the correct intent labels, while there is a drop for declarative utterances, of which 90% are recognized correctly. The model's performance is not affected by the frequency of the first words in the utterance. Of the 11,360 utterances in the test set, 4,962 start with 3-grams that occur more than 30 times. Of these utterances, 9% are misclassified, while of the remaining utterances, which start with rarer words, 10% are misclassified.
The top-3 most frequent 3-grams at the beginning of an utterance are I want to, I would like, I need to.
Stress test for NLI models (Naik et al., 2018) is a typology of the standard errors of sentence pair models, from which we picked several typical error types that can be easily checked without additional human annotation. We examine whether one of the following factors leads to an erroneous prediction: (i) word overlap between an intent label and an utterance; (ii) the length of an utterance; (iii) negation or double negation in an utterance; (iv) numbers, if used in an utterance. Additionally, we measured the semantic similarity between intent labels and user utterances with the SentenceBERT cosine function to check whether it impacts performance. Table 6 displays the stress test results for one of the runs of SP RoBERTa, trained with the q_1 template on the SGD dataset. This model shows reasonable performance, and its stress test results are similar to models trained with other templates. The results are averaged over the test set. An utterance is more likely to be correctly predicted if it shares at least one token with the intent label. However, the semantic similarity between intent labels and utterances matters less and is relatively low for both correct and incorrect predictions. Longer utterances, and utterances that contain digits, tend to be correctly classified more frequently. The latter may be attributed to the fact that numbers are important features for intents related to doing something on particular dates or with a particular number of people, such as search_house, reserve_restaurant, or book_appointment.
Conclusion
Over the past years, there has been a trend of utilizing natural language descriptions for various tasks, ranging from dialog state tracking (Cao and Zhang, 2021) and named entity recognition (Li et al., 2020) to the most recent works in text classification employing Pattern-Exploiting Training (PET) (Schick and Schütze, 2020). Supervision expressed in natural language in most cases not only improves performance but also enables the exploration of real-life setups, such as few-shot or (generalized) zero-shot learning. The success of such methods is commonly attributed to the efficiency of pre-trained contextualized encoders, which comprise enough prior knowledge to relate the textual task descriptions to the text inputs of the model.
Task-oriented dialogue assistants require the ability to support emerging intents in a resource-efficient way, without re-training the intent recognition head from scratch. This problem lies well within the generalized zero-shot paradigm. To address it, we present a simple yet efficient approach based on sentence pair modeling, suited for intent recognition datasets in which each intent is equipped with a meaningful intent label. We establish new state-of-the-art results by using intent labels paired with user utterances as input to a contextualized encoder and conducting simple binary classification. Besides, to turn intent labels into plausible sentences better accepted by pre-trained models, we utilize an easy set of lexicalization templates. This heuristic alone yields a further improvement, increasing the gap to the previous best methods. Task transferring from other sentence pair modeling tasks leads to even better performance.
However, our approach has a few limitations: it is resource-greedy, as it requires looping over all intents for a given utterance. Next, the intent labels may not be available or may take the form of numerical indices. The first limitation might be overcome by adopting efficient ranking algorithms from the Information Retrieval area; abstractive summarization applied to user utterances might generate meaningful intent labels. These research questions open a few directions for future work.
A Reproducibility Checklist
A.1 Code
Our code is enclosed in this submission: gzsl.zip.
A.2 Computing infrastructure
Each experiment runs on a single NVIDIA V100 16Gb. The longest experiment was running for less than 2.5 hours.
A.3 Datasets
All used datasets are described in the paper. Pre-processing for the SGD and MultiWoZ datasets includes (i) selecting utterances from dialogues where users express a new intent and (ii) cleaning uninformative short phrases like acknowledgments and greetings. Preprocessed datasets are also included in gzsl.zip. The SGD dataset is released under the CC BY-SA 4.0 license. The MultiWoZ dataset is released under the Apache License 2.0. To the best of our knowledge, the CLINC dataset is released under the CC-BY-3.0 license.
A.4 Randomness
All experiments could be reproduced using the fixed set of seeds {11..20}.
A.5 Evaluation metrics
All used metrics and our motivation to use them are described in the main paper. Metrics and an evaluation script are implemented in our code.
A.6 Models and hyperparameters
Our sentence pair model consists of the contextualized encoder itself, a dropout layer, and a linear layer on top of the embedding of the [CLS] token. All hyperparameters for the model are fixed in the submitted configs. Transformer tokenizers use truncation for utterances and intent descriptions to speed up execution time. The specific values for the lexicalized and non-lexicalized setups are reported in README.md. Batch size, learning rate, scheduler, warm-up steps ratio, and other experiment parameters are specified for each dataset and fixed in the configs. For hard negative sampling, we use the top-100 out-of-class utterances most similar to a positive one as the threshold.
A.7 Hyperparameter Search
We performed hyperparameter search using the following grid for each dataset.
• Learning rate: [2e-5, 5e-5]
• Batch size: [8, 16]
• Warm-up steps ratio: [0.10, 0.15]
• Utterance max length: [20, 30, 40]
• Negative samples k: [5, 7]

For each hyperparameter configuration, we averaged the results over five runs.
A.8 Acceptability evaluation
Lexicalized intent labels help to increase performance since they form more plausible sentences than raw intent labels. This observation can be confirmed by estimating the acceptability of a sentence. We evaluate the acceptability of intent labels and their lexicalized versions with several unsupervised measures, which aim to evaluate the degree to which a sentence is likely to be produced (Lau et al., 2017). We use the acceptability evaluation tool from Lau et al. (2020) with default settings. The following acceptability measures are used: LP stands for the unnormalized log probability of the sentence, estimated by a language model; LP_mean and LP_pen are differently normalized versions of LP with respect to sentence length; LP_norm and SLOR utilize additional normalization with unigram probabilities computed over a large text corpus. In this experiment, BERT_large is used as the default language model; unigram probabilities are pre-computed from bookcorpus-wikipedia. Higher acceptability scores stand for a higher likelihood of the sentence: more plausible and more natural-sounding sentences gain higher acceptability scores.
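For reference, the snippet below sketches two of these measures following the standard definitions of Lau et al. (2017); the log-probabilities passed in are dummy values standing in for a language model and a unigram model.

```python
# Illustrative computation of two of the acceptability measures above, following
# the standard definitions of Lau et al. (2017); the log-probabilities are dummy
# numbers standing in for a language model and a unigram model.
def mean_lp(lm_logprob, n_tokens):
    # LP_mean: sentence log-probability normalized by sentence length.
    return lm_logprob / n_tokens

def slor(lm_logprob, unigram_logprob, n_tokens):
    # SLOR: subtract the unigram log-probability before length normalization,
    # so that frequent words alone do not make a sentence look acceptable.
    return (lm_logprob - unigram_logprob) / n_tokens

print(mean_lp(-43.16, 8), slor(-43.16, -58.9, 8))   # dummy values
```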
We apply one of the lexicalization patterns to all intent labels, score each resulting sentence, and average the obtained scores. Tables 7-9 present the results of the acceptability evaluation for each dataset. As expected, the raw intent labels gain lower acceptability scores, while the lexicalized patterns receive higher acceptability scores. We may treat the acceptability of a pattern as a proxy for its performance, since the SLOR value of the poorly performing q_2 pattern is lower than for the other patterns.
Table 2: Comparison of different methods. SP stands for Sentence Pair modeling approach. SP RoBERTa (ours) shows consistent improvements of F1 across all datasets for seen and unseen intents. The usage of lexicalized templates improves performance.

Method | SGD Unseen (Acc/F1) | SGD Seen (Acc/F1) | MultiWoZ Unseen (Acc/F1) | MultiWoZ Seen (Acc/F1) | CLINC Unseen (Acc/F1) | CLINC Seen (Acc/F1)
SEG | 0.372/0.403 | 0.613/0.636 | 0.371/0.414 | 0.652/0.646 | - | -
RIDE+PU | 0.590/0.573 | 0.832/0.830 | 0.569/0.521 | 0.884/0.885 | 0.798/0.573 | 0.908/0.912
ZSDNN + CTIR | 0.603/0.580 | 0.809/0.878 | 0.468/0.437 | 0.827/0.892 | 0.561/0.493 | 0.904/0.871
CapsNet + CTIR | 0.567/0.507 | 0.897/0.912 | 0.481/0.404 | 0.903/0.906 | 0.530/0.572 | 0.866/0.883
SP RoBERTa (ours) | 0.698/0.732 | 0.917/0.925 | 0.606/0.686 | 0.903/0.919 | 0.661/0.742 | 0.946/0.954
SP RoBERTa + templates (ours) | 0.750/0.805 | 0.931/0.934 | 0.624/0.722 | 0.941/0.948 | 0.692/0.766 | 0.927/0.931
Table 3: Ablation study and task transferring: comparison on unseen intents. Top: comparison of different contextualized encoders; middle: negative sampling strategies; bottom: task transferring.
Table 5: Dataless classification. Metrics are reported on seen and unseen intents. Fine-tuning SP RoBERTa on synthetic utterances (bottom) shows a moderate decline compared to training on real utterances (top).
Table 6: Stress test of SP RoBERTa predictions. An utterance is more likely to be correctly predicted if it shares at least one token with the intent label.
Table 7: Averaged acceptability scores, computed for the CLINC dataset. Rows stand for intent labels without any changes or lexicalized using one of the patterns. Higher acceptability scores mean that a sentence is more likely to be grammatical and sound natural. Intent labels are less acceptable, while their lexicalized versions form plausible sentences.

ID | LP | LP_mean | LP_pen | LP_norm | SLOR
labels | -34.58 | -18.40 | -30.32 | -1.84 | -8.44
d_1 | -43.16 | -5.55 | -23.50 | -0.74 | 1.97
d_2 | -49.92 | -5.68 | -25.60 | -0.77 | 1.67
q_1 | -46.71 | -5.31 | -23.93 | -0.71 | 2.12
q_2 | -43.86 | -6.48 | -25.48 | -0.87 | 0.98

Table 8: Acceptability measures, computed for the SGD dataset.

ID | LP | LP_mean | LP_pen | LP_norm | SLOR
labels | -43.18 | -18.45 | -36.31 | -1.96 | -9.01
d_1 | -38.93 | -5.45 | -22.08 | -0.72 | 2.11
d_2 | -43.00 | -5.29 | -22.92 | -0.71 | 2.09
q_1 | -41.91 | -5.14 | -22.32 | -0.69 | 2.31
q_2 | -39.36 | -6.44 | -23.93 | -0.86 | 1.07

Table 9: Acceptability measures, computed for the MultiWOZ dataset.

ID | LP | LP_mean | LP_pen | LP_norm | SLOR
labels | -41.92 | -20.96 | -37.06 | -2.28 | -11.77
d_1 | -31.45 | -4.49 | -18.07 | -0.62 | 2.71
d_2 | -33.90 | -4.24 | -18.26 | -0.60 | 2.83
q_1 | -33.73 | -4.22 | -18.17 | -0.59 | 2.93
q_2 | -31.87 | -5.31 | -19.62 | -0.75 | 1.78
Table 10: Ablation study, task transferring and lexicalization patterns for SGD dataset. Top: comparison of negative sampling strategies of intent sampling (IS) and utterance sampling (US); middle: task transferring from the MNLI dataset, using various fine-tuned models; bottom: comparison of different lexicalization patterns, improving performance of SP RoBERTa.

Method | Unseen Acc | Unseen F1 | Seen Acc | Seen F1 | Overall Acc | Overall F1
SP RoBERTa + random IS | 0.687 ± 0.018 | 0.716 ± 0.016 | 0.916 ± 0.005 | 0.922 ± 0.004 | 0.884 ± 0.006 | 0.886 ± 0.005
SP RoBERTa + random US | 0.677 ± 0.017 | 0.707 ± 0.014 | 0.919 ± 0.005 | 0.932 ± 0.006 | 0.885 ± 0.005 | 0.893 ± 0.005
SP RoBERTa + hard IS | 0.741 ± 0.010 | 0.786 ± 0.017 | 0.884 ± 0.010 | 0.891 ± 0.012 | 0.864 ± 0.009 | 0.868 ± 0.010
SP RoBERTa + hard US | 0.698 ± 0.012 | 0.732 ± 0.019 | 0.917 ± 0.003 | 0.925 ± 0.003 | 0.887 ± 0.005 | 0.893 ± 0.008
SP RoBERTa-NLI | 0.748 ± 0.026 | 0.801 ± 0.028 | 0.923 ± 0.004 | 0.929 ± 0.003 | 0.898 ± 0.005 | 0.905 ± 0.005
SP BERT-NLI | 0.693 ± 0.017 | 0.738 ± 0.015 | 0.918 ± 0.002 | 0.924 ± 0.001 | 0.886 ± 0.003 | 0.892 ± 0.002
SP BART-NLI | 0.789 ± 0.024 | 0.830 ± 0.030 | 0.917 ± 0.000 | 0.924 ± 0.000 | 0.899 ± 0.003 | 0.907 ± 0.005
SP RoBERTa + d_1 patterns | 0.750 ± 0.019 | 0.805 ± 0.021 | 0.931 ± 0.006 | 0.934 ± 0.004 | 0.906 ± 0.004 | 0.909 ± 0.002
SP RoBERTa + d_2 patterns | 0.752 ± 0.003 | 0.804 ± 0.006 | 0.927 ± 0.007 | 0.932 ± 0.004 | 0.902 ± 0.005 | 0.908 ± 0.003
SP RoBERTa + q_1 patterns | 0.765 ± 0.019 | 0.818 ± 0.021 | 0.922 ± 0.010 | 0.927 ± 0.010 | 0.900 ± 0.007 | 0.905 ± 0.007
SP RoBERTa + q_2 patterns | 0.753 ± 0.026 | 0.807 ± 0.026 | 0.927 ± 0.005 | 0.931 ± 0.002 | 0.903 ± 0.004 | 0.908 ± 0.004

Table 11: Ablation study, task transferring and lexicalization patterns for MultiWoZ dataset.

Method | Unseen Acc | Unseen F1 | Seen Acc | Seen F1 | Overall Acc | Overall F1
SP RoBERTa + random IS | 0.594 ± 0.180 | 0.705 ± 0.157 | 0.903 ± 0.055 | 0.912 ± 0.047 | 0.769 ± 0.082 | 0.767 ± 0.084
SP RoBERTa + random US | 0.531 ± 0.218 | 0.632 ± 0.217 | 0.930 ± 0.036 | 0.938 ± 0.027 | 0.742 ± 0.096 | 0.730 ± 0.106
SP RoBERTa + hard IS | 0.561 ± 0.177 | 0.680 ± 0.136 | 0.937 ± 0.024 | 0.943 ± 0.016 | 0.771 ± 0.083 | 0.761 ± 0.091
SP RoBERTa + hard US | 0.606 ± 0.244 | 0.686 ± 0.234 | 0.903 ± 0.033 | 0.919 ± 0.030 | 0.764 ± 0.099 | 0.754 ± 0.108
SP RoBERTa-NLI | 0.669 ± 0.185 | 0.758 ± 0.151 | 0.943 ± 0.014 | 0.948 ± 0.012 | 0.808 ± 0.088 | 0.806 ± 0.089
SP BERT-NLI | 0.624 ± 0.231 | 0.715 ± 0.197 | 0.941 ± 0.011 | 0.948 ± 0.010 | 0.785 ± 0.103 | 0.782 ± 0.105
SP BART-NLI | 0.673 ± 0.174 | 0.753 ± 0.143 | 0.946 ± 0.012 | 0.950 ± 0.010 | 0.820 ± 0.079 | 0.814 ± 0.086
SP RoBERTa + d_1 patterns | 0.624 ± 0.231 | 0.722 ± 0.175 | 0.941 ± 0.011 | 0.948 ± 0.010 | 0.785 ± 0.103 | 0.782 ± 0.105
SP RoBERTa + d_2 patterns | 0.610 ± 0.219 | 0.713 ± 0.201 | 0.944 ± 0.013 | 0.948 ± 0.011 | 0.786 ± 0.095 | 0.781 ± 0.104
SP RoBERTa + q_1 patterns | 0.621 ± 0.208 | 0.727 ± 0.174 | 0.946 ± 0.010 | 0.949 ± 0.010 | 0.789 ± 0.097 | 0.786 ± 0.101
SP RoBERTa + q_2 patterns | 0.599 ± 0.212 | 0.702 ± 0.188 | 0.943 ± 0.020 | 0.948 ± 0.015 | 0.778 ± 0.094 | 0.775 ± 0.097

Table 12: Ablation study, task transferring and lexicalization patterns for CLINC dataset.

Method | Unseen Acc | Unseen F1 | Seen Acc | Seen F1 | Overall Acc | Overall F1
SP RoBERTa + random IS | 0.639 ± 0.038 | 0.731 ± 0.028 | 0.894 ± 0.009 | 0.903 ± 0.010 | 0.768 ± 0.017 | 0.760 ± 0.017
SP RoBERTa + random US | 0.658 ± 0.043 | 0.735 ± 0.036 | 0.942 ± 0.007 | 0.903 ± 0.010 | 0.791 ± 0.024 | 0.816 ± 0.019
SP RoBERTa + hard IS | 0.590 ± 0.039 | 0.669 ± 0.036 | 0.881 ± 0.008 | 0.901 ± 0.010 | 0.763 ± 0.020 | 0.754 ± 0.018
SP RoBERTa + hard US | 0.661 ± 0.033 | 0.742 ± 0.028 | 0.946 ± 0.007 | 0.954 ± 0.005 | 0.794 ± 0.018 | 0.789 ± 0.020
SP RoBERTa-NLI | 0.700 ± 0.040 | 0.771 ± 0.031 | 0.950 ± 0.004 | 0.955 ± 0.003 | 0.817 ± 0.020 | 0.836 ± 0.015
SP BERT-NLI | 0.614 ± 0.035 | 0.695 ± 0.026 | 0.930 ± 0.007 | 0.938 ± 0.007 | 0.762 ± 0.020 | 0.791 ± 0.018
SP BART-NLI | 0.770 ± 0.039 | 0.829 ± 0.034 | 0.973 ± 0.003 | 0.976 ± 0.002 | 0.865 ± 0.022 | 0.862 ± 0.024
SP RoBERTa + d_1 patterns | 0.692 ± 0.031 | 0.766 ± 0.028 | 0.927 ± 0.009 | 0.931 ± 0.008 | 0.802 ± 0.018 | 0.817 ± 0.015
SP RoBERTa + d_2 patterns | 0.685 ± 0.035 | 0.756 ± 0.031 | 0.923 ± 0.014 | 0.928 ± 0.012 | 0.796 ± 0.024 | 0.812 ± 0.021
SP RoBERTa + q_1 patterns | 0.670 ± 0.034 | 0.747 ± 0.029 | 0.925 ± 0.010 | 0.930 ± 0.009 | 0.789 ± 0.019 | 0.808 ± 0.015
SP RoBERTa + q_2 patterns | 0.554 ± 0.054 | 0.620 ± 0.055 | 0.919 ± 0.008 | 0.921 ± 0.009 | 0.725 ± 0.029 | 0.752 ± 0.022
Table 13: Dataless classification. Metrics are reported on seen and unseen intents. Fine-tuning SP RoBERTa on synthetic utterances (bottom) shows a moderate decline compared to training on real utterances (top).

Train data | SGD Unseen (Acc/F1) | SGD Seen (Acc/F1) | MultiWoZ Unseen (Acc/F1) | MultiWoZ Seen (Acc/F1) | CLINC Unseen (Acc/F1) | CLINC Seen (Acc/F1)
intent labels + original utterances | 0.687 ± 0.018 / 0.716 ± 0.016 | 0.916 ± 0.005 / 0.922 ± 0.004 | 0.594 ± 0.180 / 0.705 ± 0.157 | 0.903 ± 0.055 / 0.912 ± 0.047 | 0.639 ± 0.038 / 0.731 ± 0.028 | 0.894 ± 0.009 / 0.903 ± 0.010
intent labels + synthetic utterances | 0.666 ± 0.019 / 0.688 ± 0.020 | 0.746 ± 0.014 / 0.778 ± 0.014 | 0.615 ± 0.138 / 0.642 ± 0.090 | 0.621 ± 0.101 / 0.713 ± 0.084 | 0.580 ± 0.045 / 0.613 ± 0.040 | 0.608 ± 0.016 / 0.654 ± 0.009
Table 14: Comparison of different methods. SP stands for Sentence Pair modeling approach. SP RoBERTa (ours) shows consistent improvements of F1 across all datasets for seen and unseen intents. The usage of lexicalized patterns improves performance.

Method | SGD Unseen (Acc/F1) | SGD Seen (Acc/F1) | MultiWoZ Unseen (Acc/F1) | MultiWoZ Seen (Acc/F1) | CLINC Unseen (Acc/F1) | CLINC Seen (Acc/F1)
SEG | 0.372/0.403 | 0.613/0.636 | 0.371/0.414 | 0.652/0.646 | - | -
RIDE+PU | 0.590/0.573 | 0.832/0.830 | 0.569/0.521 | 0.884/0.885 | 0.798/0.573 | 0.908/0.912
ZSDNN + CTIR | 0.603 ± 0.002 / 0.580 ± 0.003 | 0.809 ± 0.006 / 0.878 ± 0.014 | 0.468 ± 0.185 / 0.437 ± 0.176 | 0.827 ± 0.022 / 0.892 ± 0.035 | 0.561 ± 0.059 / 0.493 ± 0.054 | 0.904 ± 0.031 / 0.871 ± 0.026
CapsNet + CTIR | 0.567 ± 0.017 / 0.507 ± 0.026 | 0.897 ± 0.010 / 0.912 ± 0.009 | 0.481 ± 0.174 / 0.404 ± 0.243 | 0.903 ± 0.017 / 0.906 ± 0.026 | 0.530 ± 0.049 / 0.572 ± 0.033 | 0.866 ± 0.014 / 0.883 ± 0.020
SP RoBERTa (ours) | 0.698 ± 0.012 / 0.732 ± 0.019 | 0.917 ± 0.003 / 0.925 ± 0.003 | 0.606 ± 0.244 / 0.686 ± 0.234 | 0.903 ± 0.033 / 0.919 ± 0.030 | 0.661 ± 0.033 / 0.742 ± 0.028 | 0.946 ± 0.007 / 0.954 ± 0.005
SP RoBERTa + patterns (ours) | 0.750 ± 0.019 / 0.805 ± 0.021 | 0.931 ± 0.006 / 0.934 ± 0.004 | 0.624 ± 0.231 / 0.722 ± 0.175 | 0.941 ± 0.011 / 0.948 ± 0.010 | 0.692 ± 0.031 / 0.766 ± 0.028 | 0.927 ± 0.009 / 0.931 ± 0.008
Acknowledgement

Ekaterina Artemova is supported by the framework of the HSE University Basic Research Program.
Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016-5026, Brussels, Belgium. Association for Computational Linguistics.

Jie Cao and Yi Zhang. 2021. A comparative study on schema-guided dialogue state tracking. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 782-796, Online. Association for Computational Linguistics.

Iñigo Casanueva, Tadas Temčinas, Daniela Gerz, Matthew Henderson, and Ivan Vulić. 2020. Efficient intent detection with dual sentence encoders. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, pages 38-45.

Qian Chen, Zhu Zhuo, and Wen Wang. 2019. BERT for joint intent classification and slot filling. arXiv preprint arXiv:1902.10909.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Jacek Haneczok, Guillaume Jacquet, Jakub Piskorski, and Nicolas Stefanovitch. 2021. Fine-grained event classification in news-like text snippets, shared task 2, CASE 2021. In Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), online. Association for Computational Linguistics (ACL).

Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A. Laurenzano, Lingjia Tang, and Jason Mars. 2019. An evaluation dataset for intent classification and out-of-scope prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1311-1316, Hong Kong, China. Association for Computational Linguistics.

Jey Han Lau, Carlos Armendariz, Shalom Lappin, Matthew Purver, and Chang Shu. 2020. How furiously can colorless green ideas sleep? Sentence acceptability in context. Transactions of the Association for Computational Linguistics, 8:296-310.

Jey Han Lau, Alexander Clark, and Shalom Lappin. 2017. Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge. Cognitive Science, 41(5):1202-1241.

Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2020. A unified MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5849-5859.

Han Liu, Xiaotong Zhang, Lu Fan, Xuandi Fu, Qimai Li, Xiao-Ming Wu, and Albert Y.S. Lam. 2019a. Reconstructing capsule networks for zero-shot intent classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4799-4809, Hong Kong, China. Association for Computational Linguistics.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. 2019. Zero-shot entity linking by reading entity descriptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3449-3460.

Qing Lyu, Hongming Zhang, Elior Sulem, and Dan Roth. Zero-shot event extraction via transfer learning: Challenges and insights.

Tingting Ma, Jin-Ge Yao, Chin-Yew Lin, and Tiejun Zhao. 2021. Issues with entailment-based zero-shot text classification. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 786-796, Online. Association for Computational Linguistics.

Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340-2353.
| [
"https://github.com/fanolabs/",
"https://github.com/RIDE-SIGIR/GZS",
"https://github.com/PhoebusSi/CTIR"
] |
[
"Structured Summarization: Unified Text Segmentation and Segment Labeling as a Generation Task",
"Structured Summarization: Unified Text Segmentation and Segment Labeling as a Generation Task"
] | [
"Hakan Inan inan@meta.com ",
"Rashi Rungta rashirungta@meta.com ",
"Yashar Mehdad mehdad@meta.com ",
"Meta Ai "
] | [] | [] | Text segmentation aims to divide text into contiguous, semantically coherent segments, while segment labeling deals with producing labels for each segment. Past work has shown success in tackling segmentation and labeling for documents and conversations. This has been possible with a combination of taskspecific pipelines, supervised and unsupervised learning objectives. In this work, we propose a single encoder-decoder neural network that can handle long documents and conversations, trained simultaneously for both segmentation and segment labeling using only standard supervision. We successfully show a way to solve the combined task as a pure generation task, which we refer to as structured summarization. We apply the same technique to both document and conversational data, and we show state of the art performance across datasets for both segmentation and labeling, under both high-and low-resource settings. Our results establish a strong case for considering text segmentation and segment labeling as a whole, and moving towards general-purpose techniques that don't depend on domain expertise or task-specific components. | 10.48550/arxiv.2209.13759 | [
"https://export.arxiv.org/pdf/2209.13759v1.pdf"
] | 252,567,766 | 2209.13759 | 3953e1877fe4e34d7ab23153aceaf9b32a125ad5 |
Structured Summarization: Unified Text Segmentation and Segment Labeling as a Generation Task
Hakan Inan inan@meta.com
Rashi Rungta rashirungta@meta.com
Yashar Mehdad mehdad@meta.com
Meta Ai
Structured Summarization: Unified Text Segmentation and Segment Labeling as a Generation Task
Text segmentation aims to divide text into contiguous, semantically coherent segments, while segment labeling deals with producing labels for each segment. Past work has shown success in tackling segmentation and labeling for documents and conversations. This has been possible with a combination of taskspecific pipelines, supervised and unsupervised learning objectives. In this work, we propose a single encoder-decoder neural network that can handle long documents and conversations, trained simultaneously for both segmentation and segment labeling using only standard supervision. We successfully show a way to solve the combined task as a pure generation task, which we refer to as structured summarization. We apply the same technique to both document and conversational data, and we show state of the art performance across datasets for both segmentation and labeling, under both high-and low-resource settings. Our results establish a strong case for considering text segmentation and segment labeling as a whole, and moving towards general-purpose techniques that don't depend on domain expertise or task-specific components.
Introduction
Text segmentation is the task of organizing text (documents or conversations) into contiguous segments that are individually coherent. This problem can be regarded as a standalone task; however, in practical applications, one is typically interested in labeling each segment with a description. The latter problem is akin to the familiar task of text summarization, applied to each segment individually. The two tasks together can then power more practical applications: dividing a long news piece into sections accompanied by section-level headlines, turning an informational document into topic segments with titles for each topic, generating an outline for conversation transcripts, etc. The segmentation problem was initially tackled as an unsupervised problem, by computing coherence scores between consecutive pairs of sentences in text, and determining segment boundaries according to certain procedures (Hearst, 1994). More recently, text segmentation has been recast as a purely supervised problem, where a neural network can learn segment boundaries from direct dataset supervision (Koshorek et al., 2018). These models perform well, especially in the presence of a large amount of training data. Despite the success of the more general-purpose supervised approaches, unsupervised objectives and task-specific procedures are still competitive in low-resource settings, such as for conversation segmentation (Xing and Carenini, 2021; Solbiati et al., 2021). Conversations tend to be longer and less information-dense compared to expository text like that from Wikipedia, which may provide additional reasoning for modality-specific treatment of segmentation. Currently, there is not a single, general-purpose approach that works across text modality and training data volume. Further, due to the long nature of input text, existing methods typically adopt specific mechanisms, such as two-level encoding schemes (Koshorek et al., 2018; Arnold et al., 2019; Xing et al., 2020; Solbiati et al., 2021).
In the context of segmentation, label generation is thought of as a boundary-constrained summarization or classification problem, with freely generated or categorical labels respectively. In either case, the segment labels are produced using segmentation-aware task components, like segment-restricted attention mechanisms (Zhang et al., 2019). We note that despite the unifying pressure of the practical applications mentioned above, the label generation task has not been well-coupled to the text segmentation literature, and hence there are fewer works where both segmentation and labeling are tackled in a unified context.
In this work, we argue for the unification of the segmentation and segment labeling tasks, and demonstrate that the two can naturally be put in a monolithic and general-purpose framework while mutually benefiting each others' performance. We start by establishing a simple yet strong baseline for segmentation using a single, long-input transformer encoder trained only with segmentation supervision. We then introduce generative segmentation, a novel approach to segmentation as a token generation task, and use an encoder-decoder transformer for it. We show a procedure for generative segmentation that matches the performance of the traditional, discriminative, approach. Finally, we put segmentation and segment labeling on equal footing by generating the segment boundary indices as well as labels in the same output token sequence. We show that this approach sets a new state-of-the-art for both segmentation and labeling. Henceforth, we refer to the combined task of generative segmentation + segment label generation as structured summarization.
We summarize our contributions as follows:
• We introduce a strong baseline for text segmentation that works across text modalities (documents, conversations) and data scale (high- and low-resource) using a monolithic, supervised learning approach.
• We introduce generative segmentation, and show that it matches the performance of traditional, discriminative segmentation.
• We propose unifying segmentation and labeling into a single structured summarization task with a general-purpose text generation scheme, and show it achieves state-of-the-art performance for both segmentation and segment labeling.
• We show that one can pretrain models for structured summarization and transfer them to low-resource (including conversational) datasets, removing the need to use multiple objectives or task-specific logic for obtaining state-of-the-art performance.
Background: Text Segmentation and Segment Labeling
In text segmentation, one is given a list of sentences S = [s_1, s_2, ...] with length |S|. Although several formulations exist, in a commonly accepted formulation, the task is to identify a subset of sentences as segment boundaries, which are understood to be the last sentences in each segment. If there are n target segments, the correct output can be understood as a list of n unique (and monotone) indices, each index being less than |S|. We note that in this work we focus on linear segmentation as opposed to hierarchical segmentation, wherein one can divide each segment into further sub-segments. One can tackle the segmentation task in several ways. In low-resource (or no-resource) settings, one can proceed without supervision, as has been done traditionally (Hearst, 1994; Xing and Carenini, 2021; Solbiati et al., 2021). The most common approach computes a coherence metric between each consecutive sentence pair and uses a post-processing method (e.g. TextTiling, Hearst (1994)) that determines "deep enough" valleys in the resulting coherence vs sentence index curve to draw segment boundaries at certain sentence indices. For instance, coherence can be calculated as the cosine similarity between sentence embeddings given by a large language model. In high-resource settings, one can use standard supervision: calculate embeddings for each sentence, then train a binary classifier on top of each sentence embedding that determines whether the sentence is the last sentence of a segment. Alternatively, one can implement supervision as sequence tagging, with the sentence classifier determining whether a sentence is the beginning or inside of a segment (Barrow et al., 2020). Segment labeling is a task that depends on segmentation, and involves producing a descriptive label for each identified segment. It can be implemented as either a discrimination or a generation task. For domains where labels can be binned into predetermined categories (e.g. particular Wikipedia domains with a well-defined ontology), one can implement it as a discriminative task with categorical labels and a fixed vocabulary. We note that the labeling task is discriminative for most prior work that produced or used publicly available datasets (Arnold et al., 2019; Barrow et al., 2020; Lo et al., 2021a). Given a label vocabulary of size N, one can either directly train an N-way classifier on top of sentence embeddings (Arnold et al., 2019), or pool sentence embeddings within a given segment prior to classification (Barrow et al., 2020). Unfortunately, for a vast number of domains, segment labeling is not easily amenable to a discriminative treatment. These domains include free-form conversations, meeting transcripts, news articles, and even most Wikipedia content. In this case, generation models may be used. For instance, one can utilize sequence-to-sequence neural networks with some form of "segmentation-aware" attention mechanism (Zhang et al., 2019; Liu et al., 2021).
Structured Summarization
We view segmentation and labeling as a single task that we refer to as structured summarization, guided by the principle of using simpler and more general-purpose approaches, as well as by the holistic nature of practical applications, which involve carrying out both tasks at once.
In this section, we first describe our encoder backbone that we use as a strong baseline for discriminative segmentation (Section 3.1). We then introduce generative segmentation (Section 3.2), which turns segmentation into a sequence generation task. We then combine generative segmentation with segment label generation (Section 3.3). Finally, we introduce a recipe for state-of-the-art performance for structured summarization on low-resource datasets (Section 3.4).
Discriminative Segmentation with a Single Transformer Encoder
As opposed to the common two-level encoding scheme for encoding text as tokens → sentences (Koshorek et al., 2018; Glavaš and Somasundaran, 2020; Xing et al., 2020; Xing and Carenini, 2021), we encode the whole sequence at once with a long-input transformer. Recalling that the original sequence was described as a list of sentences S = [s_1, s_2, ...] with length |S| (see Section 2), we expand each sentence in terms of tokens (e.g. SentencePiece, Kudo and Richardson (2018)):
$s_i = [t^i_1, t^i_2, \ldots], \qquad i = 1, 2, \ldots, |S|.$
Thus, the overall sequence becomes
$S = [t^1_1, t^1_2, \ldots, t^{|S|}_1, \ldots, t^{|S|}_m],$
where m is the number of tokens in the last (|S|-th) sentence. We note that the first token of each sentence is a predetermined BOS token index:
$t^1_1 = t^2_1 = \ldots = t^{|S|}_1 = \mathrm{idx}_{\mathrm{BOS}}. \qquad (1)$
We then feed the whole sequence S to a transformer encoder. In order to perform discriminative segmentation, we apply a shared binary softmax classifier on top of each BOS token. For training, we use the binary sentence labels in the dataset, L^s_i ∈ {0, 1}, that indicate whether the i-th sentence is the last sentence of a segment. We use the standard binary cross-entropy (BCE) loss, which we call CLS_seg:
$\mathrm{CLS}_{seg} = \sum_{i=1}^{|S|} \mathrm{BCE}\big(h(f(t^i_1)), L^s_i\big).$
Above, h refers to the binary classifier, while f refers to the representation of an input token using the neural network encoder. Refer to Figure 2 for a visual description.
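A minimal PyTorch sketch of this discriminative head is given below; the encoder is a stand-in for the long-input encoder used here, and bos_positions (the token index of each sentence-initial BOS token) is an assumed pre-computed input.

```python
# Minimal PyTorch sketch of the shared binary classifier over sentence-initial
# (BOS) token states; the encoder is a stand-in for the long-input encoder used
# here, and bos_positions is an assumed pre-computed (batch, num_sentences)
# tensor holding the token index of each sentence's first token.
import torch
import torch.nn as nn

class SegmentBoundaryClassifier(nn.Module):
    def __init__(self, encoder, hidden_size):
        super().__init__()
        self.encoder = encoder                  # any HF-style token encoder f(.)
        self.head = nn.Linear(hidden_size, 1)   # shared classifier h(.)

    def forward(self, input_ids, attention_mask, bos_positions, labels=None):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Gather the representation of the first token of every sentence.
        batch_idx = torch.arange(hidden.size(0)).unsqueeze(1)
        bos_states = hidden[batch_idx, bos_positions]      # (batch, num_sent, dim)
        logits = self.head(bos_states).squeeze(-1)         # one logit per sentence
        if labels is None:
            return logits
        # CLS_seg: binary cross-entropy against "is the last sentence of a segment".
        return nn.functional.binary_cross_entropy_with_logits(logits, labels.float())
```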
Generative Segmentation
A major step towards combining segmentation and labeling tasks is to cast segmentation as a generation problem. To this end, we map target segmentation labels into target sequences. We simply turn segment boundary indices into strings, and combine them in a single target sequence. Specifically, consider the sentence positions corresponding to segment boundaries, i.e. the ordered set pos_seg = {i | L^s_i = 1}. We turn the integer-valued positions into strings, and combine them in a single target sequence with delimiters. For instance, in a text with 3 target segments such that pos_seg = [31, 410, 680], the target sequence is the string '31 | 410 | 680'. We then turn this string into a list of tokens {y_1, y_2, ..., y_m} and use it as the target tokens. We use an encoder-decoder transformer with the encoder described as in Section 3.1, and train it with standard teacher forcing. Assuming that the NN decoder outputs a probability distribution over tokens p̂_i at each time step i, and denoting with 1_k a one-hot (discrete) probability distribution with the k-th element set to 1, the generative segmentation loss is
$\mathrm{GEN}_{seg} = \sum_{i=1}^{m} \mathrm{CE}(\hat{p}_i, \mathbf{1}_{y_i}).$
Above, CE refers to the standard cross-entropy loss, and m denotes the sequence length in tokens.
Encoding sentence positions. We note that the neural network needs to develop an implicit ability to convey position information from input to its decoder, since it is required to produce sentence positions as tokens at the decoder output. One may suspect that position embeddings may facilitate such ability. On the other hand, it is common for transformer architectures to utilize relative position embeddings (the architecture we use in our experiments, LongT5, falls into this category), without other explicit mechanisms to encode sentence or token positions. We attempt to alleviate this problem from a minimal-effort perspective: at the encoder input for the i-th sentence, we use the i-th vocabulary token embedding in place of a fixed BOS token index. Formally, in contrast to (1), we set
$t^1_1 = 0,\; t^2_1 = 1,\; \ldots,\; t^{|S|}_1 = |S| - 1.$
In doing so, we expose the decoder to unambiguous position information that it can exploit for producing sentence indices at the decoder. Importantly, we achieve it without employing custom schemes, such as dedicated sentence position embeddings. In Section A of the Appendix, we show that this very simple approach improves substantially over the naive approach, enabling generative segmentation to close the gap with discriminative segmentation.
Post-processing. Finally, we post-process the decoder output into a list of segment boundary sentence positions. We split the output by the expected delimiter ("|"), and turn the resulting list of strings into a list of integers, while enforcing they be between 0 and |S| − 1. Any erroneous part of the output is omitted from the final list of boundary sentence positions. As shown in Section B of the Appendix, a well-trained transformer has an erroneous output only about 0.1 percent of the time.
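A small sketch of the target construction and this post-processing step, using the delimiter conventions described above:

```python
# Sketch of the generative segmentation target construction and the described
# post-processing of decoder output back into boundary sentence indices.
def boundaries_to_target(boundary_positions):
    # e.g. [31, 410, 680] -> "31 | 410 | 680"
    return " | ".join(str(p) for p in boundary_positions)

def parse_generated_boundaries(decoded, num_sentences):
    positions = []
    for piece in decoded.split("|"):
        piece = piece.strip()
        if piece.isdigit() and 0 <= int(piece) < num_sentences:
            positions.append(int(piece))
        # erroneous pieces are simply dropped, as described above
    return positions

print(boundaries_to_target([31, 410, 680]))
print(parse_generated_boundaries("31 | 410 | nine hundred", num_sentences=700))
```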
Combined Generative Segmentation + Label Generation
Having described generative segmentation, we can now combine the segmentation and label information into a single target sequence. Given n target segments for an example text, we first prepare two target sequences:
• n ordered sentence position strings (0-based) separated by a delimiter ("|"), each position indicating the last sentence of a target segment in the text. This is optimized via GEN_seg (see Section 3.2).
• n ordered segment labels separated by a delimiter ("|"). We use the same teacher forcing loss as with generative segmentation. We omit details since label generation is a standard seq2seq task, with no special treatment in our approach. We denote the loss for training segment labeling GEN_label.

For our canonical structured summarization setup, we simply interweave the two output sequences, aligning the target positions and the target labels (separated with a new delimiter, ":="). The decoder output is trained against GEN_seg+label. See Figure 2 for a visual depiction.
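As an illustration, one plausible reading of this interleaving is sketched below; the exact whitespace around the delimiters is an assumption.

```python
# One plausible reading of the interleaved structured-summarization target,
# pairing each boundary index with its segment label; exact whitespace around
# the delimiters is an assumption.
def structured_summarization_target(boundary_positions, labels):
    assert len(boundary_positions) == len(labels)
    parts = [f"{pos} := {label}" for pos, label in zip(boundary_positions, labels)]
    return " | ".join(parts)

print(structured_summarization_target([31, 410], ["Early life", "Career"]))
# 31 := Early life | 410 := Career
```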
High-resource Pretraining Prior to Low-resource Training
Judging from the dominance of unsupervised or loss-augmented methods in prior work in low-resource segmentation and labeling settings, one can argue that monolithic architectures like a single transformer trained purely with dataset supervision don't do well on low-resource datasets. We shall demonstrate this point quantitatively in our experiments (see Section 4). For generality and simplicity, it is nonetheless desirable to close the performance gap for the monolithic approach trained only with dataset supervision. To this end, we adopt the modern findings from the field that demonstrated the use of pretraining (Devlin et al., 2018; Lewis et al., 2019; Brown et al., 2020). We pretrain our structured summarization models on a large dataset, and treat low-resource training as fine-tuning. In this work, our pretraining dataset (Wiki-727K) has ~0.5M training examples, and we consider scales on the order of 10^3 or less as low-resource. We show in our low-resource experiments that this simple transfer learning setup is enough to not only close the performance gap, but also to set a new state-of-the-art. Of note, we are able to translate performance gains from documents to conversations, the latter modality lacking high-resource datasets.
Experiments
In this section, we apply our method (see Section 3) to 3 datasets, each with distinct characteristics:
Wiki-727K (Koshorek et al., 2018): This dataset contains 727,746 English Wikipedia articles, with each article segmented per its existing sections, with segment labels being the section titles. The dataset is split into train/dev/test as 80/10/10 %. We use this dataset as a high-resource arena to compare structured summarization to existing segmentation methods. We also utilize this dataset for pretraining prior to low-resource training.
WikiSection (Arnold et al., 2019): This dataset contains 38k full-text documents from English and German Wikipedia, annotated with sections and topic labels. Differently from Wiki-727K, WikiSection contains normalized topic labels for each section, which are categorical. For that reason, it is amenable to and used for discriminative segment labeling (Arnold et al., 2019; Barrow et al., 2020). In our experiments, we use the English portion, which includes two datasets corresponding to two Wikipedia domains: 1) en_disease, containing 3.6K total docs, and 2) en_city, containing 19.5K total docs. Both datasets are split into train/dev/test as 70/20/10 %. We use this dataset as a mid-to-low-resource arena to compare structured summarization to existing segmentation as well as discriminative labeling methods.
QMSum (Zhong et al., 2021b): This dataset is a collection of long meeting transcripts from academic, product, and committee domains. Among other useful data, each example contains manually curated topic segments and accompanying topic labels. It contains a total of 232 examples split into train/dev/test roughly as 68/16/16 %. We use this dataset as a low-resource conversational arena, in which we compare structured summarization to existing segmentation approaches.
Throughout, we use the following segmentation and labeling metrics to judge performance:

P k (Beeferman et al., 1999): A widely used segmentation metric, P k estimates the probability that randomly drawn sentence indices that are k sentences apart fall into non-agreeing segments between the reference and predicted segmentation. Keeping with the common practice, we set k equal to half the mean segment length of the reference.
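A simplified sketch of the P k computation from boundary positions (our own implementation for illustration; published reference implementations may differ in edge-case handling):

```python
def to_segment_ids(boundaries: list[int], num_sentences: int) -> list[int]:
    """Map boundary positions (index of the last sentence of each segment) to per-sentence segment ids."""
    ids, seg = [], 0
    for i in range(num_sentences):
        ids.append(seg)
        if seg < len(boundaries) and i == boundaries[seg]:
            seg += 1
    return ids


def pk(ref_bounds: list[int], hyp_bounds: list[int], num_sentences: int) -> float:
    """Probability that two sentences k apart are assigned to non-agreeing segments."""
    ref = to_segment_ids(ref_bounds, num_sentences)
    hyp = to_segment_ids(hyp_bounds, num_sentences)
    k = max(1, round(num_sentences / (2 * len(ref_bounds))))  # half the mean reference segment length
    disagreements, total = 0, num_sentences - k
    for i in range(total):
        same_ref = ref[i] == ref[i + k]
        same_hyp = hyp[i] == hyp[i + k]
        disagreements += int(same_ref != same_hyp)
    return disagreements / total


print(pk(ref_bounds=[4, 9, 14], hyp_bounds=[4, 11, 14], num_sentences=15))
```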
Rouge (Lin, 2004): This is a standard text similarity metric used for summarization. In order to compute Rouge metrics, we gather all predicted and target segment labels and serialize them with a delimiter between labels. We use Rouge-1 (R-1), Rouge-2 (R-2), and Rouge-L (R-L).

Label F1: The WikiSection dataset contains categorical segment labels. For this reason, we compute the micro-averaged F1 as a measure of accuracy for our generated labels, in alignment with prior work. We first align the generated segments to the target segments to obtain a many-to-one mapping between the two sets of segments. This is done by looking at the closest segment boundary to the left and the right of the predicted segment, mapping the predicted segment to the target segment with maximum overlap. The label for this target segment is then considered the ground truth label for the predicted segment during the F1 calculation.
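The many-to-one alignment behind Label F1 can be sketched as follows (illustrative code using maximum sentence overlap; the paper does not specify tie-breaking, so that detail is our assumption):

```python
def align_segments(pred_bounds: list[int], ref_bounds: list[int]) -> list[int]:
    """For each predicted segment, return the index of the reference segment with maximum overlap."""
    def to_spans(bounds: list[int]) -> list[set[int]]:
        spans, start = [], 0
        for end in bounds:
            spans.append(set(range(start, end + 1)))
            start = end + 1
        return spans

    pred_spans, ref_spans = to_spans(pred_bounds), to_spans(ref_bounds)
    mapping = []
    for p in pred_spans:
        overlaps = [len(p & r) for r in ref_spans]
        mapping.append(overlaps.index(max(overlaps)))
    return mapping


# Each predicted label is then scored against the label of its aligned
# reference segment when computing micro-averaged F1.
print(align_segments(pred_bounds=[5, 14], ref_bounds=[4, 9, 14]))  # -> [0, 2]
```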
Model. For all our experiments, we use Long T5-base with transient global attention (Guo et al., 2021). We were not able to set up a reliable workflow with large-sized models due to the limits of our computational setup. On the other hand, our hypotheses are testable using any transformer of sufficient expressive capacity, and our results are self-contained, with fair comparisons including strong baselines of our own in same-sized setups. We use AdamW (Loshchilov and Hutter, 2017) with a learning rate of 0.0005 for all experiments. We use a batch size of 128, 64, and 16 for Wiki-727K, WikiSection, and QMSum respectively. We use a maximum sequence length of 16384 for Wiki-727K and WikiSection, and 32768 for QMSum.
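A minimal sketch of this configuration with publicly available checkpoints (the checkpoint name, tokenizer handling, and the bare training step are our assumptions about one way to reproduce the setup described here, not the authors' released code):

```python
import torch
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)  # AdamW, lr = 0.0005


def training_step(document: str, target: str) -> float:
    """One teacher-forced step on a (document, combined target sequence) pair."""
    inputs = tokenizer(document, truncation=True, max_length=16384, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```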
Validating Structured Summarization
We train a Long T5-Base on the Wiki-727K training set using all viable combinations of available losses (CLS seg, GEN seg, GEN label, and GEN seg + label). Our aim is to show the merits of the combined approach, while understanding the contributions of different components. The results are shown in Table 1. We make the following remarks:
• Generative segmentation performance ( GEN seg , P k = 15.8) is nearly on par with discriminative segmentation ( CLS seg , P k = 15.4), demonstrating the practical feasibility of generative segmentation.
• Although combining CLS seg and GEN seg doesn't improve segmentation performance, combining GEN seg and GEN label (i.e., structured summarization) leads to the best segmentation. This shows that the labeling task improves generative segmentation. In contrast, when labeling is combined with discriminative segmentation (CLS seg + GEN label), the segmentation performance does not improve.
Our results provide evidence that unifying segmentation and labeling as a generative task is more favorable compared to carrying them out separately. In our next set of experiments, we validate structured summarization (using GEN seg + label) against current best methods in high-, medium-, and low-resource settings, including both document and conversation data.
High Resource, Document Setting
We first set out to compare structured summarization to existing text segmentation approaches in the high-resource setting. We also aim to establish single-transformer discriminative segmentation over the standard discriminative segmentation approach, namely using a two-level (token-level → sentence-level) encoding. Although we are not aware of a published baseline for segment label generation for Wiki-727K, we also report Rouge metrics for our models where applicable. Table 2 shows the results. We first note that our single transformer encoder outperforms the best previous model, which uses a two-level transformer augmented with a coherence loss. We also find that structured summarization outperforms discriminative segmentation, setting a new state-of-the-art on Wiki-727K. Overall, these results favor general-purpose methods trained with only dataset supervision.
Mid- to Low-resource, Document Setting
In order to test our hypothesis that generative segmentation and labeling is competitive in mid- to low-resource settings, we evaluate on the English parts of the WikiSection dataset. We set up two types of experiments, namely naive and pretrained. The former uses a Long T5-base as described in Section 4, whereas for the latter we pretrain Long T5-base on Wiki-727K with GEN seg + label and then finetune on en_city and en_disease. We used sentence similarity as described in Section C of the Appendix to verify that there was no data leakage between the Wiki-727K train (pretraining) and WikiSection test (finetuning) datasets.
We evaluate each of these models with discriminative segmentation and structured summarization. Note that in this set of experiments, we compare discriminative labeling from prior work to our generative labeling. Hence, along with the metrics used in Section 4.2, we also report the micro-averaged F1 for the task of segment labeling, similar to Barrow et al. (2020) and Lo et al. (2021a).
The results are shown in Table 3. Finetuning over Wiki-727K gives us the best structured summarization model for en_disease. For en_city, we notice that pretrained discriminative segmentation does result in a better P k than pretrained structured summarization, but only marginally so. For all the labeling metrics, namely micro-averaged F1 and the Rouge metrics, both the naive and pretrained structured summarization models perform better than prior work, for en_disease as well as en_city.
Low-resource, Conversational Setting
Finally, we aim to establish the use of structured summarization in a low-resource setting for a different text modality, namely conversational. In this setting, segmentation regards each conversation turn as one "sentence", and predicts positions of turns at segment boundaries. The QMSum dataset is very low-resource, comprising only 157 training examples.
In the QMSum dataset, we deal with two interrelated problems: the presence of a very large number of turns (>1000), and a large number of input tokens even when we truncate the number of tokens per turn to a reasonably large value (200). The first problem is detrimental to generative segmentation, since a Wiki-727K-pretrained decoder can only learn to map sentence positions up to the number of sentences encountered in the Wiki-727K training set. To overcome this, we pretrain on Wiki-727K with a modification: we prepend a random number (up to 1000) of empty sentences to each training example, thereby increasing the number of learnable sentence positions without affecting other task parameters. To address the second problem, we set a maximum number of tokens per turn (95) that keeps all test example inputs under 32768 tokens, while augmenting the training set with replicas that use a different maximum number of tokens per turn for each new replica (we use 20, 50, and 200). The rationale is to expose the network to varying degrees of turn-token truncation for robustness.
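A sketch of these two modifications (our own illustrative code; exact sampling and truncation details beyond what is stated above are assumptions):

```python
import random


def add_empty_sentence_offset(sentences: list[str], max_offset: int = 1000) -> list[str]:
    """Prepend a random number of empty sentences so the decoder learns larger sentence positions."""
    return [""] * random.randint(0, max_offset) + sentences


def truncate_turns(turns: list[str], max_tokens_per_turn: int) -> list[str]:
    """Crudely cap each conversation turn at a maximum number of whitespace-separated tokens."""
    return [" ".join(turn.split()[:max_tokens_per_turn]) for turn in turns]


def build_training_replicas(turns: list[str], budgets=(20, 50, 200)) -> list[list[str]]:
    """One training replica per truncation budget, exposing the model to varying degrees of truncation."""
    return [truncate_turns(turns, budget) for budget in budgets]
```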
The results are shown in Table 4. Given very few training examples, structured summarization performs labeling well, as judged by the Rouge metrics. For segmentation, although naive structured summarization is not able to perform better than prior work, when pretrained on Wiki-727K it sets a new state-of-the-art with generative segmentation, achieving a large performance improvement over the previous best approach. We note that although pretraining was done using a quite distinct text modality (descriptive documents), it nonetheless facilitates major gains for the conversational modality.
Prior Work
Our work has most overlap with the text segmentation literature, wherein the task is sometimes called topic segmentation (e.g. Takanobu et al. (2018)). The earliest work for segmentation that we're aware of is TextTiling (Hearst, 1994), which draws on the insight that sentences belonging to the same topic/segment should be more coherent than sentences across segments, leading to an unsupervised method for segmentation. Following this work, many other methods that exploit intra-segment coherence/similarity were proposed, including works of Choi (2000), Malioutov (2006), Glavaš et al. (2016), and more recent works that combine coherence objectives with neural networks, such as Xing et al. (2020), Xing and Carenini (2021), Glavaš and Somasundaran (2020), and Solbiati et al. (2021). There are also works for topic segmentation that draw on topic modeling. These works typically learn latent topic representations for segments, such as Riedl and Biemann (2012) and Misra et al. (2009). With the introduction of large enough segmentation datasets for the document modality, such as Wiki-727K (Koshorek et al., 2018) and WikiSection (Arnold et al., 2019), recent work has shifted focus to supervised text segmentation for documents. These works train neural networks with the segment boundary supervision from the datasets. Examples from this category include works of Koshorek et al. (2018), Arnold et al. (2019), Xing et al. (2020), Barrow et al. (2020), Glavaš and Somasundaran (2020), and Lo et al. (2021a).
Whereas document segmentation enjoyed gains facilitated by large datasets, conversation segmentation has to this day been most successfully performed in an unsupervised way. Works in the conversation modality heavily depend on intra-segment coherence, with recent examples being Xing and Carenini (2021) and Solbiati et al. (2021). Zhong et al. (2021a) perform segmentation on the QMSum dataset with standard supervision, although with limited success, as evidenced by the accuracy gap compared to our method (see Table 4).
Some works tackle generating labels in the context of segmentation. Most works that used publicly available datasets developed discriminative labeling techniques, such as Arnold et al. (2019) and Barrow et al. (2020). Some works also aimed at solving the labeling task using generation methods. Zhang et al. (2019) introduced "document outline generation", where they used multiple GRU networks, separately for segmentation and label generation, while using "segment-aware" attention to constrain generation. Liu et al. (2021) use an encoder-decoder transformer for segmenting and labeling news articles. Their method is closely related to our setup, although they use discriminative-only segmentation, and utilize the decoder only for generating labels.
Conclusion
We introduced a generally applicable technique for unifying text segmentation and segment labeling as a single sequence generation task (structured summarization). We have shown that the task is suited to modern-day transformers that handle long inputs, and that one can achieve state-of-the-art performance for both segmentation and labeling across data scales and text modalities. We hope that these results will guide the field towards more general methods that perform structured summarization in research as well as in production settings.
Appendix for "Structured Summarization: Unified Text Segmentation and Segment Labeling as a Generation Task" A Encoding sentence positions by reuse of vocab tokens improves over naive models
As mentioned in Section 3.2, we validate the use of our simple approach to encode sentence positions in the model. We first note that the native LongT5 tokenizer doesn't use a dedicated BOS token. Therefore, when not using our approach, there is no BOS token we can use for discriminative segmentation. Regardless, we train models to classify the first token of each sentence. Additionally, we train models where we use the EOS token instead for segment boundary classification. We then compare both models to the model where we encode sentence positions according to Section 3.2. The results are in Table 5. Our approach substantially improves over the naive approach (when using either EOS or BOS), bringing generative segmentation accuracy to the same level as discriminative segmentation within the same model.

B Generative segmentation leads to accurately predicted sentence positions in the output sequence

Here we follow up on our claim in Section 3.2 that generative segmentation leads to non-erroneous segmentation output when generated in the list of tokens at the NN output. To back this claim, we calculated the fraction of examples for which the output sequence includes an invalid sentence boundary position. An example could be "10 | 31 | 413" in a text with only 300 sentences (the last segment boundary is over 300). Another example could be one wherein the output sequence has a non-integer-convertible component, like in "10 | 31e | 299". In Table 6, we show this erroneous fraction for structured summarization models when tested on Wiki-727K, WikiSection, and QMSum. From the table, it is clear that transformer decoders are easily able to generate tokens that represent integers within the bounds of the task semantics.
Table 6: Fraction of examples with at least one erroneous segment boundary position, for the structured summarization models when tested on the respective test set. Wiki-727K: 0.0001, en_city: 0.0025, en_disease: 0, QMSum: 0.

C Verifying no data leakage between pretraining train and finetuning test datasets

Both Wiki-727K and WikiSection datasets are derived from Wikipedia, and thus pretraining on one and finetuning on the other carried a risk of exposing the model to examples it would encounter in the test set. For the part of the WikiSection experiments which uses a pretrained model as described in Section 3, we use such a setup. To mitigate the above-mentioned risk, we use Contriever (Izacard et al., 2021) to check for text similarity between the Wiki-727K train examples and the WikiSection en_disease and en_city test examples, in order to find examples that would need to be eliminated from the Wiki-727K train set. As a result, we do not find any pairs with exact matches or alarmingly high cosine similarity.
D Details on compute and training the models
We used 8 A100 GPUs (40GB memory) for training Wiki-727K and WikiSection models, and 1 A100 GPU for training QMSum models. The neural network we use (LongT5) has 220M parameters. One epoch of training took about 15 hours on Wiki-727K, 30 minutes on WikiSection-en_city, 6 minutes on WikiSection-en_disease, and 20 minutes on QMSum. In all experiments, we report test results for the epoch for which the validation metric was best. The metric of interest for all models was P k where applicable, and Rouge otherwise. We performed hyperparameter tuning by selecting runs with the best validation metric. Instead of a full grid search, we used human judgement (over a few tens of runs) and observed that results were mostly robust to modest changes in most parameters. Our manual approach is mostly necessitated by the lack of a sufficiently large computing infrastructure. The only hyperparameters we tuned were the learning rate and batch size. We set the maximum number of training epochs to 10 for all experiments.
Figure 1: Text segmentation and segment labeling. The input text is depicted as a list of sentences S. Given n segments, the task output comprises [(s_i1 : label_1), . . ., (s_in : label_n)]. Shown are labeled segments from two examples of input text, along with the sentence positions that mark the segment boundaries.

Figure 2: Our proposed setup for segmentation and segment labeling. In structured summarization, we use GEN seg + label.
Table 2: Wiki-727K Test Set Segmentation Results. TextSeg: (Koshorek et al., 2018), TLT-TS: (Glavaš and Somasundaran, 2020), CATS: (Glavaš and Somasundaran, 2020). DS refers to Discriminative Segmentation and SS refers to Structured Summarization.
Table 3: WikiSection test set results compared to previous state-of-the-art models. SECTOR refers to the best models from (Arnold et al., 2019), S-LSTM is from (Barrow et al., 2020), and Transformer 2 Bert from (Lo et al., 2021b). DS refers to Discriminative Segmentation and SS refers to Structured Summarization.
Table 4: QMSum test set segmentation results. DialogLM is from (Zhong et al., 2021a). DS refers to Discriminative Segmentation and SS refers to Structured Summarization. All DS and SS metrics are averaged over 7 runs.
Table 5: Wiki-727K test set results for models trained with CLS seg + GEN seg + label.
Ethics Statement

We note that any generative AI technology has the potential to produce harmful, misleading, or offensive content. This should be a guiding principle when considering adopting the technology into real-life use cases.
Sebastian Arnold, Rudolf Schneider, Philippe Cudré-Mauroux, Felix A Gers, and Alexander Löser. 2019. Sector: A neural model for coherent topic segmentation and classification. Transactions of the Association for Computational Linguistics, 7:169-184.
Joe Barrow, Rajiv Jain, Vlad Morariu, Varun Manjunatha, Douglas W Oard, and Philip Resnik. 2020. A joint model for document segmentation and segment labeling. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 313-322.
Doug Beeferman, Adam Berger, and John Lafferty. 1999. Statistical models for text segmentation. Machine Learning, 34(1):177-210.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901.
Freddy YY Choi. 2000. Advances in domain independent linear text segmentation. arXiv preprint cs/0003083.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Goran Glavaš, Federico Nanni, and Simone Paolo Ponzetto. 2016. Unsupervised text segmentation using semantic relatedness graphs. In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, pages 125-130. Association for Computational Linguistics.
Goran Glavaš and Swapna Somasundaran. 2020. Two-level transformer and auxiliary coherence modeling for improved text segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7797-7804.
Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, and Yinfei Yang. 2021. Longt5: Efficient text-to-text transformer for long sequences. arXiv preprint arXiv:2112.07916.
Marti A Hearst. 1994. Multi-paragraph segmentation of expository text. arXiv preprint cmp-lg/9406037.
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Unsupervised dense information retrieval with contrastive learning. arXiv preprint arXiv:2112.09118.
Omri Koshorek, Adir Cohen, Noam Mor, Michael Rotman, and Jonathan Berant. 2018. Text segmentation as a supervised learning task. arXiv preprint arXiv:1803.09337.
Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81.
Yang Liu, Chenguang Zhu, and Michael Zeng. 2021. End-to-end segmentation-based news summarization. arXiv preprint arXiv:2110.07850.
Kelvin Lo, Yuan Jin, Weicong Tan, Ming Liu, Lan Du, and Wray Buntine. 2021a. Transformer over pre-trained transformer for neural text segmentation with enhanced topic coherence. arXiv preprint arXiv:2110.07160.
Kelvin Lo, Yuan Jin, Weicong Tan, Ming Liu, Lan Du, and Wray Buntine. 2021b. Transformer over pre-trained transformer for neural text segmentation with enhanced topic coherence. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3334-3340, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
Igor Mikhailovich Malioutov. 2006. Minimum cut model for spoken lecture segmentation. Ph.D. thesis, Massachusetts Institute of Technology.
Hemant Misra, François Yvon, Joemon M Jose, and Olivier Cappé. 2009. Text segmentation via topic modeling: an analytical study. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, pages 1553-1556.
Martin Riedl and Chris Biemann. 2012. Topictiling: a text segmentation algorithm based on lda. In Proceedings of the ACL 2012 Student Research Workshop, pages 37-42.
Alessandro Solbiati, Kevin Heffernan, Georgios Damaskinos, Shivani Poddar, Shubham Modi, and Jacques Cali. 2021. Unsupervised topic segmentation of meetings with bert embeddings. arXiv preprint arXiv:2106.12978.
Ryuichi Takanobu, Minlie Huang, Zhongzhou Zhao, Feng-Lin Li, Haiqing Chen, Xiaoyan Zhu, and Liqiang Nie. 2018. A weakly supervised method for topic segmentation and labeling in goal-oriented dialogues via reinforcement learning. In IJCAI, pages 4403-4410.
Linzi Xing and Giuseppe Carenini. 2021. Improving unsupervised dialogue topic segmentation with utterance-pair coherence scoring. arXiv preprint arXiv:2106.06719.
Linzi Xing, Brad Hackinen, Giuseppe Carenini, and Francesco Trebbi. 2020. Improving context modeling in neural topic segmentation. arXiv preprint arXiv:2010.03138.
Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, and Xueqi Cheng. 2019. Outline generation: Understanding the inherent content structure of documents. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 745-754.
Ming Zhong, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2021a. Dialoglm: Pre-trained model for long dialogue understanding and summarization. arXiv preprint arXiv:2109.02492.
Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, et al. 2021b. Qmsum: A new benchmark for query-based multi-domain meeting summarization. arXiv preprint arXiv:2104.05938.
| [] |
[
"A probabilistic framework for analysing the compositionality of conceptual combinations",
"A probabilistic framework for analysing the compositionality of conceptual combinations"
] | [
"Peter D Bruza ",
"Kirsty Kitto ",
"Brentyn J Ramm \nAustralian National University\n\n",
"Laurianne Sitbon ",
"\nQueensland University of Technology\nAustralia\n"
] | [
"Australian National University\n",
"Queensland University of Technology\nAustralia"
] | [] | Conceptual combination performs a fundamental role in creating the broad range of compound phrases utilized in everyday language. This article provides a novel probabilistic framework for assessing whether the semantics of conceptual combinations are compositional, and so can be considered as a function of the semantics of the constituent concepts, or not. While the systematicity and productivity of language provide a strong argument in favor of assuming compositionality, this very assumption is still regularly questioned in both cognitive science and philosophy. Additionally, the principle of semantic compositionality is underspecified, which means that notions of both "strong" and "weak" compositionality appear in the literature. Rather than adjudicating between different grades of compositionality, the framework presented here contributes formal methods for determining a clear dividing line between compositional and non-compositional semantics. In addition, we suggest that the distinction between these is contextually sensitive. Compositionality is first formalised by factorising the joint probability distribution modeling the combination, where the terms in the factorisation correspond to individual concepts. This leads to the necessary and sufficient condition for the joint probability distribution to exist. A failure to meet this condition implies that the underlying concepts cannot be modeled in a single probability space when considering their combination, and the combination is thus deemed "non-compositional". An empirical study of twenty-four non-lexicalised conceptual combinations showed convincing evidence that some conceptual combinations behave non-compositionally according to this framework. | 10.1016/j.jmp.2015.06.002 | [
"https://arxiv.org/pdf/1305.5753v3.pdf"
] | 18,719,786 | 1305.5753 | 6cc06fed9f8a5b0f74f2b248dc159ca76402e4a6 |
A probabilistic framework for analysing the compositionality of conceptual combinations
23 May 2013 May 27, 2013
Peter D Bruza
Kirsty Kitto
Brentyn J Ramm
Australian National University
Laurianne Sitbon
Queensland University of Technology
Australia
A probabilistic framework for analysing the compositionality of conceptual combinations
23 May 2013 May 27, 20131conceptual combinationquantum theorysemantic compositionality *
Conceptual combination performs a fundamental role in creating the broad range of compound phrases utilized in everyday language. This article provides a novel probabilistic framework for assessing whether the semantics of conceptual combinations are compositional, and so can be considered as a function of the semantics of the constituent concepts, or not. While the systematicity and productivity of language provide a strong argument in favor of assuming compositionality, this very assumption is still regularly questioned in both cognitive science and philosophy. Additionally, the principle of semantic compositionality is underspecified, which means that notions of both "strong" and "weak" compositionality appear in the literature. Rather than adjudicating between different grades of compositionality, the framework presented here contributes formal methods for determining a clear dividing line between compositional and non-compositional semantics. In addition, we suggest that the distinction between these is contextually sensitive. Compositionality is first formalised by factorising the joint probability distribution modeling the combination, where the terms in the factorisation correspond to individual concepts. This leads to the necessary and sufficient condition for the joint probability distribution to exist. A failure to meet this condition implies that the underlying concepts cannot be modeled in a single probability space when considering their combination, and the combination is thus deemed "non-compositional". An empirical study of twenty-four non-lexicalised conceptual combinations showed convincing evidence that some conceptual combinations behave non-compositionally according to this framework.
Introduction
Humans frequently generate novel associates when presented with unfamiliar conceptual combinations. For example, in free association experiments, subjects frequently produce the associate "slave" when cued with the compound "pet human" (Ramm, 2000), but neither "pet" nor "human" will have the same effect when presented individually (Nelson et al., 2004). Such cases have been used by some authors to argue that conceptual combinations have a non-compositional semantics, as it is difficult to explain how the novel free associate "slave" can be recovered from its constituent concepts (Hampton, 1997).
Within cognitive science, the question of how to represent even single concepts is still being debated. Different positions have been put forward, including the prototype view, the exemplar view, and the theory theory view. Murphy (2002) contrasts these positions, asking which is most supported by the various aspects of cognition related to conceptual processing, e.g., learning, induction, lexical processing and conceptual understanding in children. He concludes, somewhat disappointingly, that "there is no clear, dominant winner". Moreover, there is a well documented tension in cognitive science between the compositionality and the prototypicality of concepts, which is difficult to reconcile (Frixione & Lieto, 2012;Fodor, 1998). Arguments in favour of compositionality centre around the systematicity and productivity of language; there are infinitely many expressions in natural language and yet our cognitive resources are finite. Compositionality ensures that this infinity of expressions can be processed because an arbitrary expression can be understood in terms of its constituent parts. Since compositionality is what explains systematicity and productivity, Fodor (1998) claims that concepts are, and must be compositional, however, such claims are at odds with well-known prototypicality effects (Frixione & Lieto, 2012;Fodor, 1998). For example, consider the conceptual combination PET FISH. A GUPPY is not a prototypical PET, nor a prototypical FISH, and yet a GUPPY is a very prototypical PET FISH (Hampton, 1997). Therefore, the prototype of PET FISH cannot result from a simple (e.g., additive) composition of the prototypes of PET and FISH which makes the characterization of concepts in prototypical terms difficult to reconcile with compositionality (Hampton, 1997;Fodor, 1998). This supports a view put forward by the philosopher Weiskopf (2007) when he observed that conceptual combinations are "highly recalcitrant to compositional semantic analysis", but even this observation has garnered no general support.
Here, we approach the problem of non-compositionality from a novel perspective. We shall show that a suite of sophisticated tools have already been developed for analysing non-compositionality, albeit in another field of science. These tools can be naturally extended to the analysis of concepts, and provide theoretically justified grounds for deciding whether a particular conceptual combination can be considered in terms of the semantics of its constituent parts. Specific cases will be discussed where conceptual combinations can be shown to be non-compositional using these analytical methods. We begin with a brief review of conceptual combination as it is currently understood in cognitive science.
Cognitive theories, compositionality and conceptual combination
The principle of compositionality states that the meanings of higher order expressions such as sentences are determined from a combination of the meanings of their constituent parts (Costello & Keane, 2000;Mitchell & Lapata, 2010). This is a principle underlying many general theories of language, both natural and artificial. A compositional account of conceptual combination is closely related to the notion that concepts are atomic in nature, but this assumption of atomicity is difficult to maintain when the full variety of possible semantic behavior is considered.
Perhaps most supportive of the principle are those combinations that have an intersective semantics, e.g., the meaning of BLACK CAT is the intersection of black objects and objects that are cats. Here, it is possible to apply a conjunction operator between the two predicates referring to the constituent concepts, i.e., black(x) ∧ cat(x). Such intersective semantics are compositional, as the semantics of BLACK CAT are determined solely in terms of the semantics of the constituent concepts BLACK and CAT. It is tempting to assume that most conceptual combinations can be modeled in this way, however, the study of intersective combinations in cognitive science has revealed that not all conceptual combinations display such intersective semantics (Hampton, 1997).
For example, the intersection of ASTRONAUT and PEN in the combination ASTRONAUT PEN is empty, and therefore its semantics are vacuous, despite being a concept that humans can easily comprehend (Gärdenfors, 2000;Weiskopf, 2007).
A second type of conceptual combination arises when the first concept modifies the head concept, e.g., in CORPORATE LAWYER, CORPORATE modifies the more general head concept to give a sub-category of LAWYER. Schema-based theories of conceptual combination (Murphy, 1988;Wisniewski, 1996), propose that the head concept is a schema-structure made up of various property dimensions (e.g., color, size, shape etc.) and relational dimensions (e.g., habitat, functions, behaviors etc.). Several studies have revealed that modification can produce emergent properties, e.g., in HELICOPTER BLANKET the modification of BLANKET by HELICOPTER generates associate properties such as "water proof", "camouflage", and "made of canvas", a phenomenon which present theories struggle to account for (Wilkenfeld & Ward, 2001), and so is sometimes viewed as evidence for non-compositional semantics (Hampton, 1997;Medin & Shoben, 1988).
Despite these tensions underlying the assumption of compositionality, virtually all researchers have at least assumed a weak form of compositionality in their analysis of human language, where for example, the initial combination process begins with separate meanings, but is supplemented later by external contextual information (Wisniewski, 1996;Swinney et al., 2007). For example, in Wisniewski (1996)'s dual process theory of conceptual combination, a competition occurs between the processes of relation linking (e.g., ZEBRA CROSSING as a crossing for zebras), and property mapping (e.g., ZEBRA CROSSING as a striped crossing), as the meaning of the compound is decided upon. This process is affected by the similarity of the constituent concepts, because similar concepts share many facets and so are more likely to result in a property interpretation, whereas dissimilar concepts are more likely to be combined using a relational process. Thus, ELEPHANT HORSE is more likely to result in a property interpretation (e.g., a large horse), than ELEPHANT BOX, which is more likely to result in a relational interpretation (e.g., a box for holding elephants), because similar concepts share many dimensions (four legs, similar shape etc. in the case of elephant and horse) and thus are easier to combine by mapping one property to another. However, it is important to note that these processes are all weakly compositional, in the sense that they rely almost exclusively on the properties of the individual concepts. It is only later that background knowledge is drawn upon to infer the possible emergent properties of the new concept. Thus an ELEPHANT BOX could be deemed as likely to be made of a strong material such as wood, and hopefully to contain air-holes. Swinney et al. (2007) found evidence for this form of weak compositionality in conceptual combination, when they showed that for adjectival combinations such as BOILED CELERY the properties of the individual words such as "green" are activated before emergent properties such as "soft". However, for the combination APARTMENT DOG, apartment modifies the "habitat" dimension of dog rather than its "size" (a dog the size of an apartment), which in turn shows that background knowledge also plays a role in early combinatory processes such as slot selection (Murphy, 1988).
Rather than entering the debate about the proper dividing line between weak and strong compositionality, it is our intention to provide an empirical test for the (non-)compositionality of conceptual combinations, motivated by the analysis of composite systems in quantum physics.
Thus, we feel that it is possible to shift this debate out of philosophy and into the realms of experimental psychology, 1 and this article is a step in that direction. In what follows we shall discuss the combination of concepts within a tiered model of cognition. This will provide a framework from which a (non-)compositional semantics can be developed in further sections.
2 Conceptual combination in conceptual space

Gärdenfors (2000) proposed a three level model of cognition in which the representation of information varies greatly across each different level. Within the lowest level, information is pre- or sub-conceptual and is carried by a connectionist representation. At the uppermost level information is represented in terms of higher order symbolic structures such as sentences. It is at this upper symbolic level of cognition where a significant portion of the work on compositional semantics resides. Thus, a grammar specifies the parts of a sentence, and the manner in which they fit together. It makes sense that the semantics attributed to these primitive parts be intuitive, for example, a noun may be mapped to a set of entities. However, Zadrozny (1992) has suggested that it doesn't actually matter which components are chosen as primitive; a function can be found that will always produce a compositional semantics. In Zadrozny's own words, "..compositionality, as commonly defined, is not a strong [enough] constraint on a semantic theory". The consequence of this with respect to the compositional semantics of natural language, and hence conceptual combination, is that meaning need not be assigned to individual words, "we can do equally well by assigning meaning to phonemes or even LETTERS..." (Zadrozny, 1992). Opponents to Zadrozny may argue that his position is overly pessimistic because it applies to "strong compositionality", a position that is clearly wrong. Nevertheless, the question remains as to where the "meanings" might come from initially.
A pragmatic approach is to define the meaning of a concept by the set of possible senses it has. Consider the concept BAT. One reliable way to seek an understanding of this concept is via free association experiments where subjects are cued with the word "bat" and asked to produce the first word that comes to mind. Over large numbers of subjects, probabilities can be calculated that a certain associate is produced. Fig. 1(a) depicts such a set of data taken from the University of South Florida word association norms (USF-norms) (Nelson et al., 2004) for the cue "bat"; two senses are apparent: a SPORT sense (with relevant associates in bold) and an ANIMAL sense. Considering the full dataset 2 allows us to generate the total probability p_s of recall for the sport sense by summing the probabilities of the relevant associates: p_s = 0.25 + 0.05 = 0.30. The rest of the associates all happen to be relevant to the ANIMAL sense of bat, so p_a = 0.70. This suggests that the concept BAT can be modeled in a two dimensional vector space, the basis vectors of which correspond to the two possible senses, a situation that is illustrated in Fig. 1(b) and which sits at the conceptual level of Gärdenfors (2000) three tiered model of cognition. The same can be said for the concept BOXER (see Fig. 2(a) where, once again, the associates relevant to the sport sense of BOXER are in bold).
When applied to the conceptual combination (i.e., BOXER BAT), this reasoning suggests that four interpretations are possible. Thus when BOXER is interpreted as a sport and BAT as an animal, the corresponding interpretation of the combination may be something along the lines of a "furry black animal with boxing gloves on", or perhaps BOXER could be interpreted as a sport and BAT as a sport, leading a subject to interpret the compound as "a fighter's implement".
The question we will now address is how compositionality can be empirically determined based on the set of possible interpretations.
3 Probabilistic approaches to characterizing compositionality
The preceding section has highlighted the fact that conceptual combinations usually have more than one possible interpretation. This may arise from a range of factors, including the meaning of the concepts themselves (e.g. BOXER can be interpreted as a dog, a sportsperson, a pair of shorts, someone who puts things in boxes, etc.), the sentence in which they appear, the background of the subject etc.. Different human subjects will often interpret the same conceptual combination differently, indeed, the same human subject, if placed in a new context may very well provide a different interpretation for the same concept. Thus, it is sensible to approach the analysis of compositionality probabilistically. In the experiments to be discussed in section 4, we will assume that the concepts being combined have at least two senses. In what follows each concept is assumed to have a dominant sense and one or more subordinate senses. The distinction between the two can be inferred from free association norms such as those discussed above. For example, the dominance of the sport sense of BOXER is clearly evident in Fig. 2(a), where the sport sense is greater than the animal sense which leads us to designate the sport sense and dominant and the animal sense as subordinate. Another possible subordinate sense is a CLOTHING sense, namely "boxer briefs".
It should be noted, however, that the distinction between "dominant" and "subordinate" senses is not necessary for the theory presented below; rather, it is an explanatory aid. We associate a pair of binary random variables with each concept: A1 = 1 represents a situation where the dominant sense of concept A was first primed, and concept A was indeed subsequently interpreted in that sense, and A1 = 0 represents the case where the dominant sense of concept A was primed, but A was not interpreted in that sense. Similarly, A2 = 1 represents a situation where a subordinate sense of concept A was first primed, and concept A was indeed subsequently interpreted in this sense, and A2 = 0
represents the case where a subordinate sense of concept A was primed, but A was not interpreted in this subordinate sense. Similar relationships hold for B1 and B2. Priming thus allows for the experimental control of the contextual cues influencing conceptual combinations. This is important because conceptual combinations always appear in a context (e.g., a discourse context), which affects how they will ultimately be interpreted. Fig. 3 gives a general representation of the reasoning used in the construction of the above probabilistic scenario. A 'black box' is depicted, with two proposed components, A and B, inside it. Two different experiments can be carried out upon each of the two presumed components, which will answer a set of 'questions' with binary outcomes, leading to four experimental scenarios. For example, one experimental scenario would be to ask whether subjects return an interpretation of the concept A that corresponds to the prime A1 and similarly for B in relation to the prime B2. What analysis can be brought to bear upon such a situation?

Figure 3: A potentially compositional system S, consisting of two assumed components A and B. S can perhaps be understood in terms of a mutually exclusive choice of experiments performed upon those components, one represented by the random variables A1, A2 (pertaining to an interaction between the experimenter and component A), and the other by B1, B2 (pertaining to an interaction between the experimenter and component B). Each of these experiments can return a value of 1 or 0.
As with many systems, the outcomes of our experiments will have a statistical distribution over all available outcomes. In what follows, we shall aim to develop a general mathematical apparatus that can be used to discover whether the presumed sub-components can be considered as isolated, influencing one another, or in some sense irreducible. We shall do this through a consideration of the joint probability distribution Pr(A1, A2, B1, B2) which is used to model the behavior of the experimental black box. While this analysis will be performed using conceptual combinations, we emphasize that this black box is potentially very general and that the analysis developed here can be applied to far more than the analysis of language.
We start by noting that we can construct 16 joint probabilities, corresponding to all the possible interpretations of concepts A and B that a subject might return, across the four priming conditions:
p1 ≡ Pr(A1 = 1, B1 = 1)    p2 ≡ Pr(A1 = 1, B1 = 0)    p3 ≡ Pr(A1 = 0, B1 = 1)    p4 ≡ Pr(A1 = 0, B1 = 0)
p5 ≡ Pr(A1 = 1, B2 = 1)    p6 ≡ Pr(A1 = 1, B2 = 0)    p7 ≡ Pr(A1 = 0, B2 = 1)    p8 ≡ Pr(A1 = 0, B2 = 0)
p9 ≡ Pr(A2 = 1, B1 = 1)    p10 ≡ Pr(A2 = 1, B1 = 0)    p11 ≡ Pr(A2 = 0, B1 = 1)    p12 ≡ Pr(A2 = 0, B1 = 0)
p13 ≡ Pr(A2 = 1, B2 = 1)    p14 ≡ Pr(A2 = 1, B2 = 0)    p15 ≡ Pr(A2 = 0, B2 = 1)    p16 ≡ Pr(A2 = 0, B2 = 0).    (1)
These sixteen probabilities can be set out in an array as follows:
              B1 = 1   B1 = 0   B2 = 1   B2 = 0
A1 = 1    [    p1       p2       p5       p6   ]
A1 = 0    [    p3       p4       p7       p8   ]
A2 = 1    [    p9       p10      p13      p14  ]
A2 = 0    [    p11      p12      p15      p16  ]    = P AB    (2)
This matrix lists the different priming conditions in a set of four blocks, which allows us to consider the structure of the probabilities describing the likely interpretation of a given conceptual combination. Observe how the matrix P AB in equation (2) is complete, in that it covers all possible priming conditions across the respective senses of the concepts.
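For illustration, the matrix can be assembled from raw interpretation counts gathered under the four priming conditions (a sketch with our own naming; the counts shown are placeholders, not experimental data):

```python
import numpy as np


def estimate_p_ab(counts: dict) -> np.ndarray:
    """Assemble the 4 x 4 matrix P AB of (2) from per-condition counts.

    counts[(i, j)] is a 2 x 2 array of interpretation counts for priming
    condition (Ai, Bj), with rows Ai = 1, 0 and columns Bj = 1, 0. Each
    quadrant is normalised separately, since each priming condition is a
    separate experiment.
    """
    P = np.zeros((4, 4))
    for i in (1, 2):        # prime for concept A: dominant (1) or subordinate (2) sense
        for j in (1, 2):    # prime for concept B
            block = np.asarray(counts[(i, j)], dtype=float)
            P[2 * (i - 1):2 * i, 2 * (j - 1):2 * j] = block / block.sum()
    return P


counts = {  # placeholder counts only
    (1, 1): [[40, 10], [10, 40]],
    (1, 2): [[25, 25], [25, 25]],
    (2, 1): [[25, 25], [25, 25]],
    (2, 2): [[10, 40], [40, 10]],
}
P_AB = estimate_p_ab(counts)
```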
In what follows we will show that P AB can be used to determine whether a conceptual combination is compositional, or not. We start by considering what might be required in order for a conceptual combination to be deemed compositional.
Compositional semantics
Were the semantics of the conceptual combination AB to be compositional, how would this be reflected in its probabilistic structure? Intuitively, compositionality would imply that the joint probability distribution could be recovered from the probability distributions constructed using each individual concept. This intuition can be formalized by assuming that the joint probability distribution, (2), is factorisable. For example, a naive assumption would be that the concepts in the combination can be interpreted independently of one another,
Pr(Ai, Bj) = Pr(Ai) Pr(Bj),    i, j ∈ {1, 2}.    (3)
The syntax of this equation clearly reflects how the probabilistic behaviour of the conceptual combination as represented by the four joint distributions Pr(Ai, Bj) can be decomposed into a product of constituent probability distributions that correspond to the constituent concepts, namely Pr(Ai) and Pr(Bj).
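Under this naive reading, compositionality within a single priming condition amounts to an independence check on the corresponding 2 × 2 block of probabilities. A small sketch (the quadrant layout follows (2); the tolerance and example numbers are illustrative only):

```python
import numpy as np


def is_independent(quadrant: np.ndarray, tol: float = 1e-2) -> bool:
    """Check Pr(Ai, Bj) = Pr(Ai) Pr(Bj) for one priming condition (rows: Ai = 1, 0; columns: Bj = 1, 0)."""
    pr_a = quadrant.sum(axis=1)  # marginal distribution over the interpretation of A
    pr_b = quadrant.sum(axis=0)  # marginal distribution over the interpretation of B
    return bool(np.allclose(quadrant, np.outer(pr_a, pr_b), atol=tol))


quadrant = np.array([[0.42, 0.18],
                     [0.28, 0.12]])
print(is_independent(quadrant))  # True: e.g. 0.6 * 0.7 = 0.42
```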
Less naive formalizations of compositionality are possible. For example, the competition among relations in nominals (CARIN) theory of conceptual combination proposes sixteen possible relations that can be used to link concepts, e.g., causes, during, for and about (Gagne & Shoben, 1997;Gagne, 2001). Therefore, the interpretation can be thought of as being conditioned by an implicit linking relation r:
Pr(Ai, Bj) = Σ_{r∈R} Pr(Ai, Bj | r) Pr(r),    (4)
where R is a random variable over the possible implicit linking relations. One form of compositionality assumption assumes that the relation independently affects the interpretation of each concept:
Pr(Ai, Bj | R) = Pr(Ai | R) Pr(Bj | R),    i, j ∈ {1, 2}.    (5)
CARIN is founded on the assumption that when people interpret a novel conceptual combination AB, the availability of each of these sixteen relations is determined by the relation frequency distribution for the modifying concept A. The random variable R is thus assumed to range over these sixteen relations. Therefore, CARIN explains why "mountain goat" is easier to interpret than "mountain magazine" because the located relation is more often used with the modifier MOUNTAIN than the about relation. The essence of the CARIN approach to interpreting conceptual combinations can therefore be derived as follows using the compositionality assumption (5) together with the assumption that the interpretation of concept B is independent of the linking relation,
Pr(Ai, Bj) = Σ_{r∈R} Pr(Ai, Bj | r) Pr(r)    (6)
           = Σ_{r∈R} Pr(Ai | r) Pr(Bj | r) Pr(r)    (7)
           = Σ_{r∈R} Pr(Ai, r) Pr(Bj)    (8)
for i, j ∈ {1, 2}. The last line of the derivation shows how the relation frequency distribution for the modifying concept A is formalized by the joint distribution Pr(Ai, R).
The goal of the preceding development of equations is not to formalize existing models, or
propose new ones, but to reveal a common aspect that is not frequently noticed; in all of the above cases the joint distribution is formed by taking the product of two simpler distributions, each obtained with reference to the constituent concepts. This reveals that an assumption of compositionality underlies the vast majority of models in this area.
Following the general approach used by Golshani & Fahmi (2001), let us explicitly formalize this compositionality assumption, with reference to two functions f and g which are probabilistic functions centered around Ai and Bj respectively. In addition, we will use the symbol λ to denote implicit variables such as R that affect the interpretation of the conceptual combination.
Assumption 1. If it is possible to factorise the joint probability distribution describing a compound of two presumed component concepts A and B such that:
Pr(Ai, Bj|λ) = f (Ai, λ)g(Bj, λ), i, j ∈ {1, 2},(9)
then the joint probability distribution describing the conceptual combination AB can be factorised in terms of probability distributions based on A and B alone, and the system will be deemed compositional.
Non-compositional semantics
To analyse non-compositionality we draw upon results from the field of quantum theory. This step is not as arbitrary as it might at first seem. Indeed, equation (9) forms the basis of much of the analysis of entangled quantum systems (Aspect, 1999;Fine, 1982a;Isham, 1995;Maudlin, 1994;Laloë, 2001). An entangled system is one that cannot be modeled as a composition of its parts, and the field of physics has a highly developed mathematical apparatus devoted to their detection and description. The fundamental lesson learned in physics is that the joint probability of entangled quantum systems cannot be factorised into probability distributions based on A and B, and thus, they must be thought of as non-compositional. In this section we shall make use of this formalism,
applying it to questions about the non-compositionality of conceptual combinations. We caution the reader that details are kept to a minimum, and encourage those interested towards the cited references and the supplementary details in the appendices.
Clauser & Horne (1974) published a versatile theorem which sets a bounded limit upon the probability distributions describing a system that is assumed to be compositional in the general form described by equation (9) (see Appendix A):
−1 ≤ f(A1, λ)g(B1, λ) − f(A1, λ)g(B2, λ) + f(A2, λ)g(B1, λ) + f(A2, λ)g(B2, λ) − f(A2, λ) − g(B1, λ) ≤ 0.   (10)
Suppose that the compositionality assumption expressed in (9) holds; then the previous equation can be rewritten in a much simpler form involving a set of probability distributions:

−1 ≤ Pr(A1, B1|λ) − Pr(A1, B2|λ) + Pr(A2, B1|λ) + Pr(A2, B2|λ) − Pr(A2|λ) − Pr(B1|λ) ≤ 0.   (11)

Observe how this inequality involves a set of four pairwise joint probability distributions together with two prior probabilities, which are in principle experimentally obtainable, allowing for the testing of the assumptions underlying (9). With some algebra (see Appendix A), this gives rise to the following expression of the CHSH inequality (Cereceda, 2000):

−2 ≤ E(A1, B1) + E(A1, B2) + E(A2, B1) − E(A2, B2) ≤ 2,   (12)

where E(Ai, Bj) = Pr(Ai, Bj|λ) + Pr(Āi, B̄j|λ) − Pr(Ai, B̄j|λ) − Pr(Āi, Bj|λ) is an expectation value. (Note: in the preceding, Pr(Ai) is shorthand for Pr(Ai = 1) and Pr(Āi) is shorthand for Pr(Ai = 0), i ∈ {1, 2}; similarly for Pr(Bj) and Pr(B̄j), j ∈ {1, 2}.)

The expectations can easily be computed from the matrix of probabilities (2). For example, E(A1, B1) = p1 + p4 − (p2 + p3). Recalling from (1) that p1 = Pr(A1 = 1, B1 = 1) and p4 = Pr(A1 = 0, B1 = 0), we recognize that p1 corresponds to a situation where concepts A and B have both been interpreted in their dominant sense, when in both cases the dominant sense of each concept has been primed. Similarly, p4 corresponds to both A and B being interpreted in their subordinate sense when the dominant sense of each concept has been primed. Thus, p1 + p4 = 1 occurs when the senses of the constituent concepts are perfectly correlated within the given priming condition. For example, assuming that the fruit sense of APPLE was primed and the food sense of CHIP was primed, perfect correlation of senses in this priming condition means two conditions hold: (1) when APPLE is interpreted as a fruit, CHIP is always interpreted as food (p1), and (2) when APPLE is not interpreted as fruit, CHIP is not interpreted as food (p4).
The combination of these two conditions implies that p1 + p4 = 1 and p2 + p3 = 0. Conversely, p2 + p3 = 1 occurs when the senses are perfectly anti-correlated. For example, assume the fruit sense of APPLE is primed and CHIP is primed in its circuit sense. Perfect anti-correlation of senses means two conditions hold: (1) when APPLE is not interpreted as a fruit, CHIP is always interpreted as a circuit (p3), and (2) when APPLE is interpreted as fruit, CHIP is not interpreted in its circuit sense (p2).
With this as the underlying intuition, the expectation value E(Ai, Bj) captures how well the senses of the constituent concepts are (anti-)correlating. The particular arrangement of probabilities in (2) is not significant: there are four possible ways to arrange the quadrants, each arrangement leading to another variant of the CHSH inequality (see Appendix A).
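As a concrete illustration of how the CHSH value is obtained from data of the form (2), the sketch below forms each expectation E(Ai, Bj) = p1 + p4 − (p2 + p3) from a 2×2 table [[p1, p2], [p3, p4]] collected under the corresponding priming condition, and combines the four expectations. The probabilities used are made up for illustration and are not the experimental values.

```python
# Sketch: CHSH value from four pairwise 2x2 joint distributions (one per priming condition).
# Each table is [[p1, p2], [p3, p4]] = [[Pr(Ai=1,Bj=1), Pr(Ai=1,Bj=0)],
#                                       [Pr(Ai=0,Bj=1), Pr(Ai=0,Bj=0)]].
# The numbers below are illustrative, not observed data.

def expectation(p):
    """E(Ai, Bj) = p1 + p4 - (p2 + p3)."""
    return p[0][0] + p[1][1] - (p[0][1] + p[1][0])

def chsh(p11, p12, p21, p22):
    """CHSH value E(A1,B1) + E(A1,B2) + E(A2,B1) - E(A2,B2)."""
    return (expectation(p11) + expectation(p12)
            + expectation(p21) - expectation(p22))

# Strongly correlated senses in three conditions, anti-correlated in the fourth:
p11 = [[0.5, 0.0], [0.0, 0.5]]
p12 = [[0.5, 0.0], [0.0, 0.5]]
p21 = [[0.5, 0.0], [0.0, 0.5]]
p22 = [[0.0, 0.5], [0.5, 0.0]]

print(chsh(p11, p12, p21, p22))  # 4.0 > 2, so no factorisable model exists for this data
```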
The quantity at the heart of the CHSH inequality, E(A1, B1) + E(A1, B2) + E(A2, B1) − E(A2, B2), will be referred to as the CHSH value. Any model of conceptual combinations based on factorisation as generally expressed in equation (9) will produce a CHSH value in the range [−2, 2]. Note that the reverse does not hold: if the CHSH value lies in this range, it does not always imply that the joint probability distribution is factorisable. It is therefore legitimate to question whether factorisability is the only way to express compositionality. An alternative is to employ addition, an approach commonly used to form probabilistic mixture models. For example, Mitchell & Lapata (2010) propose a composition function f of two semantic vectors u, v
as follows:
f(u, v) = αu + βv.   (13)

Assume the vector u models a concept SPRING with senses 'season' and 'coil'. Similarly, assume that the concept PLANT ranges over the senses 'vegetation' and 'factory'. The previous equation can be used as the basis of a mixture model describing the manner in which senses can be probabilistically attributed to the combined concept described by the vector f(u, v):

Pr(Ai, Bj) = α Pr(Ai) + (1 − α) Pr(Bj).   (14)
The coefficient α allows the component distributions forming the mixture model to be weighted. Such a compositional model may have a CHSH value in the range [−2, 2] and not be factorisable. Therefore, although factorisability can cover a wide range of possibilities for formalising the compositionality of conceptual combinations, it is not an expressive enough means to model all forms of compositionality. As a consequence, the inability to factorise the joint probability distribution is not a strong enough criterion to deem the associated conceptual combination to be non-compositional. A more reliable criterion is when the CHSH value lies outside of the range [−2, 2], meaning its absolute value is greater than 2 (|CHSH| > 2). When this condition holds, it is not possible to construct a joint probability distribution Pr(A1, A2, B1, B2) from the four pairwise distributions Pr(A1, B1), Pr(A1, B2), Pr(A2, B1), Pr(A2, B2). This latter fact determines a clear test for non-compositionality, which does not rely on compositionality being modelled by factorisation.
Compositionality: Existence of the joint probability distribution
Adherence to the CHSH inequality turns out to be a necessary condition that must be satisfied if the joint probability distribution Pr(A1, A2, B1, B2) is to be constructed from the four pairwise joint probability distributions Pr(Ai, Bj), i, j ∈ {1, 2} (see Appendix B for the argument why this is the case). However, an even stronger result was provided by Fine (1982b), which provides both the necessary and sufficient conditions for the existence of the joint probability distribution Pr(A1, A2, B1, B2):
Fine Theorem 3 (Fine, 1982b): If A1, A2, B1, B2 are bivalent random variables with joint distributions Pr(Ai, Bj), i, j ∈ {1, 2}, then a necessary and sufficient condition for the existence of a joint distribution Pr(A1, A2, B1, B2) is that the following system of inequalities is satisfied:
−1 ≤ Pr(A1, B1) + Pr(A1, B2) + Pr(A2, B2) − Pr(A2, B1) − Pr(A1) − Pr(B2) ≤ 0   (15)
−1 ≤ Pr(A2, B1) + Pr(A2, B2) + Pr(A1, B2) − Pr(A1, B1) − Pr(A2) − Pr(B2) ≤ 0   (16)
−1 ≤ Pr(A1, B2) + Pr(A1, B1) + Pr(A2, B1) − Pr(A2, B2) − Pr(A1) − Pr(B1) ≤ 0   (17)
−1 ≤ Pr(A2, B2) + Pr(A2, B1) + Pr(A1, B1) − Pr(A1, B2) − Pr(A2) − Pr(B1) ≤ 0.   (18)
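Once the four pairwise joint probabilities and the marginals have been estimated, these four inequalities can be checked directly. The function below is a hypothetical helper, not the authors' analysis code, and the example values at the end are made up.

```python
# Sketch: check the system of inequalities (15)-(18) in Fine's theorem.
# pr_ab[(i, j)] holds Pr(Ai = 1, Bj = 1); pr_a[i] = Pr(Ai = 1); pr_b[j] = Pr(Bj = 1).
# Index convention: 1 = dominant sense, 2 = subordinate sense.

def fine_violated(pr_ab, pr_a, pr_b, tol=1e-9):
    exprs = [
        pr_ab[(1, 1)] + pr_ab[(1, 2)] + pr_ab[(2, 2)] - pr_ab[(2, 1)] - pr_a[1] - pr_b[2],  # (15)
        pr_ab[(2, 1)] + pr_ab[(2, 2)] + pr_ab[(1, 2)] - pr_ab[(1, 1)] - pr_a[2] - pr_b[2],  # (16)
        pr_ab[(1, 2)] + pr_ab[(1, 1)] + pr_ab[(2, 1)] - pr_ab[(2, 2)] - pr_a[1] - pr_b[1],  # (17)
        pr_ab[(2, 2)] + pr_ab[(2, 1)] + pr_ab[(1, 1)] - pr_ab[(1, 2)] - pr_a[2] - pr_b[1],  # (18)
    ]
    return any(e < -1 - tol or e > 0 + tol for e in exprs)

# Hypothetical estimates: senses strongly correlated in three conditions, never in the fourth.
pr_ab = {(1, 1): 0.5, (1, 2): 0.5, (2, 1): 0.5, (2, 2): 0.0}
pr_a = {1: 0.5, 2: 0.5}
pr_b = {1: 0.5, 2: 0.5}
print(fine_violated(pr_ab, pr_a, pr_b))  # True: (17) exceeds 0, so no joint distribution exists
```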
Fine's third theorem, which permits us to analyze compositionality from the perspective of the above four equations, opens up a totally new avenue for such an analysis: if the four joint probabilities making up the matrix of probabilities in equation (2) do not satisfy the system of inequalities in Fine's theorem, then the joint distribution Pr(A1, A2, B1, B2) cannot be constructed from the four pairwise joint probability distributions Pr(Ai, Bj), i, j ∈ {1, 2}. The intuition behind this result is that non-compositionality equates with the inability to model the compound in the probability space defined by the variables A1, A2, B1 and B2. (Recall that these variables model how the individual concepts in the compound are interpreted.) Recently, Dzhafarov & Kujala (2012) have established a connection between cognitive modeling and Fine's theorem using the theory of selective influences, a result that adds to the cognitive validity of Fine's theorem. In a model with several factors and a set of random variables describing responses, selective influence concerns the problem of what factors influence what variables.

The interpretation of conceptual combinations within a priming scenario can be treated with a model of selective influence, with primes corresponding to the factors and the random variables corresponding to the interpretations of the concepts. A type of selective influence, termed marginal selectivity, assumes that primes only affect the intended concept (as was depicted in Figure 3). For example, with respect to the conceptual combination BOXER BAT, marginal selectivity entails that the interpretation of BAT does not change when the primes of BOXER are varied from "fighter" to "dog". Therefore, for marginal selectivity to hold, the following four conditions must be satisfied:
Pr(Ai) = Pr(Ai, B1) + Pr(Ai, B̄1)   (19)
Pr(Ai) = Pr(Ai, B2) + Pr(Ai, B̄2)   (20)
Pr(Bj) = Pr(A1, Bj) + Pr(Ā1, Bj)   (21)
Pr(Bj) = Pr(A2, Bj) + Pr(Ā2, Bj)   (22)

Note how these equations express that the interpretation of a concept, represented by the marginal probability, e.g., Pr(Ai), is stable with respect to how the other concept is primed, represented by B1 and B2. Dzhafarov & Kujala (2012) point out that selective influence implies marginal selectivity. Failure of marginal selectivity means there can be no model of selective influence, meaning it is not possible to construct a joint probability distribution Pr(A1, A2, B1, B2) from the four pairwise distributions Pr(A1, B1), Pr(A1, B2), Pr(A2, B1), Pr(A2, B2), a result that is actually a generalization of Fine's theorem. Therefore, if marginal selectivity fails, then the conceptual combination can be immediately judged as non-compositional. The existence of a joint distribution over the variables A1, A2, B1, B2 implies the satisfaction of all the marginal conditions, including marginal selectivity, which is but one form of marginal condition. Thus if there is a Pr(A1, A2, B1, B2) (which means all marginal conditions are satisfied) then the system of four inequalities in Fine's theorem is not violated. Conversely, if marginal selectivity does not hold, then the joint distribution Pr(A1, A2, B1, B2) cannot be formed from the four pairwise joint distributions. For applications in cognitive science, this means marginal selectivity must first be tested before Fine's theorem can be applied.

We now illustrate how these probabilistic methods for analysing compositionality can be deployed in an experimental setting.
Empirical evaluation

Subjects
Sixty-five subjects were recruited from the undergraduate psychology pool at Griffith University and received credit for their participation. Only native English speakers were selected in order to remove the possibility that the interpretation of conceptual combinations would be confounded by language issues.
Design and materials
We utilised four different priming regimes in order to generate the four different experimental scenarios suggested by Fig. 3. In these experiments, subjects were first primed and then presented with a non-lexicalised conceptual combination, which they were asked to interpret also designating the senses that were used in that interpretation. A probabilistic analysis was then performed upon the data so obtained. Subjects were presented with twenty-four 'true' conceptual combinations (see below for an explanation), and so participated in twenty-four test trials. Table 1 lists the set of conceptual combinations used, as well as the corresponding primes.
Primes were selected from the USF free association norms (Nelson et al., 2004) and the University of Alberta norms of homographs (Twilley et al., 1994). The majority of primes were selected from the USF norms. The procedure for selecting primes from these norms was to view a potential prime as a cue which produces the concept as an associate. As an example, "money" was chosen from the USF norms to prime the financial sense of BANK as "bank" is produced as a free associate of the cue 'money'. Similarly, "river" was chosen to prime the natural sense of BANK. Occasionally when a particular sense was not present in the USF norms, we drew upon the University of Alberta norms. Importantly, the USF norms were used to avoid cues such as 'account which was associated with both BANK and LOG, thereby minimising the possibility of priming more than one concept at a time.
A single factor design was used, which analysed responses to non-lexicalised conceptual combinations under priming conditions that varied between subjects. A subject was assigned to one of four priming conditions for each presented conceptual combination. For example, the four priming conditions for BANK LOG are (1) "money" and "journal" (A1 − B1), (2) "money" and "tree"
(A1 − B2), (3) "river" and "journal" (A2 − B1), or (4) "river" and "tree" (A2 − B2). This assignment of primes was based upon a between groups Latin square design, such that for the 24 combinations, each participant completed each priming condition 6 times. Combinations were chosen with the expectation that the ambiguity of its constituents would allow a number of alternative interpretations, where each interpretation arose from a different attribution of meaning to the underlying sense of the ambiguous concepts (Costello & Keane, 1997).
Procedure
Participants completed 3 practice trials, 24 test trials and 24 filler trials, and Fig. 4 shows a schematic illustration of the procedure followed during a test trial. All trials were composed of six phases, consisting of three initial time-pressured tasks followed by three non-timed tasks. The time limitation of the first three phases was utilised in order to maximise the effectiveness of the priming. The experiment took around 20-30 minutes to complete, and participants pushed the ENTER key to begin each trial.
Phases 1-2:
Two consecutive double lexical decision tasks were carried out, where participants were asked to decide as quickly as possible whether two letter strings, a prime and the concept to be presented as part of the compound given in Phase 3, were legitimate words, or if one of the strings was a non-word. Each lexical decision consisted of the two letter strings presented in the center of the screen, one below the other, in order to discourage participants from interpreting the two words as a phrase. Participants responded to the decision tasks by pushing a button on the keyboard labeled 'word' or a button labeled 'non-word' (left arrow and right arrow keys respectively). For instance, if given the strings "coil" and "spring", then participants were expected to decide that both strings were words and so push the 'word' key, whereas if given "grod" and "church" then participants were expected to decide that they had been shown a non-word and to push the 'non-word' key. For all of the test trials participants received two phases of word-word strings. The response ratios for the two priming phases were: 50% word → word (test trial), 25% non-word → non-word (filler trial), 12.5% word → non-word (filler trial), 12.5% non-word → word (filler trial). In phases where a non-word was present, it appeared equally often in the top or the bottom portion of the screen.
The double-lexical decision task was used to associate the priming word and test word together without participants interpreting them as a compound (Gagne, 2001). This procedure isolates the experimental priming to each concept in the combination. For example, the lexical decision task applied to "coil" and "spring" was designed to prime the coil sense of the concept SPRING in the conceptual combination SPRING PLANT. The order of the two double lexical decision tasks was counter-balanced, so that half were presented in the same order as the compound words (e.g., "coil" and "spring" were first presented, then "factory" and "plant") and half were presented in the reverse order (e.g., first "factory" and "plant" were presented for lexical decision, followed by "coil" and "spring").
Phase 3:
A conceptual combination was presented in the center of the screen (e.g., "spring plant"). Participants were asked to push the space bar as soon as they thought of an interpretation for the compound. Filler compounds were included for the filler (i.e., non-word) trials so as not to disrupt the participant's rhythm in making two lexical decisions followed by an interpretation.
Phase 4:
Participants were asked to type in a description of their interpretation.
Phases 5-6:
Two disambiguation tasks were carried out, where participants chose what sense they gave to each word from a list (e.g., plant = A. 'a living thing'; B. 'a factory'; C. 'other').
Results
Experimental subcomponents utilizing non-words were discarded during the analysis presented here, and the results obtained are presented in Table 1.
In total, 91.5% of the interpretations provided by the subjects fell within one of the four primed senses of the studied conceptual combinations. Confidence intervals around the CHSH value were computed using the bootstrap method. The bootstrap method is designed to estimate an unknown parameter, in this case the CHSH value, and provide confidence intervals for this estimate. The method assumes that the observed data is independent and identically distributed.
Independence in this experiment means that each subject's interpretation of a given conceptual combination is independent of the interpretations of other subjects for that combination, and so is a reasonable assumption. The actual observed data consists of interpretations which populate each of the 16 cells in P_AB (see equation (2)). A bootstrap sample is a random sample drawn with replacement from the actual sample. Two thousand samples were bootstrapped from the observed data for each conceptual combination. The CHSH value was computed for each bootstrap sample to produce a distribution of values across the two thousand samples. Percentile confidence limits were applied (Efron & Tibshirani, 1986). If ĈHSH_{L,α} denotes the estimate of the CHSH value from the bootstrap distribution such that only a fraction α/2 of all bootstrap estimates are less than this value, and likewise ĈHSH_{H,α} is the estimate exceeded by only α/2 of all bootstrap estimates, then the confidence interval is given by (ĈHSH_{L,α}, ĈHSH_{H,α}). A 95% confidence level was applied, meaning α = 0.05.
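A minimal sketch of this percentile bootstrap is given below. Here `observations` and `chsh_from_sample` are hypothetical: a list of per-subject interpretation records and a helper that computes the CHSH value from such a list; neither is part of the original study's code.

```python
# Sketch: percentile bootstrap confidence interval for the CHSH value.
# `observations` is assumed to be a list of per-subject interpretation records and
# `chsh_from_sample` a helper computing the CHSH value from such a list (both hypothetical).
import random

def bootstrap_ci(observations, chsh_from_sample, n_boot=2000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        # Resample with replacement, keeping the sample size fixed.
        resample = [rng.choice(observations) for _ in observations]
        estimates.append(chsh_from_sample(resample))
    estimates.sort()
    lo = estimates[int((alpha / 2) * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```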
In order to apply Fine's theorem, marginal selectivity must first be tested. Appendix C depicts an analysis of marginal selectivity. This analysis reveals that the combination STOCK TICK fails marginal selectivity (α = 0.05) and can immediately be declared non-compositional as a joint distribution cannot be constructed when marginal selectivity fails (Dzhafarov & Kujala, 2012).
For compounds other than STOCK TICK, the bootstrap method was applied. The percentage of the two thousand bootstrap samples that violated the theorem was computed. This value is used to denote the confidence that the theorem was violated for the observed data.
Discussion
A violation of the CHSH inequality or Fine's theorem by a given conceptual combination gives us good reason to believe that it is non-compositional. In addition, it is possible to provide further details about how the joint probability distribution is structured when a violation occurs which serves to illustrate a number of key features about non-compositionality. In what follows, we shall utilize two examples: BOXER BAT and APPLE CHIP.
BOXER BAT
Equation (23) depicts the empirical results for BOXER BAT. Here, we see no particular ordering or patterns. In particular, when we compare the form of the equation required for a violation (12) with the actual values in (23), we can see that the probability mass does not center sufficiently around the diagonals in such a way that it can produce the correlations between the senses necessary to violate the CHSH inequality. As |CHSH| ≤ 2 is not a sufficient condition for the joint probability distribution to exist, Fine's theorem must be used. As this theorem is not violated, BOXER BAT is therefore deemed to be compositional, as a joint probability distribution can be constructed which models how it is interpreted within the given priming conditions.
[Matrix (23): observed probabilities for BOXER BAT, with rows A1 (dog) and A2 (fighter) for boxer, columns B1 (ball) and B2 (vampire) for bat, and each variable taking the values 1 and 0.]
APPLE CHIP
In contrast, APPLE CHIP leads to a joint distribution that has a more interesting structure:
[Matrix (24): observed probabilities for APPLE CHIP, with rows A1 (banana) and A2 (computer) for apple, columns B1 (potato) and B2 (circuit) for chip, and each variable taking the values 1 and 0.]
In this case, we see a strong pattern of correlation between the senses across the four priming conditions because the probabilities are concentrated on the diagonals or reverse diagonals. Thus, whenever a subject interprets APPLE as a fruit they tend to interpret CHIP in its FOOD sense.
Conversely, if APPLE is interpreted as a 'computer' then CHIP is interpreted as an 'electronic device'. A second key factor is that a non-zero value has been returned by the ensemble of subjects for one off-diagonal case, p2 = Pr(A1 = 1, B1 = 0) = 0.06 (see equation (2)). Even though the food sense of CHIP has been primed, atypical interpretations of the compound are produced, for example, "apple's growth is controlled by an internal chip". Costello & Keane (2000) identify three categories of non-compositionality in novel conceptual combinations, and atypical instances are at the basis of one of these categories. Some other non-compositional combinations similarly showed atypical interpretations. For example, BANK LOG also exhibits a strong correlation between the senses: when BANK is interpreted as a financial institution, LOG tends to be interpreted as a record. Conversely, when BANK is interpreted in its RIVER sense, LOG is interpreted as a PIECE OF WOOD. However, there were atypical cases where the senses cross over, which produces an off-diagonal probability, e.g., "a record of a bank of a river".
The CHSH value for APPLE CHIP is greater than 2, which means that the joint probability distribution Pr(A1, A2, B1, B2) cannot be constructed from the four pairwise joint probability distributions and is therefore non-compositional.
Agreement between CHSH and Fine's theorem
Ideally, the compositionality analysis using the CHSH inequality and Fine's theorem should be in agreement. For example, if the CHSH inequality is violated (|CHSH| > 2), then the system of inequalities (15)-(18) does not need to be tested, and marginal selectivity does not need to be tested when applying the CHSH inequality. Therefore, we advocate that the following procedure could be followed to analyse the potential compositionality of a given conceptual combination:

1. If the CHSH inequality is violated (|CHSH| > 2), then the conceptual combination is "non-compositional".

2. If the CHSH inequality is not violated (|CHSH| ≤ 2), then employ the following three step procedure:
(a) First test marginal selectivity.
(b) If marginal selectivity fails, then the conceptual combination is "non-compositional".
(c) If marginal selectivity holds, then Fine's theorem can be applied. If any of the inequalities in Fine's theorem are violated, the conceptual combination is "non-compositional", otherwise it is "compositional".
This procedure is the major contribution of this paper. It provides a clear set of tests that can be used to move the debate about compositionality into the realm of empirical testability. We shall now conclude this section with some further analysis of our results, emphasising that this analysis is not essential to the above key contribution.
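A sketch of how this procedure could be operationalised is given below, reusing the hypothetical helpers sketched earlier (`chsh` and `fine_violated`). The marginal selectivity check is simplified to a fixed tolerance rather than the two-sided test of proportions reported in Appendix C, so this is an illustration under stated assumptions rather than the analysis actually used.

```python
# Sketch of the proposed test procedure (simplified; thresholds and statistics illustrative).
# Relies on chsh() and fine_violated() from the earlier sketches.

def marginal_selectivity_holds(tables, tol=0.05):
    """Check that Pr(Ai = 1) is (approximately) stable across the two B primes, and that
    Pr(Bj = 1) is stable across the two A primes, as in (19)-(22).
    tables[(i, j)] is the 2x2 table [[p1, p2], [p3, p4]] for priming condition (Ai, Bj)."""
    for i in (1, 2):
        a_under_b1 = tables[(i, 1)][0][0] + tables[(i, 1)][0][1]
        a_under_b2 = tables[(i, 2)][0][0] + tables[(i, 2)][0][1]
        if abs(a_under_b1 - a_under_b2) > tol:
            return False
    for j in (1, 2):
        b_under_a1 = tables[(1, j)][0][0] + tables[(1, j)][1][0]
        b_under_a2 = tables[(2, j)][0][0] + tables[(2, j)][1][0]
        if abs(b_under_a1 - b_under_a2) > tol:
            return False
    return True

def is_compositional(tables, tol=0.05):
    """tables[(i, j)] = 2x2 joint distribution under priming condition (Ai, Bj)."""
    value = chsh(tables[(1, 1)], tables[(1, 2)], tables[(2, 1)], tables[(2, 2)])
    if abs(value) > 2:                                   # step 1: CHSH violation
        return False
    if not marginal_selectivity_holds(tables, tol):      # steps 2(a)-(b)
        return False
    joint11 = {(i, j): tables[(i, j)][0][0] for i in (1, 2) for j in (1, 2)}
    pr_a = {i: tables[(i, 1)][0][0] + tables[(i, 1)][0][1] for i in (1, 2)}
    pr_b = {j: tables[(1, j)][0][0] + tables[(1, j)][1][0] for j in (1, 2)}
    return not fine_violated(joint11, pr_a, pr_b)        # step 2(c): Fine's theorem
```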
Analysis of priming
The priming data was analysed according to whether participants' interpretations matched or did not match the priming words. Interpretations were sorted into two conditions:
1. Consistent: The sense chosen for both words of the compound matched the senses of both of the priming words.
2. Inconsistent: The sense chosen for both words of the compound did not match the senses of the priming words.
As an example, if participants were given the primes "money" and "journal" followed by the compound BANK LOG, then the interpretation "a record in a financial institution" was consistent with both primes, whereas "a log on a river bank" was inconsistent with both primes. The effect of priming was analysed by investigating the number of interpretations and speed of response of each participant depending upon the priming condition. We also analysed the effect of prime order. Mean response time for interpreting a compound was 3533.26 ms, with a standard deviation of 3836.41 ms. Responses greater than 2 standard deviations were removed from the data.
The two dependent variables were the frequency of interpretations and the speed with which an interpretation was produced (response time).
Frequency of Interpretations
The frequency of interpretations was analysed using Wilcoxon signed-rank tests. As expected, overall participants gave significantly more interpretations that were consistent with the primes (mean = 6.88) than inconsistent with the primes (mean = 4.72), z = 4.06, p < .0001. This provides evidence that the primes were affecting the interpretations given in the correct direction. To analyse whether the order in which the primes were shown had an effect on the number of interpretations, we divided the consistent and inconsistent interpretations according to whether the priming words were in the same order or reverse order to that of the compound. No significant differences were found between the prime order conditions. Furthermore, the priming effect was still present within the priming order conditions. That is, when prime order was the same, participants gave significantly more consistent interpretations (mean = 3.20) than inconsistent interpretations (mean = 2.32), z = 2.77, p = .006.
Likewise, when prime order was reversed, participants again gave significantly more consistent interpretations (mean = 3.67) than inconsistent interpretations (mean = 2.40), z = 3.34, p = .001. Overall, these results provide strong evidence that the priming was effective, and that it is independent of priming order.
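This kind of paired comparison can be reproduced with standard tooling. The sketch below uses scipy's Wilcoxon signed-rank test on hypothetical per-participant counts; the numbers are illustrative only, not the study data.

```python
# Sketch: Wilcoxon signed-rank comparison of per-participant interpretation counts.
# The two lists are hypothetical counts of prime-consistent vs prime-inconsistent
# interpretations, paired by participant.
from scipy.stats import wilcoxon

consistent   = [7, 6, 8, 5, 7, 9, 6, 7, 8, 6]   # illustrative data only
inconsistent = [5, 4, 6, 4, 5, 6, 4, 5, 5, 4]

stat, p_value = wilcoxon(consistent, inconsistent)
print(stat, p_value)
```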
Response time
The speed of producing an interpretation was analysed according to whether it was consistent or inconsistent with regards to the priming words, and whether this was affected by prime order. It was expected that if the priming was effective then interpretations that were inconsistent with the primes would be produced slower than interpretations that were consistent with the primes.
As seen in Figure 6, the mean response times were in the correct direction. Since a number of participants did not give responses for all of the categories, the number of participants in the analysis was 51. The analysis showed no main effect of Interpretation (p = 0.297), Prime Order (p = 0.718), nor an Interpretation x Prime Order interaction (p = 0.994). One likely reason for the non-significant effects is the large variance in response times (range = 369 ms to 10035 ms), thus making it difficult for the mean differences to reach significance. For this reason we feel that the frequency scores are more reliable measures, and importantly these showed significant effects of priming.
Compound familiarity
One concern is that the evidence for non-compositionality found in this study may be a function of familiarity. In particular, highly familiar compounds would be expected to require less combinatorial processing as the combined meaning may simply be retrieved from long term memory.
We consider this possibility unlikely due to the experimental procedure followed. The fact that both words are ambiguous allows the priming procedure to shift participants into considering new combined meanings. For instance, while most participants (86%) interpreted SPRING PLANT as "a plant that grows in spring", when primed with 'coil' and 'leaf', 3% of participants gave the interpretation "a springy plant". Thus these participants have arguably been influenced by priming towards generating a new meaning, even though a highly common meaning already exists. In fact, as previously mentioned, for SPRING PLANT and other compounds the findings of non-compositionality seem to depend upon participants producing novel meanings for the compounds. This finding goes against the hypothesis that non-compositionality is driven entirely by the retrieval of pre-stored meanings. To test whether familiarity is associated with non-compositionality, we obtained hit rates for each compound by typing each into Google with quotes. This measure of familiarity has been used in previous studies (e.g., Ramm & Halford, 2012; Wisniewski & Murphy, 2005). It was found that the novelty of compounds based upon hit rates ranged from 144 (STAG YARN) to 9,460,000 (BATTERY CHARGE). To reduce the large variance obtained in the hit rates we transformed the scores into logs of base ten. If familiarity were driving the non-compositionality results it would be expected that CHSH scores would be positively correlated with Google hit rates. To test this we calculated a Pearson r correlation. This showed a weak positive correlation between the two variables, though this was non-significant, r = 0.21, p = .337. Thus we did not find evidence for the hypothesis that the non-compositionality of compounds in this study is driven by familiarity. However, as there were only 24 compounds under study, we acknowledge that there may not have been enough power to derive a significant correlation.
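The familiarity check can be reproduced in the same way. Below is a minimal sketch that log-transforms hit rates and correlates them with CHSH values using scipy; both lists are placeholders, not the values collected in the study.

```python
# Sketch: correlation between compound familiarity (log10 Google hits) and CHSH value.
# The hit counts and CHSH values below are hypothetical placeholders.
import math
from scipy.stats import pearsonr

hits = [150, 5200, 87000, 310000, 9000000]       # hypothetical raw hit rates
chsh_values = [1.77, 1.18, 2.01, 1.56, 2.13]     # hypothetical CHSH values

log_hits = [math.log10(h) for h in hits]
r, p = pearsonr(log_hits, chsh_values)
print(r, p)
```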
More generally, the primes are an experimentally pragmatic means to manipulate the manner in which context affects the interpretation applied to conceptual combinations, and so they need only influence the interpretation, not determine it. The violations that do occur arise only with respect to the reported priming conditions, and may not occur in a different experimental context.

Costello & Keane (2000) classify non-compositional conceptual combinations into three categories depending upon how their apparent non-compositionality arises. Firstly, some combinations are deemed non-compositional because of emergent properties, which generally arise from a meaning which is based on a subset of atypical instances. The aforementioned PET FISH example is placed in this category. A second set of conceptual combinations are classified non-compositional due to the manner in which the senses of the combining words are extended beyond their standard usage, to refer to instances outside the categories usually named by those words. Finally, some conceptual combinations are classified as non-compositional because they make use of cognitive processes such as metaphor, analogy or metonymy in their interpretation. Costello & Keane (2000) use the conceptual combination SHOVEL BIRD to illustrate all three categories:

1. A "shovel bird" could be a bird with a flat beak for digging up food.
2. A "shovel bird" could be a bird that comes to eat worms when you dig in the garden.
3. A "shovel bird" could be a plane that scoops up water from lakes to dump on fires.
4. A "shovel bird" could be a company logo stamped on the handle of a shovel.
5. A "shovel bird" could be someone allowed out of jail (free as a bird) as long as he works on a road crew.

They argue that (1) and (2) are examples of the first category because a bird with a flat beak is atypical, whereas (3) illustrates the second category because it extends the sense of both SHOVEL and BIRD beyond their normal usage. Finally (4) and (5) are put forward as examples of the third category due to their metaphoric nature. Costello & Keane (2000) detail how their constraint-based theory of conceptual combination specifically relates to each of these categories. The framework presented in this paper, however, models the non-compositionality of SHOVEL BIRD irrespective of the category of non-compositionality involved. For example, SHOVEL has the sense of being a tool, or being shaped like a shovel. The concept BIRD has three senses in the preceding example: relating to an animal, a plane, and a prisoner. Thus, the concept BIRD could be modeled as consisting of both a dominant ANIMAL (A1) and a subordinate PLANE (A2) sense, and then Fine's theorem or the CHSH inequality applied to test for the non-compositionality of each combination resulting from a combination of BIRD with SHOVEL.
Broader reflections on compositionality and non-compositionality
In addition, there is no requirement in the presented analytical framework that the concepts be homographs. We require only that there be ambiguity caused by multiple possible interpretations of a concept, and this readily presents. A WordNet analysis of the noun-noun combinations used in the compositional models explored by Mitchell & Lapata (2010) reveals that the vast majority have more than one synset and hence more than one shade of meaning, and these may even be related (as was the case for the polysemous concept SHOVEL). Ambiguity could also derive from relations. For example, the CARIN model assumes that relations apply to the modifier, so in ADOLESCENT DOCTOR (taken from Gagne (2001)), an ambiguity arises between the competing relations in "doctor FOR adolescents" and "doctor IS adolescent". Both of these possibilities for the concept ADOLESCENT could be accessed through priming, and then probabilistically represented with their corresponding variables A1 or A2 (Gagne (2001) provides an experimental procedure for priming relations). DOCTOR is also ambiguous because it is polysemous, e.g. a medical doctor, or someone holding a PhD. Both of these possibilities could be modeled by the variables B1 and B2. This example shows that the analytical framework presented here could be applied to the study of (non-)compositionality in conceptual combinations which have already been considered in the literature.
As the framework is general, and can be empirically tested, we argue that it has wide applicability for the analysis of conceptual combinations. However, the determination of compositionality that this analysis provides must take into account the priming conditions of the test, which empirically simulate the context (e.g., the discourse context) of the interpretation; there is no result without a supplied context (in this case the priming). This is also the case in quantum physics; a system may be deemed compositional in one measurement context, and not in another. It is a test for (non-)compositionality that this article contributes, not an adjudication of the debate on compositionality in conceptual representation.
This article has shown that compositionality can be expressed by a factorisable joint probability distribution. If the factorisability assumption is empirically borne out then the CHSH inequality will not be violated. Violation of the CHSH inequality, however, has an important modelling implication that goes beyond modelling by means of a factorisable joint probability distribution.
A parsimonious approach to modelling a conceptual combination entails that a single model can describe how it is being interpreted, namely a global joint probability distribution can be constructed from the four empirically collected pairwise joint distributions. Adherence to the CHSH inequality is necessary for the global distribution to be constructed. The implications of Fine's theorem are stronger. It specifies the necessary and sufficient conditions for the global distribution to be constructed from the pairwise joint distributions. In other words, a violation of Fine's theorem equates to the inability to treat the four variables A1, A2, B1 and B2 modelling the conceptual combination as random variables all defined on one probability space. Acacio De Barros (2012) labels this fact "contextuality" because the inability to construct the joint distribution over the four variables is equivalent to the inability to assign values to the four variables that is consistent with all the experimentally observed marginal distributions. This notion of contextuality provides some insight into non-compositionality as presented in this article. Intuitively, if the way the conceptual combination is being interpreted varies sufficiently across the different priming conditions, it will not be possible to provide a global model of the interpretations which is consistent with how the interpretations are behaving with respect to the marginal distributions. We contend that in such cases the combination is non-compositional, and moreover this provides an empirically testable dividing line between compositionality and non-compositionality. This view also holds that non-compositionality is a context-sensitive notion.
It appears that historically George Boole considered the problem of the constraints involved when trying to construct a global distribution of three variables from pairwise joint distributions (Pitowsky, 1994), however, it is the Russian mathematician Vorob'ev who discovered results equivalent to that of Fine's theorem. As he was a contemporary of Kolmogorov, who axiomatized probability theory, Vorob'ev was apparently ignored (Khrennikov, 2010). Thus, it was quantum physics that became famous for demonstrating the impossibility of modeling entangled systems in a single probability space. In our opinion, this is but a quirk of the past, and Dzhafarov & Kujala (2012) have independently shown how such results appear in cognitive psychology. The history just sketched, together with the fact that both the CHSH and Fine's theorem are based solely on conventional probability theory, opens the possibility to non-controversially apply them outside of quantum physics (Aerts et al., 2000;Bruza et al., 2011;Khrennikov, 2010;Aerts & Sozzo, 2011).
The resultant approach to modeling does not rest on the assumption that the cognitive system in question can be modeled in a single probability space, and so considerably widens the possibilities of analysis. It is likely that many systems traditionally deemed analytically intractable could be fruitfully modeled with such an extension.
Conclusions
This article departed from the assumption that conceptual combinations may not exclusively exhibit compositional semantics. The very idea of a non-compositional semantics has been resisted in the literature spanning cognitive science, philosophy and linguistics, probably because the "principle of compositionality" has had such a significant track record of success over a long period.
It is, however, precisely the assumption that semantics must necessarily be of a compositional form that has been regularly questioned in a wide range of literature. Despite this state of confusion, few analytical approaches have been proposed that are capable of demarcating the difference between the two forms of behavior. We have shown that it is possible to analyze the manner in which the semantics of a given conceptual combination might be considered as compositional, or noncompositional. Indeed, it is perhaps timely to remind the reader that we do not argue against compositional semantics per se. Rather, we have tried in this article to shed light on the line at which it breaks down: We believe that both compositional and non-compositional analyses will be necessary in order to provide a full account of the semantics of language.
The semantics of concepts were modeled in terms of the different senses in which a concept may be understood, where a given sense corresponds to the interpretation attributed to a particular ambiguous concept. These senses have a reliable intersubjective cognitive underpinning, as they were grounded in terms of human word association norm data, which was used to predict the probability that a subject would attribute a particular sense to an ambiguous concept. Utilising formal frameworks developed for analysing composite systems in quantum theory, we presented two methods that allow the semantics of conceptual combinations to be classified as "compositional" or "non-compositional". This classification differs from previous research in two ways.
Firstly, compositionality is not graded, e.g., "weak" vs. "strong" compositionality. Secondly, the declaration of compositionality, or non-compositionality, is not an absolute classification, but context sensitive. An empirical study of twenty-four novel conceptual combinations showed convincing evidence for non-compositional semantics in some combinations. An important corollary is that those conceptual combinations violating Fine's theorem and the CHSH inequality cannot be modeled in a single probability space. This result could have a marked impact in modeling cognitive phenomena more generally, as these phenomena are frequently assumed to be compositional, and are almost always modeled within a single probability space.
Finally, this article shows quantum theory is a fruitful source of new theoretical insights and tools for modeling conceptual semantics as it has already provided for other areas of cognition (Bruza et al., 2009;Aerts, 2009;Khrennikov, 2010;Busemeyer et al., 2011;Busemeyer & Bruza, 2012).
Thanks to James McGree for his expertise in the bootstrap statistical analysis. Thanks also to Dr. Mark Chappell for his assistance in running the experiments, and to the reviewers of an early draft of this article for the significant improvements that their comments helped to generate.
Wisniewski, E. J., & Murphy, G. (2005). Frequency of relation type as a determinant of conceptual combination: A reanalysis. Journal of Experimental Psychology: Learning, Memory and Cognition, 31(1), 169-174.

Zadrozny, W. (1992). On compositional semantics. In Proceedings of the International Conference on Computational Linguistics (COLING-92) (pp. 260-266).
A The Clauser Horne "A2 inequality"
In the appendix of (Clauser & Horne, 1974), the following inequality is proven (referred to as "inequality A2" in the paper). This general inequality can be used to generate inequalities that have been used in quantum physics.

Theorem 1. Let x1, x2, y1, y2, X, Y be reals where

0 ≤ x1, x2 ≤ X
0 ≤ y1, y2 ≤ Y,

then

−XY ≤ x1 y1 − x1 y2 + x2 y1 + x2 y2 − x2 Y − y1 X ≤ 0.   (25)
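Theorem 1 can be spot-checked numerically. The sketch below samples random values satisfying the stated bounds and confirms that the quantity stays within [−XY, 0]; this is only a sanity check, not a proof, and is not part of the original derivation.

```python
# Sketch: random numerical check of the Clauser-Horne "A2" inequality (Theorem 1).
import random

rng = random.Random(0)
for _ in range(100000):
    X, Y = rng.uniform(0.1, 2.0), rng.uniform(0.1, 2.0)
    x1, x2 = rng.uniform(0, X), rng.uniform(0, X)
    y1, y2 = rng.uniform(0, Y), rng.uniform(0, Y)
    u = x1 * y1 - x1 * y2 + x2 * y1 + x2 * y2 - x2 * Y - y1 * X
    assert -X * Y - 1e-9 <= u <= 1e-9, (X, Y, x1, x2, y1, y2, u)
print("no counterexample found")
```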
In the following, Ai is shorthand for Ai = 1 and Āi is shorthand for Ai = 0, i ∈ {1, 2}; similarly for Bj and B̄j, j ∈ {1, 2}. Let λ denote implicit random variables which are assumed to affect the interpretation of a conceptual combination modelled by the random variables A1, A2, B1, B2.

Let x1 = f(A1, λ), x2 = f(A2, λ), y1 = g(B1, λ), y2 = g(B2, λ), X = Y = 1. Using inequality A2 (25):

−1 ≤ f(A1, λ)g(B1, λ) − f(A1, λ)g(B2, λ) + f(A2, λ)g(B1, λ) + f(A2, λ)g(B2, λ) − f(A2, λ) − g(B1, λ) ≤ 0.

When compositionality is assumed (equation (9)), the previous can be rewritten as:

−1 ≤ Pr(A1, B1|λ) − Pr(A1, B2|λ) + Pr(A2, B1|λ) + Pr(A2, B2|λ) − Pr(A2|λ) − Pr(B1|λ) ≤ 0,

which can be expressed in terms of expectation values, where E(Ai, Bj) = Pr(Ai, Bj|λ) + Pr(Āi, B̄j|λ) − Pr(Ai, B̄j|λ) − Pr(Āi, Bj|λ) is the expectation value of the product of outcomes Ai and Bj. The CH-A2 inequality allows other systems of inequalities to be derived, depending on the initial choice of values x1, x2, y1, y2. This gives rise to the following expression of the CHSH inequality which is used in (Cereceda, 2000):

−2 ≤ E(A2, B1) − E(A2, B2) + E(A1, B1) + E(A1, B2) ≤ 2.   (37)

Similarly, the final two CHSH inequalities can be derived:

−2 ≤ E(A1, B2) − E(A1, B1) + E(A2, B2) + E(A2, B1) ≤ 2   (38)
−2 ≤ E(A2, B2) − E(A2, B1) + E(A1, B2) + E(A1, B1) ≤ 2.   (39)

B The CHSH inequality and existence of the joint probability distribution

Let A1, A2, B1, B2 be random variables ranging over {+1, −1}. The CHSH inequality has the form:

−2 ≤ E(A2, B1) − E(A2, B2) + E(A1, B1) + E(A1, B2) ≤ 2.

The heart of the inequality can be seen as a contrast of expectations: CHSH = (E(A1, B1) + E(A1, B2)) + (E(A2, B1) − E(A2, B2)). Table 2 depicts a model of all possible interpretations across the senses which is the basis of the joint distribution Pr(A1, A2, B1, B2). The table also shows the value X = (A1B1 + A1B2) + (A2B1 − A2B2), which computes the product of outcomes. Each row of Table 2 yields X = ±2, and so the table shows that for the joint probability distribution to exist the CHSH value must lie between −2 and 2.
Figure 1: (a) Free association probabilities for the word "bat", and (b) the concept BAT represented in a two-dimensional vector space of senses.

Associate | Probability
ball | 0.25
cave | 0.13
vampire | 0.07
fly | 0.06
night | 0.06
baseball | 0.05
bird | 0.04
blind | 0.04
animal | 0.02
... | ...

Upon examination of this table, we can see that these probabilities represent two clear senses for the word "bat". Such vector space representations are examples of how concepts can be modelled geometrically, which corresponds to the middle, or conceptual space, layer (Gärdenfors, 2000).

Figure 2: (a) Free association probabilities for the word "boxer". (b) The concept BOXER represented in a two-dimensional vector space.

Standard probabilistic reasoning suggests that if two ambiguous concepts A and B have behavior that can be considered as compositional, then it should be possible to describe this behavior in terms of four dichotomous random variables, {A1, A2} and {B1, B2}, ranging over two values {1, 0}. The numbers 1 (dominant) and 2 (subordinate) correspond to the senses attributed to the respective concepts A and B. However, if a human subject is first shown the word "vampire" and subsequently asked to interpret the compound BOXER BAT, then they may be biased towards giving an animal interpretation of BAT. This suggests a minimal natural extension where A1 = 1 represents a situation where the dominant sense of concept A was first primed and concept A was indeed subsequently interpreted in that sense by the human subject. Conversely, A1 = 0 represents the case where the dominant sense of concept A was primed but A was not interpreted in that sense.

Figure 4: Example experimental structure for a trial. Non-word trials followed a similar structure, with primes in Phase 1 and/or Phase 2 replaced with non-words. The sequence of squares moving from left to right shows the experimental flow, with each square a representation of the screen shown to a participant. Note: the figure does not show the exact text given to participants, and stimuli are not to scale.

Figure 5: Mean number of interpretations (consistent or inconsistent with the primes) by prime order (overall, same prime order, reverse prime order).

Figure 6: Mean response time for producing interpretations (consistent or inconsistent with the primes) by prime order (overall, same prime order, reverse prime order). (a) Mean response times (ms) before analysis (N = 65). (b) Mean response times (ms) used in the ANOVA (N = 51).
Combination | Prime 1 (A1) | Prime 2 (A2) | Prime 3 (B1) | Prime 4 (B2) | CHSH [95% CI] | Fine (%) | N
boxer bat | dog | fighter | ball | vampire | 0.74 [0.36, 1.75] | N (22.5) | 64
bank log* | money | river | journal | tree | 2.13 [2, 2.43] | Y (99.8) | 65
apple chip* | banana | computer | potato | circuit | 2.11 [2, 2.40] | Y (100) | 65
stock tick* | shares | cow | mark | flea | N/A | N/A | 64
seal pack* | walrus | envelop | leader | suitcase | 2.14 [2, 2.58] | Y (99.5) | 64
spring plant* | summer | coil | leaf | factory | 2.02 [1.86, 2.52] | Y (100) | 64
poker spade* | card | fire | ace | shovel | 2.15 [2, 2.57] | Y (87.7) | 65
slug duck | snail | punch | quack | dodge | 1.83 [1.07, 2.63] | N (56.5) | 63
club bar* | member | golf | pub | handle | 2.28 [2.01, 2.75] | Y (99.9) | 64
web bug* | spider | internet | beetle | computer | 2 [2, 2] | Y (99.9) | 63
table file | chair | chart | nail | folder | 0.33 [0.31, 1.51] | N (4.1) | 63
match bowl | flame | contest | disk | throw | 1.75 [1.5, 2.43] | N (83.1) | 64
net cap | gain | volleyball | limit | hat | 1.86 [1.66, 2.53] | N (62.4) | 65
stag yarn | party | deer | story | wool | 1.77 [0.96, 2.68] | N (77.3) | 61
mole pen | dig | face | pig | ink | 1.18 [0.83, 2.16] | N (48.5) | 63
battery charge* | car | assault | volt | prosecute | 2.01 [2, 2.43] | Y (97.2) | 63
count watch | number | dracula | time | look | 1.40 [1.12, 2.32] | N (43.8) | 65
bill scale | phone | pelican | weight | fish | 1.63 [1.43, 2.39] | N (47.9) | 64
rock strike* | stone | music | hit | union | 2.01 [2, 2.42] | Y (100) | 64
port vessel | harbour | wine | ship | bottle | 1.56 [1.35, 2.30] | N (49.6) | 65
crane hatch* | lift | bird | door | egg | 1.92 [1.77, 2.58] | Y (99.9) | 63
toast gag | jam | speech | choke | joke | 1.23 [0.63, 2.16] | N (21) | 63
star suit | moon | movie | vest | law | 1.18 [0.66, 2.11] | N (39.3) | 62
fan post* | football | cool | mail | light | 2.13 [2, 2.40] | Y (94.8) | 63

Table 1: Results of the CHSH and Fine theorem analysis. Prime 1 (A1) and Prime 2 (A2) relate to concept A; Prime 3 (B1) and Prime 4 (B2) relate to concept B. The CHSH value is given with its confidence interval (α = 0.05); 'Y'/'N' denotes whether Fine's theorem was violated, together with the confidence (percentage) that the theorem was violated; N is the number of subjects. Combinations marked with * are those deemed non-compositional. 'N/A' denotes that it was not applicable to apply Fine's theorem.
C Marginal selectivity
A1 | A2 | B1 | B2 | X
1 | 1 | 1 | 1 | 2
1 | 1 | 1 | -1 | 2
1 | 1 | -1 | 1 | -2
1 | 1 | -1 | -1 | -2
1 | -1 | 1 | 1 | 2
1 | -1 | 1 | -1 | -2
1 | -1 | -1 | 1 | 2
1 | -1 | -1 | -1 | -2
-1 | 1 | 1 | 1 | -2
-1 | 1 | 1 | -1 | 2
-1 | 1 | -1 | 1 | -2
-1 | 1 | -1 | -1 | 2
-1 | -1 | 1 | 1 | -2
-1 | -1 | 1 | -1 | -2
-1 | -1 | -1 | 1 | 2
-1 | -1 | -1 | -1 | 2

Table 2: Expectation values of the CHSH inequality.
Combination | diff (A1) | diff (A2) | diff (B1) | diff (B2)
boxer bat | 0.175 (0.46, 0.50) | 0.140 (0.24, 0.62) | 0.338 (2.60, 0.11) | 0.158 (0.27, 0.60)
bank log | 0.055 (0, 1) | 0.092 (0.02, 0.88) | 0.338 (3.30, 0.07) | 0.257 (1.73, 0.19)
apple chip | 0.250 (2.77, 0.10) | 0.114 (0.09, 0.77) | 0.294 (2.99, 0.09) | 0.217 (0.78, 0.38)
stock tick | 0.163 (0.30, 0.59) | 0.085 (0.02, 0.89) | 0.488 (5.60, 0.02) | 0.386 (3.77, 0.05)
seal pack | 0.083 (0.01, 0.91) | 0.213 (0.77, 0.38) | 0.162 (0.38, 0.54) | 0.221 (0.78, 0.38)
spring plant | 0.294 (3.50, 0.06) | 0.133 (0.61, 0.44) | 0 (0, 1) | 0.173 (0.81, 0.37)
poker spade | 0.136 (0.26, 0.61) | 0.035 (0, 1) | 0 (0, 1) | 0.113 (0.09, 0.76)
slug duck | 0.096 (0.03, 0.86) | 0.153 (0.32, 0.57) | 0.133 (0.21, 0.65) | 0.026 (0, 1)
club bar | 0.133 (0.77, 0.41) | 0 (0, 1) | 0.125 (0.60, 0.44) | 0.138 (0.37, 0.55)
web bug | 0.210 (0.74, 0.39) | 0.067 (0, 1) | 0.296 (1.7, 0.19) | 0.153 (0.32, 0.57)
table file | 0.058 (0, 1) | 0.235 (0, 1) | 0.114 (0.09, 0.77) | 0.113 (0.09, 0.76)
match bowl | 0.137 (0.18, 0.67) | 0.250 (1.31, 0.25) | 0.075 (0.01, 0.94) | 0.022 (0, 1)
net cap | 0.035 (0, 1) | 0.092 (0.03, 0.86) | 0.059 (0, 1) | 0.175 (0.46, 0.50)
stag yarn | 0.375 (3.64, 0.06) | 0.219 (1.14, 0.26) | 0.104 (0.05, 0.82) | 0.045 (0, 1)
mole pen | 0.125 (0.29, 0.59) | 0.021 (0, 1) | 0.063 (0, 1) | 0.3 (1.9, 0.17)
battery charge | 0.067 (0, 1) | 0.048 (0, 1) | 0.117 (0.13, 0.72) | 0.120 (0.08, 0.78)
count watch | 0.195 (0.88, 0.35) | 0.063 (0, 1) | 0.011 (0, 1) | 0.063 (0, 1)
bill scale | 0.081 (0.01, 0.90) | 0.113 (0.09, 0.76) | 0.054 (0, 1) | 0.054 (0, 1)
rock strike | 0.188 (1.47, 0.23) | 0.117 (0.13, 0.71) | 0.313 (3.79, 0.05) | 0.013 (0, 1)
port vessel | 0.106 (0.07, 0.80) | 0.085 (0.02, 0.89) | 0.113 (0.09, 0.76) | 0.118 (0.20, 0.65)
crane hatch | 0.141 (0.45, 0.50) | 0.296 (1.7, 0.19) | 0.149 (0.39, 0.53) | 0.233 (0.93, 0.33)
toast gag | 0 (0, 1) | 0.008 (0, 1) | 0.018 (0, 1) | 0.026 (0, 1)
star suit | 0.308 (2.63, 0.10) | 0.163 (0.30, 0.59) | 0.054 (0, 1) | 0.058 (0, 1)
fan post | 0.35 (2.6, 0.11) | 0.125 (0.13, 0.72) | 0.025 (0, 1) | 0.188 (0.55, 0.46)

Table 3: Differences in marginal probabilities. A two-sided test of proportions (χ²-value, p-value) is applied to the differences (critical value = 3.84, α = 0.05). Marginal selectivity fails where the test statistic exceeds the critical value; this occurs for STOCK TICK.
In much the same way as the field of physics entered the realms of experimental testing with the work of Bell and Aspect, after decades of more philosophical debate as to the separability and completeness of the quantum formalism(Isham, 1995;Laloë, 2001).
Available at http://web.usf.edu/FreeAssociation/AppendixC/Matrices.A-B .
Acknowledgements

This project was supported in part by the Australian Research Council Discovery grants DP0773341 and DP1094974, and by the U.K. Engineering and Physical Sciences Research Council, grant number EP/F014708/2. Welcome support was also provided by the Marie Curie International Research Staff Exchange Scheme: Project 247590, "QONTEXT - Quantum Contextual Information Access and Retrieval". We thank Ehtibar Dzhafarov for his clarifications on selective influence.
Acacio De Barros, J. (2012). Joint probabilities and quantum cognition. arXiv:1206.6706v2 [physics.gen-ph].

Aerts, D. (2009). Quantum structure in cognition. Journal of Mathematical Psychology, 53(5), 314-348.

Aerts, D., Aerts, S., Broeckaert, J., & Gabora, L. (2000). The violation of Bell inequalities in the macroworld. Foundations of Physics, 30, 1378-1414.

Aerts, D., & Sozzo, S. (2011). Quantum structure in cognition: Why and how concepts are entangled. arXiv:1104.1322v1.

Aspect, A. (1999). Bell's inequality test: more ideal than ever. Nature, 398, 189-190.

Bruza, P., Busemeyer, J., & Gabora, L. (2009). Introduction to the special issue on quantum cognition. Journal of Mathematical Psychology, 53.

Bruza, P., Kitto, K., Ramm, B., Sitbon, L., Blomberg, S., & Song, D. (2011). Quantum-like non-separability of concept combinations, emergent associates and abduction. Logic Journal of the IGPL, 20(2), 445-457.

Busemeyer, J., & Bruza, P. (2012). Quantum cognition and decision. Cambridge University Press.

Busemeyer, J., Pothos, E., Franco, R., & Trueblood, J. (2011). A quantum theoretical explanation for probability judgment errors. Psychological Review, 118(2), 193-218.

Cereceda, J. (2000). Quantum mechanical probabilities and general probabilistic constraints for Einstein-Podolsky-Rosen-Bohm experiments. Foundations of Physics Letters, 13(5), 427-442.

Clauser, J., & Horne, M. (1974). Experimental consequences of objective local theories. Physical Review D, 10(2), 526-535.

Costello, F., & Keane, M. (1997). Polysemy in conceptual combination: Testing the constraint theory of combination. In Nineteenth Annual Conference of the Cognitive Science Society. Erlbaum.

Costello, F., & Keane, M. (2000). Efficient creativity: Constraint-guided conceptual combination. Cognitive Science, 24(2), 299-349.

Dzhafarov, E., & Kujala, J. (2012). Selectivity in probabilistic causality: Where psychology runs into quantum physics. Journal of Mathematical Psychology, 56(1), 54-63.

Efron, B., & Tibshirani, R. (1986). Bootstrap methods for standard errors, confidence intervals and other methods of statistical accuracy. Statistical Science, 1(1), 54-77.

Fine, A. (1982a). Hidden variables, joint probability and the Bell inequalities. Physical Review Letters, 48(5), 291-295.

Fine, A. (1982b). Joint distributions, quantum correlations and commuting observables. Journal of Mathematical Physics, 23(7), 1306-1310.

Fodor, J. (1998). Concepts: Where Cognitive Science Went Wrong. Oxford University Press.

Frixione, M., & Lieto, A. (2012). Representing concepts in formal ontologies: Compositionality vs. typicality effects. International Journal of Logic and Logic Philosophy, 21(4), 391-414.

Gagne, C. (2001). Relation and lexical priming during the interpretation of noun-noun combinations. Journal of Experimental Psychology, 27(1), 236-254.

Gagne, C., & Shoben, E. (1997). Influence of thematic relations on the comprehension of modifier-noun combinations. Journal of Experimental Psychology, 23, 71-87.

Gärdenfors, P. (2000). Conceptual Spaces: The Geometry of Thought. MIT Press.

Golshani, M., & Fahmi, A. (2001). Is Bell's locality condition necessary for the derivation of Bell's inequality? Annales de la Fondation Louis de Broglie, 26(4).

Hampton, J. (1997). Conceptual combination. In K. Lamberts & D. Shank (Eds.), Knowledge, concepts, and categories (pp. 133-160). MIT Press.

Isham, C. (1995). Lectures on quantum theory. Imperial College Press.

Khrennikov, A. (2010). Ubiquitous quantum structure. Springer.

Laloë, F. (2001). Do we really understand quantum mechanics? Strange correlations, paradoxes, and theorems. American Journal of Physics, 69(6), 655-701.

Maudlin, T. (1994). Quantum non-locality and relativity: Metaphysical intimations of modern physics (Vol. 13). Blackwell.

Medin, D., & Shoben, E. (1988). Context and structure in conceptual combination. Cognitive Psychology, 20, 158-190.

Mitchell, J., & Lapata, M. (2010). Composition in distributional models of semantics. Cognitive Science, 34, 1388-1429.

Murphy, G. (1988). Comprehending complex concepts. Cognitive Science, 12, 529-562.

Murphy, G. (2002). The big book of concepts. MIT Press.

Nelson, D., McEvoy, C., & Schreiber, T. (2004). The University of South Florida word association, rhyme and word fragment norms. Behavior Research Methods, Instruments & Computers, 36, 408-420.

Pitowsky, I. (1994). George Boole's 'Conditions of Possible Experience' and the Quantum Puzzle. The British Journal for the Philosophy of Science, 45(1), 95-125.

Ramm, B. (2000). Pet rocks, tame robots and desert fish: Developmental differences in the understanding of combined concepts. Unpublished Honours Thesis, University of Adelaide.

Ramm, B., & Halford, G. (2012). Novelty and processing demands in conceptual combination. Australian Journal of Psychology, 64(4), 199-208.
Conceptual combination during sentence comprehension: Evidence for compositional processes. D Swinney, T Love, M Walenski, E Smith, Psychological Science. 185Swinney, D., Love, T., Walenski, M., & Smith, E. (2007). Conceptual combination during sentence comprehension: Evidence for compositional processes. Psychological Science, 18 (5), 397-400.
University of Alberta norms of relative meaning frequency for 566 homographs. L Twilley, P Dixon, D Taylor, K Clark, Memory & Cognition. 221Twilley, L., Dixon, P., Taylor, D., & Clark, K. (1994). University of Alberta norms of relative meaning frequency for 566 homographs. Memory & Cognition, 22 (1), 111-126.
Compound nominals, context and compositionality. D Weiskopf, Synthese. 156Weiskopf, D. (2007). Compound nominals, context and compositionality. Synthese, 156 , 161-204.
Similarity and emergence in conceptual combination. M Wilkenfeld, T Ward, Journal of Memory and Language. 45Wilkenfeld, M., & Ward, T. (2001). Similarity and emergence in conceptual combination. Journal of Memory and Language, 45 , 21-38.
Construal and similarity in conceptual combination. E J Wisniewski, Journal of Memory and Language. 353Wisniewski, E. J. (1996). Construal and similarity in conceptual combination. Journal of Memory and Language, 35 (3), 435-453.
−1 ≤ Pr(A1, B1|λ) − Pr(A1, B2|λ) + Pr(A2, B1|λ) + Pr(A2, B2|λ) − Pr(A2|λ) − Pr(B1|λ) ≤ 0 (26)
−1 ≤ Pr(Ā1, B̄1|λ) − Pr(Ā1, B̄2|λ) + Pr(Ā2, B̄1|λ) + Pr(Ā2, B̄2|λ) − Pr(Ā2|λ) − Pr(B̄1|λ) ≤ 0 (27)
−1 ≤ Pr(A1, B̄1|λ) − Pr(A1, B̄2|λ) + Pr(A2, B̄1|λ) + Pr(A2, B̄2|λ) − Pr(A2|λ) − Pr(B̄1|λ) ≤ 0 (28)
−1 ≤ Pr(Ā1, B1|λ) − Pr(Ā1, B2|λ) + Pr(Ā2, B1|λ) + Pr(Ā2, B2|λ) − Pr(Ā2|λ) − Pr(B1|λ) ≤ 0 (29)
Multiply (28) by −1:
1 ≥ −Pr(A1, B̄1|λ) + Pr(A1, B̄2|λ) − Pr(A2, B̄1|λ) − Pr(A2, B̄2|λ) + Pr(A2|λ) + Pr(B̄1|λ) ≥ 0 (30)
Multiply (29) by −1:
1 ≥ −Pr(Ā1, B1|λ) + Pr(Ā1, B2|λ) − Pr(Ā2, B1|λ) − Pr(Ā2, B2|λ) + Pr(Ā2|λ) + Pr(B1|λ) ≥ 0 (31)
Add (26), (27), (30), (31):
−2 ≤ Pr(A1, B1|λ) − Pr(A1, B2|λ) + Pr(A2, B1|λ) + Pr(A2, B2|λ) − Pr(A2|λ) − Pr(B1|λ)
   + Pr(Ā1, B̄1|λ) − Pr(Ā1, B̄2|λ) + Pr(Ā2, B̄1|λ) + Pr(Ā2, B̄2|λ) − Pr(Ā2|λ) − Pr(B̄1|λ)
   − Pr(A1, B̄1|λ) + Pr(A1, B̄2|λ) − Pr(A2, B̄1|λ) − Pr(A2, B̄2|λ) + Pr(A2|λ) + Pr(B̄1|λ)
   − Pr(Ā1, B1|λ) + Pr(Ā1, B2|λ) − Pr(Ā2, B1|λ) − Pr(Ā2, B2|λ) + Pr(Ā2|λ) + Pr(B1|λ) ≤ 2
The probabilities conditioned on individual concepts all cancel out, therefore:
−2 ≤ Pr(A1, B1|λ) + Pr(Ā1, B̄1|λ) − Pr(A1, B̄1|λ) − Pr(Ā1, B1|λ)
   − Pr(A1, B2|λ) − Pr(Ā1, B̄2|λ) + Pr(A1, B̄2|λ) + Pr(Ā1, B2|λ)
   + Pr(A2, B1|λ) + Pr(Ā2, B̄1|λ) − Pr(A2, B̄1|λ) − Pr(Ā2, B1|λ)
   + Pr(A2, B2|λ) + Pr(Ā2, B̄2|λ) − Pr(A2, B̄2|λ) − Pr(Ā2, B2|λ) ≤ 2
| [] |
[
"DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation",
"DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation"
] | [
"Yuhang Lai ",
"Chengxi Li ",
"Yiming Wang ",
"Tianyi Zhang ",
"Ruiqi Zhong ",
"Luke Zettlemoyer ",
"Scott Wen-Tau Yih ",
"Daniel Fried ",
"Sida Wang ",
"Tao Yu "
] | [] | [] | We introduce DS-1000, a code generation benchmark with a thousand data science problems spanning seven Python libraries, such as NumPy and Pandas. Compared to prior works, DS-1000 incorporates three core features. First, our problems reflect diverse, realistic, and practical use cases since we collected them from Stack-Overflow. Second, our automatic evaluation is highly specific (reliable) -across all Codex-002predicted solutions that our evaluation accept, only 1.8% of them are incorrect; we achieve this with multi-criteria metrics, checking both functional correctness by running test cases and surface-form constraints by restricting API usages or keywords. Finally, we proactively defend against memorization by slightly modifying our problems to be different from the original Stack-Overflow source; consequently, models cannot answer them correctly by memorizing the solutions from pre-training. The current best public system (Codex-002) achieves 43.3% accuracy, leaving ample room for improvement. We release our benchmark at https://ds1000-code-gen. github.io. | 10.48550/arxiv.2211.11501 | [
"https://export.arxiv.org/pdf/2211.11501v1.pdf"
] | 253,734,939 | 2211.11501 | 8a4fc5f00cd4aca61e148e46a2125c3a406719f1 |
DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation
Yuhang Lai
Chengxi Li
Yiming Wang
Tianyi Zhang
Ruiqi Zhong
Luke Zettlemoyer
Scott Wen-Tau Yih
Daniel Fried
Sida Wang
Tao Yu
DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation
We introduce DS-1000, a code generation benchmark with a thousand data science problems spanning seven Python libraries, such as NumPy and Pandas. Compared to prior works, DS-1000 incorporates three core features. First, our problems reflect diverse, realistic, and practical use cases since we collected them from StackOverflow. Second, our automatic evaluation is highly specific (reliable): across all Codex-002-predicted solutions that our evaluation accepts, only 1.8% of them are incorrect; we achieve this with multi-criteria metrics, checking both functional correctness by running test cases and surface-form constraints by restricting API usages or keywords. Finally, we proactively defend against memorization by slightly modifying our problems to be different from the original StackOverflow source; consequently, models cannot answer them correctly by memorizing the solutions from pre-training. The current best public system (Codex-002) achieves 43.3% accuracy, leaving ample room for improvement. We release our benchmark at https://ds1000-code-gen.github.io.
Introduction
Data science is important in many areas (Romero & Ventura, 2013; Bolyen et al., 2019; Faghmous & Kumar, 2014), but requires programming proficiency in specialized libraries, thus posing substantial barriers to lay users. Fortunately, these barriers could potentially be reduced by pre-trained code generation models: for example, Codex (Chen et al., 2021a) can complete small Python snippets with non-trivial accuracy and AlphaCode (Li et al., 2022) can tackle difficult competitive programming problems. We anticipate that these barriers will diminish if the community can make solid progress in applying these models to data science problems.
* Equal contribution. Author ordering determined by alphabetical order. 1 The University of Hong Kong 2 Peking University 3 Stanford University 4 UC Berkeley 5 University of Washington 6 Meta AI 7 Carnegie Mellon University. Correspondence to: Tao Yu <tyu@cs.hku.hk>.
However, we currently lack a benchmark that 1) focuses on everyday data science applications, 2) includes naturalistic intents and contexts, and 3) has a reliable execution-based evaluation metric. Most of the existing datasets with reliable test cases (Hendrycks et al., 2021; Chen et al., 2021a) focus on competition or interview-style programming problems; they measure algorithmic understanding but do not target real-world usage. Also, as represented by e.g., user problems on StackOverflow, users' data science coding problems usually have diverse contexts including their incorrect code, error messages, and input-output examples, which cannot be found in most prior data science relevant code generation benchmarks (Yin et al., 2018; Hendrycks et al., 2021; Chandel et al., 2022b; Chen et al., 2021a). Moreover, most of these benchmarks solely rely on surface-form metrics such as BLEU or CodeBLEU (Yin et al., 2018; Agashe et al., 2019; Chen et al., 2021b). These metrics diverge from the programmer's intent, increasingly so as model capability improves (Zhong et al., 2020). To our knowledge, no existing benchmarks contain both naturally occurring problems with diverse contexts and reliable evaluation metrics.
First, we collected naturally occurring problems from StackOverflow, manually scored their representativeness and usefulness, and curated a subset of them to create our benchmark. While inputs in existing code generation datasets are either highly structured (problems or code context) or restricted in scope, our natural problems are diverse in content and format. For example, users might search for more efficient code implementations (Figure 1), provide incorrect code with an error message and ask for bug fixes (Figure 13), inquire about specific API usage (Figure 14), or ask for code that implements functionality they specify with input-output examples (Figure 1). These problems better reflect real-world applications and open up new modeling challenges, which have been understudied in existing code generation benchmarks.
Figure 1: An example problem in DS-1000. The model needs to fill in the code into "[insert]" in the prompt on the left; the code will then be executed to pass the multi-criteria automatic evaluation, which includes the test cases and the surface-form constraints; a reference solution is provided at the bottom left.
Second, it is challenging to evaluate program solutions to natural and diverse problems reliably. Unlike competitionstyle problems, natural problems might lack executable contexts and test cases, allow multiple solutions, depend on external libraries, etc. To address these challenges, five of the authors of this paper, all proficient in data science and experts in Python, hand-adapted the original problems by writing executable code contexts, rewriting problems to be specific enough to be testable, and implementing automatic multi-criteria execution-based evaluation using carefully written and reviewed test cases and constraints that check functional correctness and surface-form constraints. On program solutions predicted by Codex-002, we find that only 1.8% of the predicted programs passing our evaluation are incorrect (false discovery rate), indicating that our evaluation is reliable.
Third, one potential concern for adapting public problems is that the models might simply memorize the corresponding solution during pre-training time (Carlini et al., 2021a). We show in Section 2.4 that this can indeed happen: while Codex achieves 72.5% accuracy on the popular numpy-100 dataset, the accuracy drastically drops to 23.6% after perturbing the problems without increasing their difficulty. Therefore, while building DS-1000, we proactively took measures against memorization by perturbing each problem. Figure 1 shows an example DS-1000 problem, its reference solution, and an expert-written automatic multi-criteria evaluation. To answer the problem, the model needs to fill in the solution; to pass our automatic evaluation, it needs to 1) return the correct output and 2) avoid inefficient implementations that use for-loops.
We use DS-1000 to evaluate several popular code generation models, including Codex (Chen et al., 2021a), CodeGen (Nijkamp et al., 2022), and InCoder (Fried et al., 2022). We found model performance ranges from 7.4% to 43.3%, with the Codex-002 model being the best. This implies that these models have the potential to reduce the barrier for data analysis, yet still have large room for improvement.
Benchmark Construction
Our pipeline for building DS-1000 contains five stages, illustrated in Figure 2 and described below. 1) We scraped and selected high-quality problems from StackOverflow (Section 2.1). 2) We rewrote the problem and the reference solution so that the problem is unambiguous and the reference solution is executable (Section 2.2). 3) We implemented a multi-criteria automatic evaluation for each problem, which includes test cases and surface-form constraints (Section 2.3). 4) We performed a pilot study which shows that Codex can answer problems by memorizing the pre-training corpus, and proactively took measures to prevent this by perturbing the problems and their reference solutions in DS-1000 (Section 2.4). 5) We improved our multi-criteria evaluation by requiring it to reject a small set of sample predictions that we considered incorrect via manual review (Section 2.5), and then calculated the false discovery rate of our metric on a larger set of sample predictions. To reliably carry out this data collection procedure, five authors who are computer science students and familiar with data science spent a total of about 1200 hours constructing DS-1000 (including steps from problem selection to quality review).
Figure 2: The pipeline for building DS-1000. See the start of Section 2 for a detailed description.
Problem Selection
Sourcing Popular StackOverflow Problems. To obtain natural and high-quality problems, we scraped data from StackOverflow under each library tag (e.g., "NumPy"). To select popular problems, we first removed duplicates and selected problems with at least 1 vote, 1000 views, that had an accepted answer. Next, we ranked problems based on votes and views and calibrated these statistics based on the time a problem was created since older problems naturally have more views and votes. We refer readers to Appendix A.1 for more details. Among the filtered problems, we randomly sampled an initial pool containing 4500 problems (1000 for NumPy, Pandas, and Matplotlib, 500 for Scikit-learn and SciPy, 250 for TensorFlow, and 250 for PyTorch).
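To make this selection step concrete, a minimal sketch of the kind of filtering and popularity calibration described above is shown below; the column names (votes, views, has_accepted, age_days) and the per-day calibration formula are illustrative assumptions, not our exact implementation.

```python
# A minimal sketch of the popularity filter described above; the schema and the
# exact calibration formula are illustrative assumptions.
import pandas as pd

def select_candidate_problems(posts: pd.DataFrame) -> pd.DataFrame:
    """posts is assumed to have columns: id, votes, views, has_accepted, age_days."""
    posts = posts.drop_duplicates(subset="id")
    # Quality filters from the paper: at least 1 vote, 1000 views, and an accepted answer.
    kept = posts[(posts.votes >= 1) & (posts.views >= 1000) & posts.has_accepted]
    # Calibrate raw popularity by post age so that older posts do not dominate
    # (one simple choice; the paper does not spell out the formula).
    kept = kept.assign(
        votes_per_day=kept.votes / kept.age_days.clip(lower=1),
        views_per_day=kept.views / kept.age_days.clip(lower=1),
    )
    # Rank by the calibrated statistics, most popular first.
    return kept.sort_values(["votes_per_day", "views_per_day"], ascending=False)
```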
Filtering Suitable Problems. To select problems from the above pool for our benchmark, our annotators scored each problem according to the following rubric: whether a problem a) contains input-output examples in the problem, b) is difficult to predict the solution for models according to the annotators' judgment, c) is practically useful, d) has a clear description, and e) is feasible to evaluate the solution. We aggregated these scores, reranked the candidate problems, and incorporated the top-ranked ones to create DS-1000. We ended up with 451 unique StackOverflow problems. More than half of the original StackOverflow problems were filtered out because they ask for an explanation for an algorithm or general content (see Appendix A.1).
Controlling Library Version.
Data science libraries are continuously evolving. As a result, the semantics of the problem is determined not only by the language description but also by the software environment (e.g., library version). For example, the same code snippet, tf.math.reciprocal(A), is only valid in the newer version of TensorFlow. We fixed the evaluation environment to include the latest versions of libraries that can be installed with Python 3.7.10 and present the detailed documentation in Appendix A.1.
Rewriting Problems and Reference Solutions
Creating Executable Context.
To implement an execution-based evaluation for each natural language problem, we needed to write an executable context. We first added package imports and defined the variables described in the problem. For example, in Figure 2, we imported the Pandas package and created the dataframe described in the problem as part of the context. Second, we needed to specify the desired behavior of the target program to be predicted. For example, in Figure 2, a code generation model can infer from the context that the resulting dataframe should be named as result, rather than output.
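The sketch below illustrates the shape of such an executable context, loosely following the marker style used in our prompts ("[insert]", BEGIN/END SOLUTION, <code> blocks); the concrete problem, variable names, and wording are illustrative.

```python
# A sketch of an executable code context; the problem text is illustrative.
prompt = '''
Problem: I have a dataframe and would like the inverse of every column,
stored in new columns named with an "inv_" prefix.

A:
<code>
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
</code>
BEGIN SOLUTION
<code>
[insert]
</code>
END SOLUTION
<code>
print(result)
</code>
'''
# The right context (print(result)) signals that the completion must define a
# variable named `result`, which the test code then inspects.
```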
Rewriting Matplotlib Problems.
Many Matplotlib problems on StackOverflow clarify their problems with example figures, which, however, cannot be encoded by current pre-trained code models. Therefore, we rewrote the StackOverflow problems in symbols (i.e., code and text) and adopted a different format from other libraries (see Figure 15).
Collecting Reference Solutions. Finally, we obtained the reference solution for each problem from multiple high-vote replies, edited all reference solutions to be executable given the context we provided, and fixed errors whenever we noticed them (e.g., Figure 11). Even though we did not use the reference solutions for evaluation, we provide them in DS-1000 to facilitate future research.

Table 1: The perturbation categories along with examples. "Surface" perturbations do not change the reference solution, while "Semantic" perturbations do.
Perturbation        Categories                                               Example
Surface             Convert to completing function                           Figure 16, change format of code context
                    Paraphrase the description of the problem                Figure 17, express the same problem in different words
                    Change the example input and output                      Figure 18, replace this example with a longer one
Semantic            Replace keywords with analogy words                      Figure 19, replace "inv" with "exp"
                    Change the required index                                Figure 20, need the specified rows and columns
                    Reverse the order of the list, string or dataframe       Figure 21, reverse the needed string
                    Change the type of the required result                   Figure 22, change the DataFrame to a Series
Difficult Rewrite   Combining several surface and semantic perturbations     Figure 23, change examples and replace "highest" with "lowest"
                    Digging more perturbations that increase the difficulty  Figure 24, hypothesis testing
Implementing Multi-Criteria Evaluations
Our automatic evaluation is multi-criteria, checking both functional correctness and surface-form constraints.
Functional Correctness. To evaluate functional correctness, we constructed test cases by converting the input-output examples provided in the StackOverflow problem; then the expert annotators manually wrote additional test cases to improve the evaluation. To evaluate a predicted program, we execute it on the test inputs and compare the outputs with the ground truth.
However, checking the exact equivalence of outputs can inadvertently reject correct programs. Many problems involve floating point arithmetic, and many return values are acceptable since they are close to the ground truth answer, but they are not exactly equal. Some problems require random outputs, e.g., generating 100 samples from a distribution, and even executing the reference solution twice can lead to different results. Many problems do not fully specify all the parameters, e.g., the color scheme for the output figure in the Matplotlib library, or the hyper-parameters of a learning algorithm in Scikit-learn; therefore, programs with different parameters can satisfy the requirement, returning values that are different. In all these cases, we relied on the best judgment of our expert annotators to implement the metric for each problem, which sometimes involves complicated techniques, such as using statistical tests to handle randomness. See more examples in Appendix A.2.
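As a rough illustration (not the benchmark's actual test harness), a tolerant output comparison of this kind can be sketched as follows; the type dispatch and the tolerance value are assumptions made for the example.

```python
# A sketch of a tolerance-aware output check: exact equality is replaced by a
# numerical tolerance, and DataFrame outputs use pandas' own testing helper.
import numpy as np
import pandas as pd

def outputs_match(predicted, expected, atol=1e-6):
    try:
        if isinstance(expected, pd.DataFrame):
            pd.testing.assert_frame_equal(predicted, expected, check_dtype=False)
            return True
        if isinstance(expected, np.ndarray):
            return np.allclose(predicted, expected, atol=atol)
        return bool(predicted == expected)
    except Exception:
        # shape mismatches, wrong types, etc. count as failures
        return False
```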
Surface-Form Constraints. Functional correctness alone is insufficient. For example, vectorized operations can be expanded using for-loops, which, however, are inefficient and do not meet the requirement of the problem. Therefore, we introduced additional surface-form constraints that require the presence/absence of specific APIs for keywords. Notably, such a check is different from the standard surface-form metrics such as CodeBLEU (Ren et al., 2020), which requires the whole model prediction to be uniformly similar to a reference solution; instead, DS-1000 precisely targets small but important parts of surface form.
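A minimal sketch of such a surface-form check, assuming the constraint is "no explicit Python loops", is shown below; real problems attach different API or keyword constraints.

```python
# A sketch of a surface-form constraint check: reject predictions that contain
# explicit loops, regardless of whether they return the right answer.
import ast

def violates_no_loop_constraint(program: str) -> bool:
    """Return True if the predicted program contains a for- or while-loop."""
    try:
        tree = ast.parse(program)
    except SyntaxError:
        return True  # unparsable code is rejected anyway
    return any(isinstance(node, (ast.For, ast.While)) for node in ast.walk(tree))
```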
Perturbation to Defend Against Memorization
Many models are pre-trained on web text and hence memorize its content (Elangovan et al., 2021; Carlini et al., 2021b); therefore, they might answer our problems correctly by simply recalling the solutions seen during pre-training if they were trained on StackOverflow or derivative sites. We demonstrate this effect on numpy-100, 1 a problem set of 100 NumPy problems with solutions that are copied several thousand times on GitHub. When prompted to answer a selected subset of 20 problems, Codex-002 achieves 72.5% pass@1 accuracy. 2
1 https://github.com/rougier/numpy-100
2 The fraction of Codex-002 samples that are correct.
However, if the model truly knows how to solve those problems, it should be able to solve similar problems at the same level of difficulty. This motivates us to perturb the problems in two ways: surface perturbations and semantic perturbations. For surface perturbations, we paraphrased the problem or modified the code context in the problem, but the reference solution should stay the same after the perturbation; for example, changing from "Create a 5x5 matrix . . . " to "I need a matrix sized 5x5 . . . ". For semantic perturbations, we changed the semantics of the reference solution without changing its difficulty ; for example, asking for "min" instead of "max" in the problem. We provide more detailed categories in Table 1. In all of these cases, the difficulty of the problem does not change for humans.
Table 2: The performance of Codex-002 on numpy-100.
  Origin   Surface   Semantic   Avg. Perturbation
  72.5     50.8      23.6       40.6

We manually applied these perturbations to numpy-100 and show the result in Table 2. Although the difficulty level remains the same to human users, the performance of Codex-002 drops to 40.6% after perturbation (50.8% on surface perturbations and 23.6% on semantic perturbations). Furthermore, in 36% of the cases, the model still predicted the original answer of the problem after the semantic perturbation, implying that the model is solving the original problems by memorizing their corresponding solutions. Therefore, we could significantly overestimate model performance if we test them on problems directly taken from the web (see Appendix B for more details).

Therefore, to proactively prevent memorization, we applied the above two perturbations to DS-1000 problems. Perturbation is a labor-intensive process. Even for a simple perturbation from min to max, our annotators needed to edit all mentions of min, smallest, minimum to make the problem coherent, and updated the code context, reference solution, and our evaluation metric accordingly.
Finally, to make DS-1000 more challenging, we additionally introduced several semantic perturbations that increase the difficulty on purpose ("Difficult Rewrite" in Table 1).
Quality Assurance
To ensure the quality of our benchmark, each problem, reference solution, and automatic multi-criteria evaluation were reviewed by at least three expert annotators familiar with the library. Additionally, we "red teamed" our automatic evaluation by requiring it to reject all programs known to be incorrect, e.g., solutions to semantically perturbed problems (see Figure 2). After the quality review, we also quantitatively measured the evaluation quality by examining whether our multi-criteria automatic metric can reject incorrect Codex-002 predictions (more details in Section 3).
Dataset Statistics
We provide detailed dataset statistics in Table 3. DS-1000 contains 1000 problems originating from 451 unique StackOverflow problems. To defend against potential memorization, more than half of the DS-1000 problems are modified from the original StackOverflow problems (Section 2.4); they include 152 surface perturbations, 235 semantic perturbations, and 162 difficult rewrites.
DS-1000 has carefully designed testing methods, checking both execution semantics and surface-form constraints. For each problem, there are 1.6 test cases (manually annotated corner test cases) on average, and 19.4% of them are accompanied by surface-form constraints. The average number of words per problem in DS-1000 is 140. On average, the reference solution contains 3.6 lines of code. Table 3 shows the library breakdown statistics: most libraries have a similar distribution except Matplotlib, because we adopted a different problem format due to its multimodal nature.
Table 3: Detailed statistics of DS-1000.
Table 4: Comparison of DS-1000 to other benchmarks. The first three benchmarks target general Python usage and the next three involve data science code generation. DS-1000 adapts realistic problems from StackOverflow and checks both execution semantics and surface-form constraints.
Table 4 compares DS-1000 to other datasets. Notably, the average number of words per problem in DS-1000 is much larger than in other data science related datasets (e.g., DSP, Chandel et al. 2022a and CoNaLa, Yin et al. 2018). More importantly, the problems in DS-1000 represent more diverse and naturalistic intent and context formats that cannot be seen in any other datasets. Unlike generic Python code generation benchmarks (MBPP, Austin et al. 2021 and HumanEval, Chen et al. 2021a), we note that data science code generation benchmarks have fewer test cases since the annotators need to define program inputs with complex objects such as square matrices, classifiers, or dataframes rather than simple primitives, such as floats or lists. Nevertheless, as we will show next, even a few test cases suffice for DS-1000.
We evaluate our multi-criteria automatic metric by checking whether it can reject incorrect solutions. We randomly sampled 10 problems from each library and sampled 40 predictions from Codex-002 for each problem (2800 problem-code examples in total). 3 We run our automatic metric on all the sample predictions, review the predictions manually, calculate how often they disagree, and report the following four quantities:
• Sample Level False Discovery Rate: among all predicted samples that pass our automatic evaluation, 1.8% of them are incorrect according to our annotator.
• Sample Level False Omission Rate: among all predicted samples that do not pass our automatic evaluation, 0.5% of them are correct according to our annotator.
• Problem Level False Positive Percentage: among all problems, 5.7% of the problems contain at least one incorrect sample prediction that passes our automatic metric.
• Problem Level False Negative Percentage: among all problems, 5.7% of the problems (coincidentally the same percentage as above) contain at least one correct sample prediction that fails to pass our automatic metric.
Generally, problem-level measures are especially stringent since they require correctly judging all predictions among the 40 sample predictions. While an apple-to-apple comparison with other datasets is not possible due to the difference in the underlying model and benchmark construction method (as a point of reference, Li et al. (2022) find the Problem Level False Positive Percentage to be 60% on APPS (Hendrycks et al., 2021)), these measures reflect that DS-1000 is reliable. 4
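For clarity, the four quantities above can be computed from per-sample labels as sketched below; the field names and data layout are illustrative assumptions, with `passed` denoting the automatic metric's verdict and `correct` denoting the human judgment.

```python
# A sketch of computing the sample- and problem-level agreement statistics.
from collections import defaultdict

def agreement_stats(samples):
    """samples: iterable of (problem_id, passed, correct) triples. Returns fractions in [0, 1]."""
    fd = fo = passed_n = failed_n = 0
    per_problem = defaultdict(lambda: {"false_pos": False, "false_neg": False})
    for pid, passed, correct in samples:
        if passed:
            passed_n += 1
            if not correct:
                fd += 1
                per_problem[pid]["false_pos"] = True
        else:
            failed_n += 1
            if correct:
                fo += 1
                per_problem[pid]["false_neg"] = True
    n_problems = max(len(per_problem), 1)
    return {
        "sample_false_discovery_rate": fd / max(passed_n, 1),
        "sample_false_omission_rate": fo / max(failed_n, 1),
        "problem_false_positive_frac": sum(v["false_pos"] for v in per_problem.values()) / n_problems,
        "problem_false_negative_frac": sum(v["false_neg"] for v in per_problem.values()) / n_problems,
    }
```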
Benchmarking State-of-the-Art Models
We used DS-1000 to benchmark five pre-trained code models from three different families. The best model, Codex-002 with the Insertion format, achieves 43.3% accuracy, indicating room for improvement. We also show the results on the perturbed and unperturbed examples in Section 4.4.
Prompt Format
We provide an official prompt format in DS-1000 because it significantly impacts the performance of pre-trained language models (Zhao et al., 2021). Figure 1 shows an example: each prompt starts with a natural language description and then provides a code context; the code context uses HTML-like markers to indicate the location of missing code that a model needs to fill in and provides both left and the right context to the missing code pieces.
We decide to use infilling as our official format because the right context is important to specify the behavior of the program predictions (e.g., the variable name for the result). More broadly, given that 1) infilling is an important functionality for real-world programming and 2) there is a growing trend in pre-training with the right context (Aghajanyan et al., 2022; Fried et al., 2022; Bavarian et al., 2022; Tay et al., 2022), we expect more future pre-trained models to perform infilling. On the other hand, given that many current language models trained on code are not yet capable of infilling, we also provide an official prompt that transfers the right context information into the left context (Figures 25 and 26). Nevertheless, despite our best effort to design the prompts for left-to-right models, they still lag behind models with infilling capabilities (Section 4.3). We conjecture that infilling models are inherently more effective at utilizing the right context information. Finally, we only have Completion format for Matplotlib problems because Matplotlib provides global access to the current figure so the right context is not necessary.
From now on, we refer to the infilling prompt format as Insertion format and the left-context-only format as Completion format.
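The following sketch contrasts the two formats on a toy context; the exact marker strings and the wording of the restated instruction are illustrative, not the official prompt templates.

```python
# Insertion format: the model fills in code at "[insert]" between a left and a
# right context. Completion format: the information carried by the right
# context (e.g., the name of the result variable) is restated as an instruction,
# and the model only sees the left context.
left_context = 'import pandas as pd\n\ndf = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})\n'
right_context = "print(result)\n"

insertion_prompt = left_context + "[insert]\n" + right_context

completion_prompt = (
    left_context
    + "# Put the answer in a variable named `result`.\n"  # right context, restated
    + "### BEGIN SOLUTION\n"
)
```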
Experimental Setup
Models. We experiment with three families of pre-trained models: Codex, InCoder (Fried et al., 2022), and CodeGen (Nijkamp et al., 2022). For the Codex models, we experiment with codex-davinci-002 (Codex-002), codex-davinci-001 (Codex-001), and codex-cushman-001 (Codex-Cushman). For InCoder and CodeGen, we experiment with the 6B-parameter models. Among these models, Codex and CodeGen models are trained to predict the right context while InCoder models are trained for both left-to-right generation and infilling. In addition, Codex-002 also supports infilling, although the exact model training details are not disclosed.
Implementation Details. We generate 40 samples for each DS-1000 problem with temperature set to 0.2, top-p cutoff set to 0.95, and max generation length set to 1024. We set the stop sequence tokens to "</code>" and "# SOLUTION END". These samples are used in the unbiased estimator of pass@1. For DS-1000, evaluating generated codes does not require special computational resources like GPUs.
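For reference, pass@1 is computed with the standard unbiased pass@k estimator of Chen et al. (2021a), sketched below; with n = 40 samples per problem and k = 1 it reduces to the fraction of correct samples per problem, averaged over problems.

```python
# The standard unbiased pass@k estimator (Chen et al., 2021a).
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """n: samples drawn for a problem, c: how many were correct, k: evaluation budget."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))
```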
Main Results
Table 5 displays the pass@1 accuracy on DS-1000. We find that DS-1000 can differentiate models with different capabilities. The best model Codex-002 achieves a nontrivial but far-from-perfect average accuracy of 43.3%, indicating substantial room for improvement. In contrast, other models like CodeGen-6B or InCoder-6B have much worse overall performance, with accuracy lower than 5% on some libraries. Qualitatively, these smaller models often cannot correctly follow the prompt instruction, generating additional comments instead of the required code. Future ablation is needed to understand the underlying cause for this performance gap, which could be the difference in model size, lack of instruction tuning, or the difference in pre-training data.
In addition, we observe that model accuracy varies across different libraries. This speaks to the importance of a holistic evaluation of multiple data science libraries because performance in a specific library may not directly generalize to other libraries.
Moreover, we find that Insertion format often leads to better performance. The same Codex-002 model has a 4.1% average accuracy improvement when used with Insertion format than used with Completion format. This shows the importance of the infilling capability for data science code completion.
Results by Perturbation
In Section 2.4, we demonstrated the risk of memorizing the solutions on the numpy-100 problem set; do we observe the same effect on DS-1000? To investigate this, we applied surface perturbations (i.e., the problem changes but the reference solution does not change) and semantic perturbations (the reference solution will change) to the problems in DS-1000. Table 6 shows the results. 5 The performance of Codex-002 drops after perturbation (3.4% on surface perturbations and 9.0% on semantic perturbations) but the drop is much less severe than what we observed on numpy-100. This indirectly suggests that Codex-002 might have memorized the solution for some StackOverflow problems, but the effect is less severe because they have not been repeated as often as numpy-100 on the internet. Still, we believe problem perturbation to be a useful strategy to defend against memorization by future models proactively.
Additionally, we rewrote some problems to create more DS-1000 problems by intentionally making them more difficult even for human programmers. As expected, Codex-002 performs much worse after the rewrite, and we plan to use these problems as a challenge for future models.
We give a preliminary error analysis in Appendix C.
Related Work
Natural Language to Code. Research on translating natural language to executable forms dates back several decades. The models have become increasingly capable of producing complex and general programs while requiring fewer human annotations. Zelle & Mooney (1996) and Zettlemoyer & Collins (2007) translate natural language queries to domain-specific database queries. Liang et al. (2013) and Berant et al. (2013) parse natural language into first-order logic to answer generic knowledge-based questions. Yu et al. (2018); Scholak et al. (2021) translate natural language problems to general SQL programs and develop models that can generalize across domains. While all the works above still need to train their models on the task they evaluate, recently Li et al. (2022); Chen et al. (2021a) show that generative models pretrained on code can produce Python snippets to tackle competitive programming challenges, without any additional human annotations. Many other recent works corroborated this finding (Nijkamp et al., 2022; Fried et al., 2022; Xu et al., 2022; Black et al., 2022), and additional techniques at inference time further improve the performance (Poesia et al., 2022; Shi et al., 2022).
Conclusion
We propose DS-1000, a benchmark for generating code for data analysis. Our benchmark 1) contains realistic problems, 2) implements reliable automatic metrics, and 3) proactively defends against memorization strategies. We hope DS-1000 can track the progress of this research area and facilitate fair comparisons between models, and our methods to construct it can inspire other areas where the task is complicated and the ground truth is challenging to evaluate.

Filtering Suitable Problems. From the initial pool of popular problems, our annotators selected problems that are suitable for building DS-1000. Besides the considerations mentioned in Section 2, we discuss those problems that are not selected here. In general, we consider a problem to be unsuitable if our multi-criteria evaluation is not applicable (untestable problems). For example, we left StackOverflow problems involving hardware problems (see Figure 29), software errors (see Figure 30), concrete execution time analysis, etc. out of DS-1000. See Figure 31 for a concrete example where the problem asks for a natural language explanation of a method in TensorFlow. We leave incorporating more unsuitable StackOverflow problems for future work.
Controlling Library Version.
A.2. Example Problems
Here we present an example problem from each of the seven libraries in DS-1000 to illustrate the challenges we encountered in creating DS-1000. Figure 9 shows a NumPy problem asking how to generate samples that follow a log-uniform distribution. Since the result varies with different solutions and different settings, it is unreasonable to test for exact equivalence. Instead, we apply the Kolmogorov-Smirnov test, which judges whether two groups of samples are drawn from the same (or a very similar) population. Figure 10 gives a SciPy problem that describes some trouble with the number of stored elements in a sparse matrix and asks for a solution without repetitive type conversion. Since our self-made assertion that checks the equivalence of two matrices cannot distinguish the difference in stored numbers, we need a special design for this problem.
For functional correctness, we check the type of b, match the elements, and check the number of non-zero elements (nnz), which is the core of the problem. For surface-form constraints, we reject the use of .toarray(), .A, .todense(), and .array(), which might attempt to transform a sparse matrix into a dense one. Figure 11 shows a Pandas problem. We found that the solution with the highest votes ignores the requirement "but does not exactly match it" in the description of the problem, and thus we had to fix the bug in our reference solution. Besides, we enhanced the test case to check this point. Figure 12 shows a TensorFlow problem. Since there is no built-in testing function defined in TensorFlow 2.10.0, we had to design it ourselves. Figure 13 demonstrates a PyTorch problem. Here we use load_data() to hide the input and let the models learn from the description. The correct solution is not a regular type conversion, as indicated in the error message. Figure 14 shows a Scikit-learn problem. It requires applying the preprocessing method defined in Scikit-learn to a Pandas dataframe, and it tests whether the models learn Scikit-learn, Pandas, and their interaction well. Actually, these data science libraries are not independent of one another, and this problem exemplifies the interactions. Figure 15 shows a Matplotlib problem. Here the original problem on StackOverflow contains an example figure, which cannot be processed by current code models. We rewrite the original problem into a standalone problem, that is, "Plot y over x and show blue dashed grid lines". The automatic evaluation comes in two parts. First, it compares the image produced by the generated program with the image produced by the reference program. If the two images match exactly, then the generated program is considered correct. Otherwise, the automatic evaluation examines the Matplotlib axis object and asserts the conditions relevant to the problem specification. In this example, the assertions test the existence of grid lines and the color of the grid lines.
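To make the randomness-tolerant check for the Figure 9 problem concrete, a sketch following the test code shown in that figure is given below; the 0.1 threshold on the KS statistic is the one shown in the figure, and the function wrapper itself is illustrative.

```python
# A sketch of the KS-test-based check for the log-uniform sampling problem:
# instead of comparing samples element-wise, a two-sample Kolmogorov-Smirnov
# test compares the predicted samples with samples drawn by the reference solution.
import numpy as np
from scipy import stats

def check_loguniform_samples(result: np.ndarray, ans: np.ndarray) -> None:
    np.testing.assert_array_equal(result.shape, ans.shape)
    # a small KS statistic means the two samples plausibly come from the same distribution
    assert stats.ks_2samp(result, ans)[0] <= 0.1
```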
A.3. Problem Perturbation
Here, we give an example for each type of perturbation, as shown in Table 1. We highlight the changes we made through perturbations. Figure 16, Figure 17 and Figure 18 give examples of surface perturbations, showing code context perturbation, paraphrasing, and changes in example respectively. The original task hasn't changed. Figure 19 shows how we replace keywords with analogy words in a Pandas problem. The perturbed problem asks for applying an exponential function to column A and B.
The problem in Figure 20 concentrates on changing the required index. Here we specify the target index on which to operate using ordinal numbers. Figure 21 gives an example of reversing the order. The desired output string is reversed(from "abc,def,ghi,jkl" to "jkl,ghi,def,abc"). We expect the models to capture the information and handle the perturbation. Figure 22 shows an example of changing the type of the required result. Here we change the type from pd.DataFrame to pd.Series. Figure 23 and Figure 24 demonstrate how we get difficult rewrites. The example in Figure 23 replaces "highest" with "lowest" and changes the shape of the desired output (from n × 1 to 1 × n). The example in Figure 24, on the other hand, focuses on digging more perturbations that could increase the difficulty. The models should not only learn how to use a two-sample KS test but also learn how to interpret the result of the KS test.
A.4. Prompt Format
As we've mentioned in Section 4.1, we also provide a prompt of Completion format. Here are two examples (Figure 25 and Figure 26) showing that we have to translate the code in the right context into natural language instructions as complementary information.
B. Details of Experiments on numpy-100
numpy-100 is a collection of 100 NumPy exercises from NumPy mailing list, StackOverflow, and NumPy documentation, which has been forked over 4.7k times on GitHub.
As shown in Figure 3, in the numpy-100 problem set, each problem is given a short, one-sentence description with no code context, followed by a reference solution.
#### 28. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element?
```python
print(np.unravel_index(99, (6,7,8)))
```
Figure 3: A numpy-100 example.
First, we wrote a code context for each problem and applied Insertion prompt, as shown in Figure 4. Then we paraphrased the problems and modified the code contexts as surface perturbations, as shown in Figure 5 and Figure 6. We changed the description from "Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element?" to "I have an array with shape (6,7,8). I need to find the index of the 100th element.". In another way, we changed the code context to require models to complete a given function.
For semantic perturbation, we changed the requirements of the problems and also the semantics of the reference solutions without changing their difficulty. As shown in Figure 7, we changed "100" to "99". At last, we equipped each problem and its perturbation with one test case and an automatic evaluation. Then we tested the performance of Codex-002 on them. We sampled 20 problems from numpy-100 and generated 10 samples for each problem with temperature set to 0.7, and top-p cutoff set to 0.95.
C. Error Analysis
We provide a preliminary error analysis by showing an example model error in Figure 8 and provide additional examples in Figures 27 and 28. In this example, the problem asks for removing adjacent duplicated non-zero values in a given array, which cannot be satisfied by a single NumPy operation. The reference implements this problem by creating a binary array representing the selection and performing two operations to meet the problem requirement. However, we see Codex-002 fails on the composite request and attempts to answer the problem with a single method, np.unique, pointed out as incorrect in the problem already. This example error demonstrates the challenges in DS-1000 problems, which require both natural language understanding and code generation abilities.
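For illustration, one way to implement the composite operation this problem asks for is sketched below; it mirrors the selection-mask idea described above but is not our exact reference solution, and the example input is made up.

```python
# A sketch of the composite operation from Figure 8: collapse adjacent duplicates
# (judged on the original array) and then drop all zeros; np.unique alone cannot do this.
import numpy as np

def remove_adjacent_duplicates_and_zeros(arr: np.ndarray) -> np.ndarray:
    keep = np.ones(arr.shape, dtype=bool)   # binary array marking which entries to keep
    keep[1:] = arr[1:] != arr[:-1]          # drop an entry equal to its immediate left neighbour
    arr = arr[keep]                         # first operation: collapse adjacent duplicates
    return arr[arr != 0]                    # second operation: drop the zero values

# Example (hypothetical input):
# remove_adjacent_duplicates_and_zeros(np.array([0, 1, 1, 2, 0, 1, 3, 3]))
# -> array([1, 2, 1, 3])
```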
Figure 11: An example problem of Pandas. We need to write reference solutions by ourselves because high-vote replies from StackOverflow ignore the requirement "but does not exactly match it".
Figure 16: An example problem of surface perturbation. We expect the model to complete the function (on the right).
Figure 17: An example problem of surface perturbation. The description in the prompt has been paraphrased.
Figure 19: An example problem of semantic perturbation. "inverse" has been replaced with an analogy word "exponential".
Figure 20: An example problem of semantic perturbation. The required index of rows and columns has been changed.
Aghajanyan, A., Huang, B., Ross, C., Karpukhin, V., Xu, H., Goyal, N., Okhonko, D., Joshi, M., Ghosh, G., Lewis, M., et al. CM3: A causal masked multimodal model of the internet. arXiv preprint arXiv:2201.07520, 2022.
Austin, J., Odena, A., Nye, M., Bosma, M., Michalewski, H., Dohan, D., Jiang, E., Cai, C., Terry, M., Le, Q., et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
Bavarian, M., Jun, H., Tezak, N., Schulman, J., McLeavey, C., Tworek, J., and Chen, M. Efficient training of language models to fill in the middle. arXiv preprint arXiv:2207.14255, 2022.
Berant, J., Chou, A., Frostig, R., and Liang, P. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1533-1544, 2013.
Black, S., Biderman, S., Hallahan, E., Anthony, Q., Gao, L., Golding, L., He, H., Leahy, C., McDonell, K., Phang, J., Pieler, M., Prashanth, U. S., Purohit, S., Reynolds, L., Tow, J., Wang, B., and Weinbach, S. GPT-NeoX-20B: An open-source autoregressive language model. In Proceedings of BigScience Episode #5 - Workshop on Challenges & Perspectives in Creating Large Language Models, virtual+Dublin, May 2022. Association for Computational Linguistics.
Bolyen, E., Rideout, J. R., Dillon, M. R., Bokulich, N. A., Abnet, C. C., Al-Ghalith, G. A., Alexander, H., Alm, E. J., Arumugam, M., et al. Reproducible, interactive, scalable and extensible microbiome data science using QIIME 2 (vol 37, pg 852, 2019). Nature Biotechnology, 2019.
Ren, S., Guo, D., Lu, S., Zhou, L., Liu, S., Tang, D., Sundaresan, N., Zhou, M., Blanco, A., and Ma, S. CodeBLEU: a method for automatic evaluation of code synthesis. CoRR, abs/2009.10297, 2020. URL https://arxiv.org/abs/2009.10297.
Romero, C. and Ventura, S. Data mining in education. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 3(1):12-27, 2013.
Scholak, T., Schucher, N., and Bahdanau, D. PICARD: Parsing incrementally for constrained auto-regressive decoding from language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 9895-9901, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics.
Shi, F., Fried, D., Ghazvininejad, M., Zettlemoyer, L., and Wang, S. I. Natural language to code translation with execution. arXiv preprint arXiv:2204.11454, 2022.
Tay, Y., Dehghani, M., Tran, V. Q., Garcia, X., Bahri, D., Schuster, T., Zheng, H. S., Houlsby, N., and Metzler, D. Unifying language learning paradigms. arXiv preprint arXiv:2205.05131, 2022.
Tufano, M., Drain, D., Svyatkovskiy, A., Deng, S. K., and Sundaresan, N. Unit test case generation with transformers and focal context. arXiv preprint arXiv:2009.05617, 2020.
Figure 4: A numpy-100 example prompt.
Figure 6: A numpy-100 example of surface perturbation. We changed the code context.
Figure 7: A numpy-100 example of semantic perturbation. We only changed the required index.
Figure 8: An example model mistake. The problem specifies a composite requirement, removing adjacent non-zero duplicates, which cannot be solved by a single operation. The model mistakenly generates a single operation that removes all duplicates.
Figure 9: NumPy example problem involving randomness, requiring the use of a specialist knowledge test.
Figure 10: An example problem of SciPy. Specific checking on conversion between dense matrix and sparse matrix.
Figure 13: An example problem of PyTorch, with failed attempt and error message given in the description.
Figure 18: An example problem of surface perturbation. The example input in the prompt has been replaced with another one.
Figure 21: An example problem of semantic perturbation. The order of the desired string has been reversed.
Figure 22: An example problem of semantic perturbation. The type of the desired result has been changed but the content still keeps the same.
Figure 23: An example problem that is difficult-rewritten with a combination of surface and semantic perturbations.
Figure 25: Completion prompt corresponding to Figure 1.
Figure 27: An example wrong solution that misunderstands the requirements and modifies the wrong column.
Figure 28: An example wrong solution that uses a common function instead of a function of TensorFlow.
Figure 29: An example untestable problem involving hardware problems.
Figure 31: An example untestable problem involving explanations.
Table 4 compares DS-1000 to other datasets. Notably, the average number of words per problem in DS-1000 is much larger than in other data science related datasets (e.g., DSP, Chandel et al. 2022a and CoNaLa, Yin et al. 2018). More importantly, the problems in DS-1000 represent more diverse and naturalistic intent and context formats that cannot be seen in any other datasets. Unlike generic Python code generation benchmarks (MBPP, Austin et al. 2021 and HumanEval, Chen et al. 2021a), we note that data science code generation benchmarks have fewer test cases since the annotators need to define program inputs with complex objects such as square matrices, classifiers, or dataframes rather than simple primitives, such as floats or lists. Nevertheless, as we will show next, even a few test cases suffice for DS-1000.
We evaluate our multi-criteria automatic metric by checking whether it can reject incorrect solutions. We randomly sampled 10 problems from each library and sampled 40 predictions for each of them.
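To make the multi-criteria check concrete, the sketch below shows one way a predicted completion could be spliced into the code context, executed against the hand-written test program, and screened by an optional surface-form constraint. This is an illustrative simplification, not DS-1000's actual evaluation harness.

def evaluate_prediction(context, prediction, test_program, surface_check=None):
    # Accept a prediction only if it satisfies the surface-form constraint,
    # executes without error, and passes all asserts in the test program.
    if surface_check is not None and not surface_check(prediction):
        return False
    program = context.replace("[insert]", prediction) + "\n" + test_program
    namespace = {}
    try:
        exec(program, namespace)  # asserts in test_program raise on failure
    except Exception:
        return False
    return True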
Related Work

Natural Language to Code. Research on translating natural language to executable forms dates back several decades. The models have become increasingly capable of producing complex and general programs while requiring fewer human annotations. Zelle & Mooney (1996) and Zettlemoyer & Collins (2007) translate natural language queries to domain-specific database queries. Liang et al. (2013) and Berant et al. (2013) parse natural language into first-order logic to answer generic knowledge-based questions. Yu et al. (2018); Scholak et al. (2021) translate natural language problems to general SQL programs and develop models that can generalize across domains. While all the works above still need to train their models on the task they evaluate, recently Li et al. (2022); Chen et al. (2021a) show that generative models pretrained on code can produce Python snippets to tackle competitive programming challenges, without any additional human annotations. Many other recent works corroborated this finding (Nijkamp et al., 2022; Fried et al., 2022; Xu et al., 2022; Black et al., 2022), and additional techniques at inference time further improve the performance (Poesia et al., 2022; Shi et al., 2022).

Table 5: pass@1 accuracy with 40 samples generated for each problem. The upper part shows accuracy on the left-to-right Completion format, while the lower part shows the results of the Insertion format. The rightmost "Overall" column shows the average accuracy on 1000 problems from all libraries. DS-1000 is able to differentiate the capabilities of different models and there is substantial room for improvement even for the best Codex-002 model. *: Matplotlib problems do not have the right context so Completion and Insertion formats are the same.

Format                    Model          Pandas  NumPy  Matplotlib  Scikit-learn  SciPy  TensorFlow  PyTorch  Overall
Left-to-right Completion  Codex-002        26.5   43.1     57.0         44.8       31.8     39.3       41.8     39.2
                          Codex-001         9.4   26.6     41.8         18.5       15.0     17.2        9.7     20.2
                          Codex-Cushman     7.9   21.8     40.7         18.0       11.3     12.2       12.4     18.1
                          CodeGen-6B        1.9   12.1     18.6          5.8        7.4     12.8        3.4      8.4
                          InCoder-6B        3.1    4.4     28.3          2.8        2.8      3.8        4.4      7.4
Insertion                 Codex-002        30.1   46.5     57.0*        53.7       34.8     53.4       47.7     43.3
                          InCoder-6B        2.9    4.6     28.3*         3.1        3.1      7.8        3.2      7.5

Table 6: Accuracy on original problems and on their Surface, Semantic, and Difficult Rewrite perturbations (deltas are relative to the corresponding original problems).

                    Pandas        NumPy         Scikit-learn   SciPy         TensorFlow     PyTorch        Overall
Origin surface      37.3          61.2          52.6           33.0          64.9           64.8           53.2
Surface             31.9 (−5.4)   58.4 (−2.8)   55.7 (+3.1)    32.1 (−0.9)   58.0 (−8.9)    50.0 (−14.8)   49.8 (−3.4)
Origin semantic     36.8          56.7          60.6*          40.3          71.3           65.1           47.2
Semantic            33.2 (−3.6)   49.0 (−7.7)   38.9* (−21.7)  34.3 (−6.0)   42.5 (−25.8)   30.5 (−34.6)   38.2 (−9.0)
Origin difficult    39.9          52.7          5.0*           58.1          73.0*          53.8*          46.8
Difficult Rewrite   17.7 (−22.2)  27.1 (−25.6)  0.0* (−5.0)    13.8 (−44.3)  38.0* (−35.0)  28.8* (−25.0)  21.0 (−25.8)
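Table 5 reports pass@1 estimated from 40 samples per problem. For reference, the sketch below shows the standard unbiased pass@k estimator introduced by Chen et al. (2021a), which reduces to the fraction of correct samples when k = 1; the counts in the usage line are made up for illustration.

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k: n generated samples, c of them correct.
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=40, c=12, k=1))  # 0.3, i.e., the fraction of correct samples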
Code Generation Benchmarks. As models become increasingly capable, researchers start to build increasingly difficult and general code generation benchmarks. While Zelle & Mooney (1996) focused only on domain-specific languages, Yu et al. (2018) builds a Text-to-SQL benchmark that evaluates the capability to write broad-domain SQL programs. Yin et al. (2018) evaluates the capability to write short but general Python snippets, while more recent papers Hendrycks et al. (2021); Li et al. (2022) evaluate models' capability to solve competitive programming problems in Python. If code generation models continue to improve, we expect future researchers to focus on more complex tasks. At the same time, however, it becomes more difficult to build reliable benchmarks aligned with real-world applications. Programs are most useful when they are executed; therefore,
we need to evaluate their execution semantics, and the best
general method so far is still to ask experts to manually write
test cases. Consequently, most benchmarks with test cases
focus on competition/interview/ programming challenges
(Hendrycks et al., 2021; Li et al., 2022), because these are
the only applications where a lot of test cases are already
available. Therefore, most recent papers that evaluate on
real-world programs have to rely on unreliable surface-form
metrics (Ren et al., 2020; Chen et al., 2021b; Xu et al., 2022).
This streetlight effect might incentivize the community to
work on problems that are easy to evaluate but not useful
in practice. In response to this challenge, our paper man-
ually implements a reliable metric for naturally occurring
problems. Future works can consider using models to help
humans write useful tests (Tufano et al., 2020), or formally
verify the correctness of a predicted solution (Chu et al.,
2017).
Carlini, N., Tramèr, F., Wallace, E., Jagielski, M., Herbert-Voss, A., Lee, K., Roberts, A., Brown, T., Song, D., Erlingsson, Ú., Oprea, A., and Raffel, C. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21), pp. 2633-2650. USENIX Association, August 2021a. ISBN 978-1-939133-24-3. URL https://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting.
Carlini, N., Tramèr, F., Wallace, E., Jagielski, M., Herbert-Voss, A., Lee, K., Roberts, A., Brown, T. B., Song, D., Erlingsson, Ú., Oprea, A., and Raffel, C. Extracting training data from large language models. In Bailey, M. and Greenstadt, R. (eds.), 30th USENIX Security Symposium, USENIX Security 2021, August 11-13, 2021, pp. 2633-2650. USENIX Association, 2021b. URL https://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting.
Chandel, S., Clement, C. B., Serrato, G., and Sundaresan, N.
Training and evaluating a jupyter notebook data science
assistant. CoRR, abs/2201.12901, 2022a. URL https:
//arxiv.org/abs/2201.12901.
Chandel, S., Clement, C. B., Serrato, G., and Sundaresan, N.
Training and evaluating a jupyter notebook data science
assistant. arXiv preprint arXiv:2201.12901, 2022b.
Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. d. O.,
Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman,
G., et al. Evaluating large language models trained on
code. arXiv preprint arXiv:2107.03374, 2021a.
Chen, X., Gong, L., Cheung, A., and Song, D. Plotcoder:
Hierarchical decoding for synthesizing visualization code
in programmatic context. In Association for Computa-
tional Linguistics (ACL), 2021b.
Chu, S., Wang, C., Weitz, K., and Cheung, A. Cosette: An
automated prover for sql. In CIDR, 2017.
Elangovan, A., He, J., and Verspoor, K. Memorization
vs. generalization : Quantifying data leakage in NLP
performance evaluation. In Merlo, P., Tiedemann, J.,
and Tsarfaty, R. (eds.), Proceedings of the 16th Con-
ference of the European Chapter of the Association
for Computational Linguistics: Main Volume, EACL
2021, Online, April 19 -23, 2021, pp. 1325-1335. As-
sociation for Computational Linguistics, 2021. doi:
10.18653/v1/2021.eacl-main.113. URL https://doi.
org/10.18653/v1/2021.eacl-main.113.
Faghmous, J. H. and Kumar, V. A big data guide to under-
standing climate change: The case for theory-guided data
science. Big data, 2(3):155-163, 2014.
Fried, D., Aghajanyan, A., Lin, J., Wang, S., Wallace, E.,
Shi, F., Zhong, R., Yih, W., Zettlemoyer, L., and Lewis,
M. Incoder: A generative model for code infilling and
synthesis. CoRR, abs/2204.05999, 2022.
Hendrycks, D., Basart, S., Kadavath, S., Mazeika, M., Arora,
A., Guo, E., Burns, C., Puranik, S., He, H., Song, D., and
Steinhardt, J. Measuring coding challenge competence
with apps. NeurIPS, 2021.
Li, Y., Choi, D. H., Chung, J., Kushman, N., Schrit-
twieser, J., Leblond, R., Eccles, T., Keeling, J., Gi-
meno, F., Lago, A. D., Hubert, T., Choy, P., de Mas-
son d'Autume, C., Babuschkin, I., Chen, X., Huang,
P., Welbl, J., Gowal, S., Cherepanov, A., Molloy, J.,
Mankowitz, D. J., Robson, E. S., Kohli, P., de Freitas,
N., Kavukcuoglu, K., and Vinyals, O. Competition-level
code generation with alphacode. CoRR, abs/2203.07814,
2022. doi: 10.48550/arXiv.2203.07814. URL https:
//doi.org/10.48550/arXiv.2203.07814.
Liang, P., Jordan, M. I., and Klein, D. Learning Dependency-
Based Compositional Semantics. Computational Linguis-
tics, 39(2):389-446, 06 2013. ISSN 0891-2017. doi:
10.1162/COLI_a_00127. URL https://doi.org/10.
1162/COLI_a_00127.
Nijkamp, E., Pang, B., Hayashi, H., Tu, L., Wang, H., Zhou,
Y., Savarese, S., and Xiong, C. A conversational paradigm
for program synthesis. CoRR, abs/2203.13474, 2022.
Poesia, G., Polozov, A., Le, V., Tiwari, A., Soares, G., Meek,
C., and Gulwani, S. Synchromesh: Reliable code genera-
tion from pre-trained language models. In International
Conference on Learning Representations, 2022. URL
https://openreview.net/forum?id=KmtVD97J43e.
Xu, F. F., Alon, U., Neubig, G., and Hellendoorn, V. J. A systematic evaluation of large language models of code. In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, pp. 1-10, 2022.
Yin, P., Deng, B., Chen, E., Vasilescu, B., and Neubig,
G. Learning to mine aligned code and natural language
pairs from stack overflow. In International Conference on
Mining Software Repositories, MSR, pp. 476-486. ACM,
2018. doi: https://doi.org/10.1145/3196398.3196408.
Yu, T., Zhang, R., Yang, K., Yasunaga, M., Wang, D., Li, Z.,
Ma, J., Li, I., Yao, Q., Roman, S., Zhang, Z., and Radev,
D. R. Spider: A large-scale human-labeled dataset for
complex and cross-domain semantic parsing and text-to-
SQL task. In Empirical Methods in Natural Language
Processing (EMNLP), 2018.
Zelle, M. and Mooney, R. J. Learning to parse database
queries using inductive logic programming. In Associa-
tion for the Advancement of Artificial Intelligence (AAAI),
pp. 1050-1055, 1996.
Zettlemoyer, L. and Collins, M. Online learning of relaxed
ccg grammars for parsing to logical form. In Proceedings
of the 2007 Joint Conference on Empirical Methods in
Natural Language Processing and Computational Natu-
ral Language Learning (EMNLP-CoNLL), pp. 678-687,
2007.
Zhao, Z., Wallace, E., Feng, S., Klein, D., and Singh, S.
Calibrate before use: Improving few-shot performance
of language models. In International Conference on
Machine Learning, pp. 12697-12706. PMLR, 2021.
Zhong, R., Yu, T., and Klein, D. Semantic evaluation for
text-to-SQL with distilled test suites. In Proceedings
of the 2020 Conference on Empirical Methods in Natu-
ral Language Processing (EMNLP), pp. 396-411, On-
line, November 2020. Association for Computational Lin-
guistics. doi: 10.18653/v1/2020.emnlp-main.29. URL
https://aclanthology.org/2020.emnlp-main.29.
Appendices
A. Details on Data Collection
A.1. Problem Selection
Sourcing Popular StackOverflow Problems. We lever-
age StackOverflow to collect representative data science
code generation problems on each library. To select popular
problems, we first removed duplicates and selected prob-
lems with at least 1 vote, 1000 views, and accepted answers.
After this initial filtering, we obtain 15881 NumPy problems,
26248 Pandas problems, 1965 PyTorch problems, 8258
TensorFlow problems, 4141 SciPy problems, and 4499
Scikit-learn problems. Next, we performed a stratified
sampling on problems from each year to further subsample
the problems from Pandas and TensorFlow. We designed a
threshold for each year's problems differently because older
problems naturally have higher votes. Table 8 displays the
criteria we used to filter each year's problem on Pandas and
TensorFlow.
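As an illustration of this selection step, the sketch below shows how the initial filtering and the per-year stratified subsampling could be expressed with pandas. The file name, column names, and year thresholds are hypothetical placeholders; Table 8 lists the parameters actually used.

import pandas as pd

posts = pd.read_csv("stackoverflow_pandas_questions.csv")  # hypothetical export

# Initial filter: deduplicate, then keep questions with at least 1 vote,
# 1000 views, and an accepted answer.
filtered = (posts.drop_duplicates(subset="question_id")
                 .query("votes >= 1 and views >= 1000 and has_accepted_answer"))

# Older questions accumulate more votes, so each year gets its own cut-off.
year_thresholds = {2016: 8, 2017: 6, 2018: 4, 2019: 2, 2020: 1}  # placeholder values
subsampled = pd.concat(
    filtered[(filtered["year"] == year) & (filtered["votes"] >= cutoff)]
    for year, cutoff in year_thresholds.items()
)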
Table 7 details the software versions that we build DS-1000 with.

Table 7: The versions of software in DS-1000
Package       Version
Seaborn       0.11.2
Matplotlib    3.5.2
NumPy         1.21.6
Pandas        1.3.5
Scikit-learn  1.0.2
SciPy         1.7.3
TensorFlow    2.10.0
PyTorch       1.12.1
Table 8: The problem selection parameters and the number of result problems of Pandas and TensorFlow.
[Abridged NumPy example prompt: "... shape array, what is the index (x, y, z) of the 99th element?", followed by the code context:
<code>
import numpy as np
[insert]
print(result)
</code>]
Figure 12: An example problem of TensorFlow. We implemented a well-designed test function for tensor comparison.

Problem:
I'm using tensorflow 2.10.0.
I have a tensor of lengths in tensorflow, let's say it looks like this:
[4, 3, 5, 2]
I wish to create a mask of 1s and 0s whose number of 0s correspond to the entries to this tensor, padded in front by 1s to a total length of 8. I.e. I want to create this tensor:
[[1,1,1,1,0,0,0,0],
[1,1,1,0,0,0,0,0],
[1,1,1,1,1,0,0,0],
[1,1,0,0,0,0,0,0]]
How might I do this?

A:
<code>
import tensorflow as tf
lengths = [4, 3, 5, 2]
</code>
BEGIN SOLUTION
<code>
[insert]
</code>
END SOLUTION
<code>
print(result)
</code>

Reference Solution:
lengths_transposed = tf.expand_dims(lengths, 1)
range = tf.range(0, 8, 1)
range_row = tf.expand_dims(range, 0)
mask = tf.less(range_row, lengths_transposed)
result = tf.where(mask, tf.ones([4, 8]), tf.zeros([4, 8]))

Test code (lengths, ans = ... [omitted for brevity]):
def tensor_equal(a, b):  # self-made test function
    if type(a) != type(b):
        return False
    if isinstance(a, type(tf.constant([]))) is not True:
        if isinstance(a, type(tf.Variable([]))) is not True:
            return False
    if a.shape != b.shape:
        return False
    if a.dtype != tf.float32:
        a = tf.cast(a, tf.float32)
    if b.dtype != tf.float32:
        b = tf.cast(b, tf.float32)
    if not tf.reduce_min(tf.cast(a == b, dtype=tf.int32)):
        return False
    return True
assert tensor_equal(result, ans)

Test case 1:
lengths = [4, 3, 5, 2]
ans = ...  # generated by Reference solution

Test case 2: ... [omitted for brevity]
An example problem of PyTorch:

Problem:
I have this code:
import torch
list_of_tensors = [torch.randn(3), torch.randn(3), torch.randn(3)]
tensor_of_tensors = torch.tensor(list_of_tensors)
I am getting the error:
ValueError: only one element tensors can be converted to Python scalars
How can I convert the list of tensors to a tensor of tensors in pytorch?

A:
<code>
import numpy as np
import pandas as pd
import torch
list_of_tensors = load_data()
</code>
BEGIN SOLUTION
<code>
[insert]
</code>
END SOLUTION
<code>
print(tensor_of_tensors)
</code>

Reference Solution:
tensor_of_tensors = torch.stack((list_of_tensors))

Automatic Evaluation / Test code:
torch.testing.assert_close(tensor_of_tensors, ans, check_dtype=False)

Test case 1:
torch.random.manual_seed(42)
list_of_tensors = [torch.randn(3), torch.randn(3), torch.randn(3)]
ans = ...  # generated by Reference solution
Figure 15: An example problem of Matplotlib. Matplotlib original problems often contain example figures which cannot be processed by current code models. We rewrite original problems into standalone problems in the form of comments.

Rewrite prompt:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
x = np.arange(10)
y = np.arange(10)
# Plot y over x and show blue dashed grid lines
# SOLUTION START

Test code (Test case 1: x, y as shown in the prompt):
# Precisely matching images with np.array
from PIL import Image
code_img, oracle_img = ...  # load images
sample_image_stat = (
    code_img.shape == oracle_img.shape
    and np.allclose(code_img, oracle_img)
)
try:
    assert sample_image_stat
# IF failed, matching image components:
ax = plt.gca()
assert ax.xaxis._major_tick_kw["gridOn"]
assert "grid_color" in ax.xaxis._major_tick_kw
assert ax.xaxis._major_tick_kw["grid_color"] in ["blue", "b"]
assert "grid_linestyle" in ax.xaxis._major_tick_kw
assert ax.xaxis._major_tick_kw["grid_linestyle"] in ["dashed", "--", "-.", ":"]

An example problem of SciPy:

Problem:
I have this example of matrix by matrix multiplication using numpy arrays:
import numpy as np
m = np.array([[1,2,3],[4,5,6],[7,8,9]])
c = np.array([0,1,2])
m * c
array([[ 0, 2, 6],
       [ 0, 5, 12],
       [ 0, 8, 18]])
How can I do the same thing if m is a scipy sparse CSR matrix? The result should be csr_matrix as well. This gives dimension mismatch: sp.sparse.csr_matrix(m)*sp.sparse.csr_matrix(c)

A (Completion prompt):
<code>
from scipy import sparse
import numpy as np
sa = sparse.csr_matrix(np.array([[1,2,3],[4,5,6],[7,8,9]]))
sb = sparse.csr_matrix(np.array([0,1,2]))
</code>
BEGIN SOLUTION
<code>
[insert]
</code>
END SOLUTION
<code>
print(result)
</code>

A second, function-form prompt wraps the same problem in def f(sA = example_sA, sB = example_sB): ... return result, with example_sA and example_sB built from the same arrays.
Further perturbation examples (appendix figures, abridged):
- Origin Problem: "How to convert a numpy array of dtype=object to torch Tensor?" with an object array of float16 sub-arrays; the perturbed problem asks the same question with larger sub-arrays of dtype=np.double.
- Origin Problem: given a square correlation matrix in pandas, return all values above 0.3 as a DataFrame with columns Col1, Col2, and Pearson Correlation Coefficient; the perturbed problem asks for the result as a Series instead.
- Difficult Rewrite: instead of the most probable class, return a 1 x n torch.LongTensor indicating which class had the lowest probability for each input of the Softmax output (e.g., [[0.2, 0.1, 0.7], [0.6, 0.3, 0.1], [0.15, 0.8, 0.05]] should yield [1, 2, 2]).
- Difficult Rewrite: the two-sample Kolmogorov-Smirnov test problem from SciPy is extended to also test whether the null hypothesis that the two distributions are identical can be rejected at a given alpha (result=True means able to reject, and vice versa).

Figure 24: An example problem that is difficult-rewritten for more complexity.

Completion prompt for the inverse-columns problem of Figure 1 (abridged):
Problem:
Sample dataframe: df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
I'd like to add inverses of each existing column to the dataframe and name them based on existing column names with a prefix, e.g. inv_A is an inverse of column A and so on. ... [omitted for brevity] Obviously there are redundant methods like doing this in a loop, but there should exist much more pythonic ways of doing it ... [omitted for brevity]
A:
<code>
import pandas as pd
df = pd.DataFrame({"A": [1, 2, 3],"B": [4, 5, 6]})
</code>
result = ...  # put solution in this variable
BEGIN SOLUTION
<code>
[Content of Figure 29: screenshot of the StackOverflow page "How to avoid 'CUDA out of memory' in PyTorch" (a RuntimeError: CUDA out of memory question), untestable because it depends on the user's hardware.]

Figure 30: An example untestable problem involving software errors (screenshot of the StackOverflow page "ImportError: No module named sklearn.cross_validation", whose accepted answer points to the renaming of the cross_validation sub-module to model_selection).

[Content of Figure 31: screenshot of the StackOverflow page "What is the purpose of tf.global_variables_initializer?", untestable because the expected output is an explanation.]
We use a higher temperature of 0.7 compared with 0.2 in Section 4.2 to get more diverse predictions.
4 Some problems in APPS might apply quite similar tests, and some problems may have even as few as 2 or 3 test cases in the test split. Thus, insufficient test coverage probably happens though there are more test cases on average (Li et al., 2022).
Note that the results are not comparable to Table 5 since for each kind of perturbation, we only selected a subset of problems to perturb.
Acknowledgements
We thank Noah A. Smith, Tianbao Xie, Shuyang Jiang for their helpful feedback on this work.

References

[Appendix figure fragments: a Scikit-learn prompt (imports for BaggingClassifier, GridSearchCV, and DecisionTreeClassifier) and a Pandas problem asking to replace each name in a dataframe (columns name, a, b, c) with a unique ID, with reference solution: # df: pd.DataFrame as input; result = df.replace(df['name'].unique(), range(1, len(df['name'].unique()) + 1)).]
| [
"https://github.com/rougier/numpy-100"
] |
[
"The \"Room Theory\": a computational model to account subjectivity into Natural Language Pro- cessing",
"The \"Room Theory\": a computational model to account subjectivity into Natural Language Pro- cessing",
"The \"Room Theory\": a computational model to account subjectivity into Natural Language Pro- cessing",
"The \"Room Theory\": a computational model to account subjectivity into Natural Language Pro- cessing"
] | [
"Carlo Lipizzi \nSchool of Systems & Enterprises\nStevens Institute of Technology\nHobokenNew JerseyUSA\n",
"Dario Borrelli dborrell@stevens.edu \nSchool of Systems & Enterprises\nStevens Institute of Technology\nHobokenNew JerseyUSA\n",
"Fernanda De Oliveira Capela \nSchool of Systems & Enterprises\nStevens Institute of Technology\nHobokenNew JerseyUSA\n",
"Carlo Lipizzi \nSchool of Systems & Enterprises\nStevens Institute of Technology\nHobokenNew JerseyUSA\n",
"Dario Borrelli dborrell@stevens.edu \nSchool of Systems & Enterprises\nStevens Institute of Technology\nHobokenNew JerseyUSA\n",
"Fernanda De Oliveira Capela \nSchool of Systems & Enterprises\nStevens Institute of Technology\nHobokenNew JerseyUSA\n"
] | [
"School of Systems & Enterprises\nStevens Institute of Technology\nHobokenNew JerseyUSA",
"School of Systems & Enterprises\nStevens Institute of Technology\nHobokenNew JerseyUSA",
"School of Systems & Enterprises\nStevens Institute of Technology\nHobokenNew JerseyUSA",
"School of Systems & Enterprises\nStevens Institute of Technology\nHobokenNew JerseyUSA",
"School of Systems & Enterprises\nStevens Institute of Technology\nHobokenNew JerseyUSA",
"School of Systems & Enterprises\nStevens Institute of Technology\nHobokenNew JerseyUSA"
] | [] | This work introduces a novel method to consider subjectivity and general context dependency in text analysis. The proposed method takes into account subjectivity using a computational version of the Framework Theory by MarvinMinsky (1974)leveraging on text vectorization -such as Word2Vec byMikolov et al. (2013). The embeddings created by the vectorization represent the knowledge of the context to be used to for the text analysis. Our approach is based on three components: 1. a framework/"room" representing the point of view of the individual or the collective; 2. a benchmark/set of keywords representing the criteria for the analysis; and 3. the document to be analyzed. By measuring the similarity between the vectors representing words/semantic elements, we extract the relative relevance of the elements in the benchmark for the document to be analyzed. Our method provides a way to consider the point of view of the reader of the document or the specific domain we want to use to get insights. This method could be applied to all the cases where evaluating subjectivity is relevant to understand the relative value or meaning of a text. Subjectivity is relevant to evaluate human reactions or to analyze text in a given context or domain. | null | [
"https://arxiv.org/pdf/2005.06059v2.pdf"
] | 218,613,680 | 2005.06059 | c28fdd88499ced45c7e5c8ea91618ffc1b693040 |
The "Room Theory": a computational model to account subjectivity into Natural Language Pro- cessing
Carlo Lipizzi
School of Systems & Enterprises
Stevens Institute of Technology
HobokenNew JerseyUSA
Dario Borrelli dborrell@stevens.edu
School of Systems & Enterprises
Stevens Institute of Technology
HobokenNew JerseyUSA
Fernanda De Oliveira Capela
School of Systems & Enterprises
Stevens Institute of Technology
HobokenNew JerseyUSA
The "Room Theory": a computational model to account subjectivity into Natural Language Pro- cessing
1SubjectivityText MiningNatural Language ProcessingText VectorizationSocial Media
This work introduces a novel method to consider subjectivity and general context dependency in text analysis. The proposed method takes into account subjectivity using a computational version of the Framework Theory by Marvin Minsky (1974), leveraging on text vectorization - such as Word2Vec by Mikolov et al. (2013). The embeddings created by the vectorization represent the knowledge of the context to be used for the text analysis. Our approach is based on three components: 1. a framework/"room" representing the point of view of the individual or the collective; 2. a benchmark/set of keywords representing the criteria for the analysis; and 3. the document to be analyzed. By measuring the similarity between the vectors representing words/semantic elements, we extract the relative relevance of the elements in the benchmark for the document to be analyzed. Our method provides a way to consider the point of view of the reader of the document or the specific domain we want to use to get insights. This method could be applied to all the cases where evaluating subjectivity is relevant to understand the relative value or meaning of a text. Subjectivity is relevant to evaluate human reactions or to analyze text in a given context or domain.
Introduction
Subjectivity refers to the idea that any opinion of an individual (or a collective) is shaped by its socio-cultural experience. The way one feels and reacts is affected by the social and cultural situations the subject has been exposed. Each subject has its own experience, creating their unique way to read the environment. This form of diversity affects different aspects of human life. For instance, political subjectivity is the set of thoughts, motivations, feelings that a social subject has within a society. This type of subjectivity determines the bias towards a political ideology (Ransom, 1997). According to Damasio (2018), subjectivity is the central constitutive element of consciousness. Without subjectivity, the individual is incapable of reflection and discernment and, therefore, is incapable of being creative.
Subjectivity is also linked to emotions, as highlighted by Harré (1986), stating that emotions are the result of a social construction mechanism ("The social construction of emotions"). More recently, the anthropologist Tanya M. Luhrmann (2006) creates a theory of subjectivity with the help of a psychological model of emotion. Emotional responses are the consequence of the way the subject perceived an external stimulus. Due to subjectivity, this response occurs differently, depending on the subject perceiving it.
Subjectivity has an impact in a wide variety of human activities and the criteria to determine it may be applied to evaluate a more general relative relevance of a context to evaluate text. We will provide some examples of this in the conclusions.
In this paper, we use as a test case to evaluate our approach to value subjectivity in text analysis, the emotional reactions to a divisive political topic.
We use emotions as an example of a criteria for subjective classifications, being emotions a quintessential example of subjectivity. For all intents and purposes, emotions are the "what" we want to evaluate, not the "how", that is the core of this paper
In more general terms, our approach to subjectivity can be applied to all the cases when a context-dependent analysis is required. We will mention later the use of this approach to classify documents based on a given point of view.
To frame the emotions, we used the emotion classification created by Plutchik in 1980. This framework - summarized by the chart in Figure 1 - is known as the "Plutchik's wheel of emotions". According to this approach, there are eight major emotional "channels", with three levels of intensity each. Using this classification, we could define the emotions polarity according to the context where those emotions have been evaluated. Admiration/trust/acceptance may be positive in analyzing what people in a sports team are saying, negative in a radicalization analysis. Using a combination of emotions, we could also determine what Plutchik defined as "condition", such as $joy + trust = love$.
Plutchik's wheel in Figure 1 reports 8 basic channels consisting of 3 different shades each. Between two different emotional channels, emotional conditions are also reported
Figure 1
A common method to extract emotions from text is based on ontologies of language (Shivhare & Khethawat, 2012). Emotions are extracted from text using the semantic similarity between words in the text and the words representing emotions, leveraging on ontology trees (e.g. Lin-similarity with WordNet). This approach is not factoring in the social and cultural differences as well as the continuous evolution of the language, leading to different perceptions of emotions.
Other works introduce the variable of subjectivity in sentiment (Do, Prasad, Maag, & Alsadoon, 2019), but they do not provide a comprehensive method to detect emotions. For example, recent studies use manually annotated corpora as training set for supervised learning algorithms. Although some of these methods reach significant performances (Mohammad & Bravo-Marquez, 2017), they do not take into consideration the intrinsic bias in the model from the manual annotation performed by humans with specific cultural and social backgrounds.
A recent approach to text processing is based on a vector representation of words, word embeddings (Giatsoglou, et al., 2017), as input features. A particular type of vector representations of words, called Word2Vec, was introduced in 2013 by Mikolov and colleagues. Several Natural Language Processing (NLP) and Natural Language Understanding (NLU) tasks have been executed with this technique, with better results than rule-based methods. Word2Vec uses an Artificial Neural Network (ANN) to predict words belonging to a similar context by maximizing the probability that a given word appear near a set of other words. By training an ANN, they are able to use the weights of the nodes in the hidden layers to create a vector representation of the words in the original corpus. The matrix containing all the unique words in the text ("embeddings") is unique for the corpus used to train the Word2Vec model and is a numerical representation of the specific semantic domain represented by the corpus, assuming the corpus is large enough to be representative.
Using this approach, we generate domain specific embeddings ("rooms"), representing the points of view of the different entities reading the same text.
In this paper, we first provide a review of the most relevant studies on subjectivity. We then introduce our methodology and finally we present a case study to validate our approach.
For the case study, we use the evaluation of emotions as an example of the impact of subjectivity. As domain, we use a political example, based on data from the past United States presidential election (Clinton vs. Trump) to build the Word2Vec models representing the two factions of voters. Using the two models, we analyze the tweets published by Donald J. Trump on his Twitter account (@realDonaldTrump) to show how the emotional reactions of the two factions confirm our hypothesis of both subjectivity of emotional reaction and validity of our representation of this subjectivity.
Literature Review
Existing techniques on textual information processing concentrate on mining and retrieval of factual information (e.g., information retrieval, text classification, text clustering, among others). On the other hand, the processing of subjective perceptions, such as emotions and opinions, is still a developing field. Current sentiment analysis methods offer positive or negative outcome, as if the nuances of the human perception had only two polarities and not a large range of interpretations. This study addresses this issue and brings a new consideration to textual processing: different communities have different perceptions, opinions, and emotions.
Subjectivity
Subjectivity can be considered both as individual ("my reaction to something is different from someone else reaction") and as collective ("the experts in one area" or "a culturally homogeneous group"). For continuity with the case study, we focus our literature review primarily on the individual subjectivity.
On the individual side, there are many ways of thinking and studying subjectivity: philosophers, psychologists and theorists have approached the topic in many different ways. For the scientific community focused on natural language, the approach combining subjectivity and language by the psychoanalyst Jaques Lacan is particularly relevant. Lacan (1977) states that the unconscious is structured like a language. In particular, the author affirms that language reveals the nature of our psychology and, therefore, our vision of the world. With this concept, the author challenged the common-sense idea that language exists as a means of communication. Instead, he wants to show that language is an expression of subjectivity, in which words are not just "meaning's placeholders", but they convey subjective value depending on who is the entity that is reading, writing, listening, or speaking.
The link between subjectivity and emotions is highlighted by Harré (1986), which states that emotions, for a subject, are the result of a social construction mechanism ("The social construction of emotions"). According to the author, language and values of a society determine emotions of the individuals or groups that compose it. More recently, the anthropologist Tanya M. Luhrmann (2006) creates a theory of subjectivity, incorporating a psychological model of emotion. Watson (1919), precursor of researches behaviorism, started an evolutionary study of emotions.
The subjectivity of emotions in centered on the subject as an entity that understands, learns, classifies and evaluates. This is why a person can feel -for example -fear for an event or a situation that could be irrelevant for another: emotional responses are the consequence of the way the subject perceived an external stimulus.
Emotion Theories
Researchers have investigated several aspects of human emotions in order to converge to a set of commonly accepted emotion categories (Picard, 1997). Plutchik developed his emotion "wheel" to illustrate the various relationships among emotions. After decades, it is still one of the main references in the field because it covers the numerous complex definitions of emotions into one image and stipulate the basic emotions as joy, sadness, fear, anger, disgust, surprise, acceptance and anticipation. It also comprehends the two dimensions for the basic emotions: valence (joy versus sadness) and arousal (anger in a lower level is annoyance, in higher level is rage). Written expression of emotion lacks gestures, tones and facial expressions, and instead relies on creative use of words for communicating emotion (Aman, 2007). Some words convey emotion explicitly, while other are used to convey emotion implicitly depending on the context (Clore, Ortony, & Foss, 1987).
Emotion Detection
As the word "affect" is commonly used in the scientific domain to refer to emotions, Piccard (1997) also denominates emotion recognition as Affective Computing. According to Aman (2007), "recognition and classification of emotion in text can be regarded as a sub-field of sentiment analysis". Extracting from text insights on emotions may benefit many areas, like personality analysis and modeling (Liu & Maes, 2004), text-to-speech synthesis (Alm, Roth, & Sproat, 2005), consumer feedback analysis, Human-Computer Interaction and Affective Interfaces (Brave & Nass, 2003), affective tutoring in elearning applications (Zhang, Zhou, Briggs, & Nunamaker Jr., 2006), affective communication systems (Neviarouskaya, Prendinger, & Ishizuka, 2007), virtual counselling and design of agents based on emotional users preferences.
Earlier studies of emotion recognition relied on datasets that were manually annotated for emotion and were typically keyword-based, identifying the presence of an emotion based on the appearance of predetermined lexical markers.
Aman (2007) explores approaches for automatic detection of emotions in text using natural language processing and machine learning techniques, training classifiers using semantic resources such as WordNet Affect and Roget's Thesaurus. Suttles and Ide (2013) used the eight basic emotions of Plutchik to treat the emotion classification task as a binary problem for four opposing emotion pairs. The approach applies distant supervision, which aims to overcome the need for a large set of manually labeled data to produce classifiers. There is still the need to train classifiers and opposed emotions are not considered as possibly coexisting.
Recent Applications
The automatization of subjective tasks is not new in Natural Language Processing. Many efficient algorithms, tools, and techniques have been developed in the past few years and can deliver reasonable results. More recent studies appear to focus on improving these existing methods or creating frameworks that combine them for a certain application.
Methodology
Semantic Frames and Subjectivity
According to the Social Judgment Theory (Sherif & Hovland, 1961), individuals evaluate new ideas based on their social background. Social and cultural elements are part of individuals knowledge, that needs to be represented to be placed into consideration for our goal.
More specifically on knowledge representation, Marvin Minsky (1974) in his pioneer study in AI and Cognitive Science -"A Framework for Representing Knowledge" -introduced the idea of "frames". According to his work, "a frame is a data-structure for representing a stereotyped situation like being in a certain kind of living room".
We leveraged this concept to recreate "rooms" representing the semantic context for a specific social/cultural entity. Consider an instant of time t and a social entity P that interacts with the external environment using a textual content. This interaction would unfold into two processes. The first is an internalization, that corresponds to the reading/acquisition action: the entity is exposed to a textual content and the reading of this content is the action that the entity performs to start the process of internalization. The second process is the externalization: the entity produces a content and through the action of publishing this content makes it usable to other entities.
Introducing the time reference, we can collect the results of these two processes in a single corpus that contains both contents produced by the entity and the contents to which the entity has been exposed up to the instant of time t. It is necessary to define the starting time for the collection of textual contents. If the social entity is a single individual, we can refer to the date on which this individual began internalizations and externalizations (interactions). If the social entity is a collective, the starting time coincides with its first specific interaction. The social entity's socio-cultural background at time t will then be a function of the corpus which has been constructed from its first interaction up to the present temporal instant. Thus, for any collective or individual it is possible to build the corpus at a precise moment of time.
The timeline in Figure 2 shows the interactions of a social entity P since time $t_0$. C is the corpus resulting from all textual interactions (externalization and internalization of textual content).
This allows us to compare the corpus related to different subjects and evaluate if the subjects are similar. It is also possible to evaluate how each one of these social entities perceives a new corpus in terms of emotion intensity, and this is will be detailed in the next section. At the instant t, emotions can be measured using the corpus C(t) representing the reader's point of view (that is a representation of the reader's socio-cultural background).
Accounting subjectivity
The proposed methodology for evaluating subjective interpretation of text is summarized by the chart in Figure 3. Using this methodology, we:
• Create a "room" by generating embeddings from a domain specific corpus, to represent the point of view for the analysis. This room is a computational representation of the point of view and -in a more generalized way -a computational representation of a knowledge base.
• Define a word set to be used as criteria for the analysis. This is going to be a benchmark for the comparison, like the list of emotions based on the Plutchik classification. Word set can be composed by single words or small sets of words (like in "software engineering"), referred as "chunks" or "n-grams".
• Compare the words/chunks in the incoming document (the one to be evaluated) with the words/chunks in the benchmark, using the "room" to calculate the distance between them.

• Add and normalize the collected similarity values for each word/chunk in the benchmark, to obtain an evaluation of the incoming document based on the elements in the benchmark, according to the point of view represented by the "room".

In order to provide a more accurate evaluation of the similarities, before comparing the document and benchmark words/chunks, we transform them into "simsets", lists of words/chunks most similar to each of them, where the similarity is calculated by selecting the words/chunks from the "room" with the highest cosine similarity. Comparison is then performed between each element of the two lists, as sketched below.
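The sketch below illustrates this scoring procedure with a trained gensim (4.x) model standing in for the "room"; the function and variable names are ours, and the normalization is one simple choice among several possible.

import numpy as np
from gensim.models import KeyedVectors

def score_document(room: KeyedVectors, benchmark: list, document: list) -> dict:
    # Relative relevance of each benchmark word for a tokenized document,
    # measured inside the given room.
    scores = {}
    for b in benchmark:
        sims = [room.similarity(b, w) for w in document
                if b in room.key_to_index and w in room.key_to_index]
        scores[b] = float(np.mean(sims)) if sims else 0.0
    total = sum(scores.values()) or 1.0
    return {b: s / total for b, s in scores.items()}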
Measuring Emotions
Transforming text into vectors
The proposed methodology is based on the distributed representation of the words -Word2Vec -introduced in the Natural Language Processing by Mikolov et al. (2013), and is represented by Figure 4.
Figure 4
The text to be analyzed can be viewed as a string composed by n non-unique words:

$'w_1\ w_2\ \dots\ w_n'$

By splitting the text string by the ' ' spaces, and cleaning the punctuation (if any), we create a single dimensional list composed by the non-unique words contained in the text:

$L = [w_1, w_2, \dots, w_n]$

Using the Word2Vec method (Mikolov, Sutskever, Chen, Corrado, & Dean, 2013), we assign a vector representation to each word ("embedding"), where the values defining the vectors are the output layer of the neural network used by Word2Vec and based on the probability of co-occurrence of the words in the text within a given number of words of separation. Each word is then transformed into a vector:

$w_i \rightarrow \vec{w_i}$

The generic $\vec{w_i}$ is an $m$-dimensional vector. If the embedding size is $m = 3$, the words can be represented in a 3-dimensional vector space (Figure 5) and the list can be rewritten as follows:

$L = [\vec{w_1}, \vec{w_2}, \dots, \vec{w_n}]$

Because of the way the vectors are created by Word2Vec, the greater the probability that two words appear in the same context (meaning higher probability of co-occurrence), the higher will be the proximity of the two vector representations in the space.

Figure 5

A measure that can be used to estimate the contextual proximity of two words' vectors $\vec{w_i}$ and $\vec{w_j}$, for any $i, j$, is the cosine similarity:

$\cos\theta = \dfrac{\vec{w_i} \cdot \vec{w_j}}{\|\vec{w_i}\|\,\|\vec{w_j}\|}$

From which:

$similarity(\vec{w_i}, \vec{w_j}) = \cos\theta = \dfrac{\vec{w_i} \cdot \vec{w_j}}{\|\vec{w_i}\|\,\|\vec{w_j}\|}$

Then, given a word, it is possible to generate a set of words contextually similar to the given word. This set, that we call "simset" in reference to the traditional synonym sets "synsets", can be defined as follows:

$simset(\vec{w_i}) = \{\, \vec{w_j} : similarity(\vec{w_i}, \vec{w_j}) > s \,\} \cup \{\vec{w_i}\}$

where $s$ is a threshold parameter between 0 and 1.
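With a gensim KeyedVectors model playing the role of the room, a simset can be approximated as in the short sketch below (our own illustration; topn simply bounds the neighborhood that is scanned).

def simset(room, word: str, s: float = 0.7, topn: int = 50):
    # Words whose cosine similarity with `word` exceeds the threshold s.
    neighbors = room.most_similar(word, topn=topn)  # list of (word, cosine similarity)
    return {word} | {w for w, sim in neighbors if sim > s}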
Using vectors to evaluate emotions
We represented Plutchik's classification of emotions (1980) with a 8x3 matrix E, where each row is an emotional channel and the columns are the intensity of each emotion: for each of the eight channels, E contains its three intensity shades (e.g., serenity, joy, and ecstasy for the joy channel; acceptance, trust, and admiration for the trust channel; and so on), for 24 emotions in total.

We can then calculate the similarity of the vector representing any word in the incoming text - also extracted as a lookup from the "room" - with each emotion. For a generic word whose vector is $\vec{w_i}$, the result is a matrix of emotions conveyed in the given word:

$emotions(\vec{w_i}) = \big[\, similarity(\vec{w_i}, \vec{e}_{r,c}) \,\big]_{r=1..8,\ c=1..3}$

As mentioned in 3.3.1 we can expand the granularity of the matching using "simsets" for the words. For a 300-dimensional embedding model (Rekabsaz, Lupu, & Hanbury, 2017), we set a similarity threshold $s = 0.7$ (out of 1), so that each word present in the simset can be associated with a weight given by its similarity with $\vec{w_i}$. As an example, if $simset(\vec{w_1}) = \{\vec{w_1}, \vec{w_2}, \vec{w_3}\}$ and we assign to the two remaining similarities random values $a, b > 0.7$:

$weights(simset(\vec{w_1})) = [\, similarity(\vec{w_1}, \vec{w_1}),\ similarity(\vec{w_1}, \vec{w_2}),\ similarity(\vec{w_1}, \vec{w_3}) \,] = [\, 1,\ a,\ b \,]$

At this point, each emotion can be rewritten by considering the simset. For example, for joy it becomes:

$joy(\vec{w_1}) = \dfrac{similarity(\vec{w_1}, \overrightarrow{joy}) + similarity(\vec{w_2}, \overrightarrow{joy}) \cdot a + similarity(\vec{w_3}, \overrightarrow{joy}) \cdot b}{|simset(\vec{w_1})|}$

For trust, it becomes:

$trust(\vec{w_1}) = \dfrac{similarity(\vec{w_1}, \overrightarrow{trust}) + similarity(\vec{w_2}, \overrightarrow{trust}) \cdot a + similarity(\vec{w_3}, \overrightarrow{trust}) \cdot b}{|simset(\vec{w_1})|}$

where $|simset(\vec{w_1})|$ is the cardinality of $simset(\vec{w_1})$, which is equal to 3 in this case. Then, the resulting emotional condition will be:

$love(\vec{w_1}) = joy(\vec{w_1}) + trust(\vec{w_1})$
The following are some examples from sample texts. Figure 6 is an example of emotional stacked bar chart ("emotional DNA") and the emotional conditions histogram for a given text ("terrorist attack paris"). Colors refer to those used in the Plutchik's wheel of emotions in Figure 1.

Figure 6

4 Case study

As a case study to validate the above methodology, we analyze the emotional reactions to Trump's tweets for different groups of population.
As different groups, we selected a potential "pro Trump" and a potential "against Trump". To create the "rooms" for two different groups of population, we collected about 1.6 million tweets published between September 2016 and November 2016 by geolocated users in the United States. Tweets in this dataset are classified into two classes: those that favor the candidate Donald Trump, and those that favor the candidate Hillary Clinton.
To classify whether a tweet is belonging to the first or second class, we used the hashtag co-occurrence method proposed by Bovet et al. (2018): the authors extract 4 sets of partisan hashtags used by Twitter users during the presidential race. These sets are: 1 set of hashtags in favor of Trump, 1 set of hashtags in favor of Clinton, 1 set of hashtags against Trump, 1 set of hashtags against Clinton. The 4 sets created with this approach are summarized in Table 2. Using this classification, we created two corpora: a "trumpers-corpus" and a "clintoners-corpus". Table 2 Entity Set Hashtags
Trumpers
Pro -Trump #trump2016, #trump16, #makeamericagreatagain, #maga, #trumppence16,#trumptrain, #presidenttrump, #makeamericasafeagain, #democratsfortrump, #vetsfortrump, #women4trump, #gays4trump, #democrats4trump, #latinos4trump, #blacks4trump, #buildthewall, #votetrump2016, #alwaystrump, #bikersfortrump, #makeamericaworkagain, #trumpiswithyou, #onlytrump, #heswithus, #trumpcares, #votegop Trumpers Anti-Clinton #neverhillary, #imnotwithher, #crookedhillary, #nevereverhillary, #nomoreclintons, #stophillary, #kilary, #clintoncrimefoundation, #hillno, #dropouthillary, #riskyhillary, #clintoncorruption, #notwithher, #hillary4jail, #deletehillary, #hillarylies, #hypocritehillary, #iwillneverstandwithher, #crookedclinton, #crookedclintons, #lyinghillary, #hillaryliesmatter, #hillaryliedpeopledied Clintoners Pro-Clinton #hillary2016, #imwithher, #strongertogether, #vote4hillary, #imwithhillary, #clinton-kaine2016, #hillarysopresidential, #hillarystrong, #uniteblue, #voteblue, #sheswithus, #votehillary, #madampresident, #yeswekaine, #welovehillary, #itrusther, #istrusthillary, #estoyconella, #repubblicans4hillary, #bluewave2016, #hillstorm2016, #hillaryforpr, #hillaryforamerica, #hillarysoqualified, hillaryforpresident Clintoners Anti-Trump #nevertrump, #dumpthetrump, #crybabytrum, #trumpthefraud, #lyingtrump, #stoptrump, #dirtydonald, #crookeddonald, #lyintrump, #nevertrumppence, #boycotttrump, #lyindonald, #lovetrumpshates, #notrumpanytime, #defeattrump, #weakdonald, #sleazydonald, #chickentrump. #loserdonald, #losertrump, #showusyourtaxes, #antitrump, #freethedelefates, #stoptrump, #traitfortrump We cleaned and pre-processed the corpora using the pipeline represented by Figure 4. The two corpora have been then used to train two Word2Vec models using a Python librarygensim -with the skip-gram algorithm (Mikolov, Sutskever, Chen, Corrado, & Dean, 2013), an embedding size equal to 300, window size equal to 5, and minimum number of word counts of 2
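A minimal sketch of this training step with recent gensim (4.x); the corpus file names are hypothetical, while the hyperparameters match those reported above.

from gensim.models import Word2Vec

def train_room(corpus_path: str) -> Word2Vec:
    sentences = [line.split() for line in open(corpus_path, encoding="utf-8")]
    return Word2Vec(sentences, sg=1, vector_size=300, window=5, min_count=2)

trumpers_room = train_room("trumpers_corpus.txt")      # hypothetical file
clintoners_room = train_room("clintoners_corpus.txt")  # hypothetical file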
To analyze how the two groups perceive emotions, we used a dataset containing all tweets published by @real-DonaldTrump, official account of Donald J. Trump. For each tweet, the emotions perceived by the two groups were calculated using the two different Word2Vec models trained with the partisan textual contents ("trumpers-corpus" and "clintoners-corpus").
Tweets by @realDonaldTrump Twitter account have been cleaned and pre-processed, same as for tweets from the groups of supporters. Figure 7 shows the comparison between emotions for the two groups, where red is for trumpers and blue for clintoners. The lighter is the cloud the higher is the number of datapoints concentrated in that area. The difference in mean values of emotions expresses the emotional subjectivity To better evaluate the results, they have been clustered, using a measure of emotional polarization (Primario, Borrelli, Iandoli, Zollo, & Lipizzi, 2017; Morales, Borondo, Losada, & Benito, 2015) for each emotional channel. We calculated the polarization P that takes into account the distance between the two peaks of distributions (of both supporters' group) and their population in terms of number of tweets published by each group of supporters:
$P = \left( 1 - \dfrac{|N(T) - N(C)|}{N(T) + N(C)} \right) \cdot d$

where $d = |E(T) - E(C)|$ is the absolute value of the difference between the average for the emotion E perceived by T (trumpers) and the average for the same emotion E perceived by the population C (clintoners), and $N(g)$ is the total number of tweets of a group $g$.
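A direct translation of this measure into code (a sketch with our own variable names):

def polarization(mean_T: float, mean_C: float, n_T: int, n_C: int) -> float:
    # Emotional polarization P for one channel: the peak distance d weighted
    # by how balanced the two groups' tweet volumes are.
    d = abs(mean_T - mean_C)
    balance = 1.0 - abs(n_T - n_C) / (n_T + n_C)
    return balance * d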
Emotions inducing a polarizing behavior are the first three in Table 3. The table aggregates results and includes values of d and P. Figure 8 and Figure 9 show the distribution of values for the different clusters and emotions. On the left graphs, the lighter the cloud, the higher the number of datapoints concentrated in that area. On the right side, the same emotions have been visualized using pair plots, which show red distributions for "trumpers" and blue distributions for "clintoners". Figure 8 contains the results for cluster A. In fact, the intensity of Trust, which is the most polarizing emotion (0.21), is biased towards the trumpers group, as we are analyzing Trump's tweets. On the contrary, Fear and Anger are biased towards the clintoners group. Figure 9 contains the results for cluster B, composed by emotions that seem to be less polarizing than the emotions in cluster A. Those are more generic categories of emotions, like Joy and Sadness, but also some unexpected emotions such as Disgust. In particular, Disgust has the highest intensity for both communities of supporters, with a slight advantage for clintoners (+0.02). Amazement and Interest have low values of emotional polarization (0.03 and 0.01) but both are biased towards trumpers in terms of intensity, which still confirms our hypothesis.
Discussion
We selected and classified two different categories of users' text to create two different perspectives from which our methodology measures emotions. We found that, by training two different Word2Vec models on two differently biased political corpora, we obtain a distinction in the intensity of emotions that confirms our hypothesis of emotional subjectivity. In our experiment, the distinction is particularly significant for certain types of emotions such as Anger, Fear, and Trust. In addition, the choice of analyzing the text of @realDonaldTrump tweets helped us validate the correctness of our method: emotions such as Trust are perceived with greater intensity by the trumpers group, while emotions such as Anger have a higher intensity for the clintoners. Anger, Trust, and Fear show signals of polarization.
Conclusion
In this paper, we presented a novel approach to detect subjectivity in text corpora and applied it to one of the most subjective areas of human life, that is emotion perception.
Unlike other studies that provide a general method for measuring emotions in a text, we contribute an approach that adopts the point of view of the social entity (individual or collective) that reads the text. Our approach is essentially different from existing work, both from a methodological and from an interpretative perspective. The proposed method has generated interesting results on the case study: using two groups of individuals who interact online and hold different political ideas, we found that our method produces emotional differences between the two groups, and that these differences are oriented towards the political bias of the analyzed text, thus validating our method.
The application to emotion detection can be relevant in marketing, finance, politics, psychology and social science studies, providing elements to understand people's reactions to text beyond traditional sentiment analysis.
We used the "room theory" as represented by Figure 3 with different benchmarks to extract numerical values from incoming documents.
In particular, we used it to create a Decision Support System that helps decision makers determine the most appropriate investments in technologies in order to better compete with their competitors. The source of information was streams of news, patents, and papers related to technology, and the risk was calculated using the decision maker's "room". We then created a risk panel that can be visualized in an interactive mode.
We also used the same approach to determine emerging and upcoming technologies, using a list of current technologies as the benchmark, papers, news, patents, and blogs as incoming documents, and calculating distances using a "room" created from text related to technologies.
Another application was determining the most appropriate contract type for given purchasing requests. In this case, the "room" was created from a large corpus representing the contracting officer's knowledge base, and the benchmark was the list of characteristics of the different contract types.
This method could be applied to all the cases where evaluating subjectivity is relevant to understand the relative value or meaning of a text, such as emotion, sentiment, and opinion mining, language translation, text summarization, topic labeling, amongst others. Subjectivity is not limited to human reactions, but it could be used to provide a text with an interpretation related to a given domain.
Machine learning (ML) techniques are commonly used for subjective analysis, in particular for the detection of opinion (Jimenez-Marques, Gonzalez-Carrasco, Lopez-Cuadrado, & Ruiz-Mezcua, 2019) and sentiment (Pinto & Murari, 2019). Goularte et al. (2019) used fuzzy rules to improve previous text summarization methods. Another study (Li et al., 2019) focuses on subjective queries and databases. Wu et al. (2019) created an algorithm to deal with subjectivity in crowdsourced label aggregation problems. Finally, a study from 2006 (Lin, Wilson, Wiebe, & Hauptmann) highlighted the need for a perspective analysis when detecting subjectivity in text. This line of study became known as stance detection and is commonly used in opinion mining to identify whether the author is in favor of or against the object being analyzed (D'Andrea, Ducange, Bechini, Renda, & Marcelloni, 2019).
Figure 2
Figure 3
where the dimension is equal to the embedding size used to build the Word2Vec model; the list of non-unique words can then be written as a list of non-unique vectors, each consisting of as many components as the embedding size.
Each word in the benchmark matrix can be represented as a vector using Word2Vec, generating 24 vectors whose dimension equals the embedding size. The values of the vectors are extracted as a lookup from the "room".
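A sketch of this lookup step is shown below, assuming a trained gensim model (the "room") and a list of benchmark emotion words: each benchmark word is mapped to its vector in that room, and a word absent from the room simply has no representation there. Scoring a tweet against an emotion as an average cosine similarity is an assumption made only for this illustration; the exact aggregation used in the paper may differ.

```python
import numpy as np

def benchmark_vectors(room, benchmark_words):
    """Look up each benchmark word in the given 'room' (a trained Word2Vec model)."""
    vectors = {}
    for word in benchmark_words:
        if word in room.wv:            # skip words absent from this room
            vectors[word] = room.wv[word]
    return vectors

def emotion_intensity(room, tweet_tokens, emotion_words):
    """Average similarity between a tweet and one emotion's benchmark words."""
    sims = [
        room.wv.similarity(tok, ew)
        for tok in tweet_tokens if tok in room.wv
        for ew in emotion_words if ew in room.wv
    ]
    return float(np.mean(sims)) if sims else 0.0
```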
Figure 7
Figure 8
Figure 9
Leveraging theories from Psychology (Plutchik, 1980), Cognitive Sciences (Minsky, 1974), Social Sciences (Sherif & Hovland, 1961), and recent text mining approaches (Mikolov, Sutskever, Chen, Corrado, & Dean, 2013), we have built a framework able to:
o Create a point of view ("room") to analyze a text.
o Measure the basic emotions perceived using that point of view.
Table 1 (Aman, 2007) contains a recap of the different classifications.

Table 1
Tomkins    Izard      Plutchik    Ortony     Ekman
Joy        Enjoyment  Joy         Joy        Happiness
Anguish    Sadness    Sorrow      Sadness    Sadness
Fear       Fear       Fear        Fear       Fear
Anger      Anger      Anger       Anger      Anger
Disgust    Disgust    Disgust     Disgust    Disgust
Surprise   Surprise   Surprise    Surprise   Surprise
Interest   Interest   Acceptance
Table 3

Emotion     Trumpers    Clintoners   d          P
Trust       0.256522    0.028836     0.222219   0.217215
Fear        0.068418    0.238126     0.165070   0.161353
Anger       0.041604    0.155071     0.114431   0.111854
Amazement   0.059374    0.030679     0.033577   0.032821
Disgust     0.516265    0.536175     0.010235   0.010004
Interest    0.049609    0.033509     0.017101   0.016716
Joy         0.048026    0.036052     0.004957   0.004845
Sadness     0.046214    0.037204     0.010740   0.010498

Using this metric, we considered the following two clusters of emotions:
A = {Trust, Fear, Anger}
B = {Amazement, Disgust, Interest, Joy, Sadness}
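For illustration, the cluster split can be reproduced directly from the P values in Table 3 by sorting the emotions by polarization; the cut-off of roughly 0.1 separating cluster A from cluster B is an assumption made only for this sketch.

```python
P = {
    "Trust": 0.217215, "Fear": 0.161353, "Anger": 0.111854,
    "Amazement": 0.032821, "Disgust": 0.010004, "Interest": 0.016716,
    "Joy": 0.004845, "Sadness": 0.010498,
}

threshold = 0.1  # illustrative cut-off between the two clusters
A = {e for e, p in P.items() if p >= threshold}   # {'Trust', 'Fear', 'Anger'}
B = set(P) - A                                    # remaining five emotions
print(sorted(A), sorted(B))
```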
Emotions from text: machine learning for text-based emotion prediction. C O Alm, D Roth, R Sproat, Association for Computational LinguisticsAlm, C. O., Roth, D., & Sproat, R. (2005, October). Emotions from text: machine learning for text-based emotion prediction. Association for Computational Linguistics, 579-586.
Recognizing emotions in text. S Aman, Masters Abstracts International. 463Aman, S. (2007). Recognizing emotions in text. Masters Abstracts International, 46(3).
Discrete emotions or dimensions? The role of valence focus and arousal focus. L F Barrett, Cognition & Emotion. 124Barrett, L. F. (1988). Discrete emotions or dimensions? The role of valence focus and arousal focus. Cognition & Emotion, 12(4), 579-599.
Emotion categories across languages. Handbook of categorization in cognitive science. J S Boster, Boster, J. S. (2005). Emotion categories across languages. Handbook of categorization in cognitive science, 187-222.
Validation of Twitter opinion trends with national polling aggregates: Hillary Clinton vs Donald Trump. A Bovet, F Morone, H Makse, Scientific reports. 81Bovet, A., Morone, F., & Makse, H. (2018). Validation of Twitter opinion trends with national polling aggregates: Hillary Clinton vs Donald Trump. Scientific reports, 8(1), 1-16.
The Human-Computer Interaction Handbook. S Brave, C Nass, J. Jacko, & A. (. SearsLawrence Erlbaum AssociatesMahwah, New JerseyEmotion in human-computer interactionBrave, S., & Nass, C. (2003). Emotion in human-computer interaction. In J. Jacko, & A. (. Sears, The Human-Computer Interaction Handbook (pp. 103-118). Mahwah, New Jersey: Lawrence Erlbaum Associates.
A dictionary of critical theory. I Buchanan, OUPOxfordBuchanan, I. (2010). A dictionary of critical theory. OUP Oxford.
New avenues in opinion mining and sentiment analysis. E Cambria, B Schuller, Y Xia, C Havasi, IEEE Intelligent systems. 282Cambria, E., Schuller, B., Xia, Y., & Havasi, C. (2013). New avenues in opinion mining and sentiment analysis. IEEE Intelligent systems, 28(2), 15-21.
The psychological foundations of the affective lexicon. G L Clore, A Ortony, M Foss, Journal of personality and social psychology. 534751Clore, G. L., Ortony, A., & Foss, M. (1987). The psychological foundations of the affective lexicon. Journal of personality and social psychology, 53(4), 751.
Descartes' error and the future of human life. A R Damasio, Scientific American. 2714144Damasio, A. R. (1994). Descartes' error and the future of human life. Scientific American, 271(4), 144.
The Strange Order of Things: Life, Feeling, and the Making of Cultures. A R Damasio, VintageNew York, NYDamasio, A. R. (2018). The Strange Order of Things: Life, Feeling, and the Making of Cultures. New York, NY: Vintage.
Monitoring the public opinion about the vaccination topic from tweets analysis. E D'andrea, P Ducange, A Bechini, A Renda, F Marcelloni, Expert Systems with Applications. 116D'Andrea, E., Ducange, P., Bechini, A., Renda, A., & Marcelloni, F. (2019). Monitoring the public opinion about the vaccination topic from tweets analysis. Expert Systems with Applications, 116, 209-226.
Deep learning for aspect-based sentiment analysis: a comparative review. H H Do, P Prasad, A Maag, A Alsadoon, Expert Systems with Applications. 118Do, H. H., Prasad, P., Maag, A., & Alsadoon, A. (2019). Deep learning for aspect-based sentiment analysis: a comparative review. Expert Systems with Applications, 118, 272-299.
Sentiment analysis leveraging emotions and word embeddings. M Giatsoglou, M Vozalis, K Diamantaras, A Vakali, G Sarigiannidis, K Chatzisavvas, Expert Systems with Applications. 69Giatsoglou, M., Vozalis, M., Diamantaras, K., Vakali, A., Sarigiannidis, G., & Chatzisavvas, K. (2017). Sentiment analysis leveraging emotions and word embeddings. Expert Systems with Applications, 69, 214-224.
A text summarization method based on fuzzy rules and applicable to automated assessment. B F Gourlarte, S M Nassar, R Fileto, H Saggion, Expert Systems With Applications. 115Gourlarte, B. F., Nassar, S. M., Fileto, R., & Saggion, H. (2019). A text summarization method based on fuzzy rules and applicable to automated assessment. Expert Systems With Applications, 115, 264-275.
R Harré, The social construction of emotions. OxfordBlackwell42Harré, R. (Ed.). (1986). The social construction of emotions (Vol. 42). Oxford: Blackwell.
W James, The principles of psychology. LondonMacmillan1James, W. (1890). The principles of psychology (Vol. 1). London: Macmillan.
Towards a biga data framework for analyzing social media content. J Jimenez-Marques, I Gonzalez-Carrasco, J Lopez-Cuadrado, B Ruiz-Mezcua, International Journal of Information Management. 44Jimenez-Marques, J., Gonzalez-Carrasco, I., Lopez-Cuadrado, J., & Ruiz-Mezcua, B. (2019). Towards a biga data framework for analyzing social media content. International Journal of Information Management, 44, 1-12.
Sentiment classification of movie reviews using contextual valence shifters. A Kennedy, D Inkpen, Computational intelligence. 222Kennedy, A., & Inkpen, D. (2006). Sentiment classification of movie reviews using contextual valence shifters. Computational intelligence, 22(2), 110-125.
The function and field of speech and language in psychoanalysis. J Lacan, Écrits: A selection. Lacan, J. (1977). The function and field of speech and language in psychoanalysis. In Écrits: A selection (pp. 30-113).
Distributed representations of sentences and documents. Q Le, T Mikolov, International conference on machine learning. Le, Q., & Mikolov, T. (2014). Distributed representations of sentences and documents. International conference on machine learning, 1188-1196.
Y Li, A Feng, J Li, S Mumick, A Halevy, V Li, W.-C Tan, Subjective Databases. Proceedings of the VLDB Endowment. 12Li , Y., Feng, A., Li, J., Mumick, S., Halevy, A., Li, V., & Tan, W.-C. (2019). Subjective Databases. Proceedings of the VLDB Endowment, 12(11), 1330-1343.
Text-based emotion classification using emotion cause extraction. W Li, H Xu, Expert Systems with Applications. 414Li, W., & Xu, H. (2014). Text-based emotion classification using emotion cause extraction. Expert Systems with Applications, 41(4), 1742-1749.
Which side are you on? Identifying perspectives at the document and sentence levels. W.-H Lin, T Wilson, J Wiebe, A Hauptmann, Proceedings of the 10th Conference on Computational Natural Language Learning. the 10th Conference on Computational Natural Language LearningLin, W.-H., Wilson, T., Wiebe, J., & Hauptmann, A. (2006, June). Which side are you on? Identifying perspectives at the document and sentence levels. Proceedings of the 10th Conference on Computational Natural Language Learning (CoNLL-X), 109-116.
Synthesis lectures on human language technologies. B Liu, 5Sentiment analysis and opinion miningLiu, B. (2012). Sentiment analysis and opinion mining. Synthesis lectures on human language technologies, 5(1), 1-167.
What would they think?: a computational model of attitudes. H Liu, P Maes, Proceedings of the 9th international conference on Intelligent user interfaces. the 9th international conference on Intelligent user interfacesLiu, H., & Maes, P. (2004). What would they think?: a computational model of attitudes. Proceedings of the 9th international conference on Intelligent user interfaces, 38-45.
A model of textual affect sensing using real-world knowledge. H Liu, H Lieberman, T Selker, Proceedings of the 8th international conference on Intelligent user interfaces. the 8th international conference on Intelligent user interfacesLiu, H., Lieberman, H., & Selker, T. (2003, January). A model of textual affect sensing using real-world knowledge. Proceedings of the 8th international conference on Intelligent user interfaces, 125-132.
Identifying individual expectations in service recovery through natural language processing and machine learning. Y Liu, Y Wan, X Su, Expert Systems with Applications. 1311Liu, Y., Wan, Y., & Su, X. (2019, October). Identifying individual expectations in service recovery through natural language processing and machine learning. Expert Systems with Applications, 131(1), 288-298.
. T M Luhrmann, Subjectivity. Anthropological Theory. 63Luhrmann, T. M. (2006). Subjectivity. Anthropological Theory, 6(3), 345-361.
Emotions in collectivist and individualist contexts. B Mesquita, Journal of personality and social psychology. 80168Mesquita, B. (2001). Emotions in collectivist and individualist contexts. Journal of personality and social psychology, 80(1), 68.
Distributed representations of words and phrases and their compositionality. T Mikolov, I Sutskever, K Chen, G Corrado, J Dean, Advances in neural information processing systems. Mikolov, T., Sutskever, I., Chen, K., Corrado, G., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems, 3111-3119.
Smartphonebased conversational agents and responses to questions about mental health, interpersonal violence, and physical health. A S Miner, A Milstein, S Schueller, R Hegde, C Mangurian, E Linos, JAMA internal medicine. 1765Miner, A. S., Milstein, A., Schueller, S., Hegde, R., Mangurian, C., & Linos, E. (2016). Smartphone- based conversational agents and responses to questions about mental health, interpersonal violence, and physical health. JAMA internal medicine, 176(5), 619-625.
A framework for representing knowledge. M Minsky, Minsky, M. (1974). A framework for representing knowledge.
S M Mohammad, F Bravo-Marquez, arXiv:1708.03700WASSA-2017 shared task on emotion intensity. arXiv preprintMohammad, S. M., & Bravo-Marquez, F. (2017). WASSA-2017 shared task on emotion intensity. arXiv preprint arXiv:1708.03700.
Measuring political polarization: Twitter shows the two sides of Venezuela. A Morales, J Borondo, J Losada, R Benito, Chaos: An Interdisciplinary Journal of Nonlinear Science. 25333114Morales, A., Borondo, J., Losada, J., & Benito, R. (2015). Measuring political polarization: Twitter shows the two sides of Venezuela. Chaos: An Interdisciplinary Journal of Nonlinear Science, 25(3), 033114.
Textual affect sensing for sociable and expressive online communication. A Neviarouskaya, H Prendinger, M Ishizuka, International Conference on Affective Computing and Intelligent Interaction. Neviarouskaya, A., Prendinger, H., & Ishizuka, M. (2007). Textual affect sensing for sociable and expressive online communication. International Conference on Affective Computing and Intelligent Interaction, 218-229.
A Ortony, G Clore, A Collins, The Cognitive Structure of Emotion. Cambridge University PressOrtony, A., Clore , G., & Collins, A. (1988). The Cognitive Structure of Emotion. Cambridge University Press.
Opinion mining and sentiment analysis. Foundations and Trends® in Information Retrieval. B Pang, L Lee, 2Pang, B., & Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends® in Information Retrieval, 2(1-2), 1-135.
A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. B Pang, L Lee, Proceedings of the 42nd annual meeting on Association for Computational Linguistics. the 42nd annual meeting on Association for Computational Linguistics271Pang, B., & Lee, L. (2014). A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. Proceedings of the 42nd annual meeting on Association for Computational Linguistics, 271.
Affective computing. R W Picard, MIT pressPicard, R. W. (1997). Affective computing. MIT press.
Real Time Sentiment Analysis of Political Twitter Data Using Machine Learning Approach. J P Pinto, V Murari, International Research Journal of Engineering and Technology. 644124Pinto, J. P., & Murari, V. (2019, April). Real Time Sentiment Analysis of Political Twitter Data Using Machine Learning Approach. International Research Journal of Engineering and Technology, 6(4), 4124.
A general psychoevolutionary theory of emotion. R Plutchik, Theories of Emotion. R. Plutchik, & H. (. KellermanNew YorkAcademic Press1Plutchik, R. (1980). A general psychoevolutionary theory of emotion. In R. Plutchik, & H. (. Kellerman, Emotion: Theory, Research, and Experience, Volume 1: Theories of Emotion. New York: Academic Press.
L Polanyi, A Zaenen, Contextual valence shifters. Computing attitude and affect in text: Theory and applications. Polanyi, L., & Zaenen, A. (2006). Contextual valence shifters. Computing attitude and affect in text: Theory and applications, 1-10.
Measuring Polarization in Twitter Enabled in Online Political Conversation: The Case of 2016 US Presidential Election. S Primario, D Borrelli, L Iandoli, G Zollo, C Lipizzi, Primario, S., Borrelli, D., Iandoli, L., Zollo, G., & Lipizzi, C. (2017). Measuring Polarization in Twitter Enabled in Online Political Conversation: The Case of 2016 US Presidential Election. 2017 IEEE International Conference on Information Reuse and Integration (IRI), 607-613.
Foucault's discipline: The politics of subjectivity. J S Ransom, Duke University PressRansom, J. S. (1997). Foucault's discipline: The politics of subjectivity. Duke University Press.
Exploration of a threshold for similarity based on uncertainty in word embedding. N Rekabsaz, M Lupu, A Hanbury, European Conference on Information Retrieval. Rekabsaz, N., Lupu, M., & Hanbury, A. (2017, April). Exploration of a threshold for similarity based on uncertainty in word embedding. European Conference on Information Retrieval, 396-409.
Psychological models of emotion. K R Scherer, The neuropsychology of emotion. 137Scherer, K. R. (2000). Psychological models of emotion. In The neuropsychology of emotion (Vol. 137, pp. 137-162).
Structures of feeling: Affectivity and the study of culture. Sharma, D., & Tygstrup, F.Sharma, D., & Tygstrup, F. (Eds.). (2015). Structures of feeling: Affectivity and the study of culture (Vol.
Social judgment: Assimilation and contrast effects in communication and attitude change. M Sherif, C Hovland, Yale University PressSherif, M., & Hovland, C. (1961). Social judgment: Assimilation and contrast effects in communication and attitude change. Yale University Press.
S N Shivhare, S Khethawat, arXiv:1205.4944Emotion detection from text. arXiv preprintShivhare, S. N., & Khethawat, S. (2012). Emotion detection from text. arXiv preprint arXiv:1205.4944.
Science and human behavior. B F Skinner, Simon and SchusterSkinner, B. F. (1965). Science and human behavior . Simon and Schuster.
Distant supervision for emotion classification with discrete binary values. J Suttles, N Ide, International Conference on Intelligent Text Processing and Computational Linguistics. Suttles, J., & Ide, N. (2013). Distant supervision for emotion classification with discrete binary values. International Conference on Intelligent Text Processing and Computational Linguistics, 121- 136.
Sentiment strength detection in short informal text. M Thelwall, K Buckley, G Paltoglou, D Cai, A Kappas, Journal of the American society for information science and technology. 12Thelwall, M., Buckley, K., Paltoglou, G., Cai, D., & Kappas, A. (2010). Sentiment strength detection in short informal text. Journal of the American society for information science and technology, 61(12), 2544-2558.
A schematic outline of the emotions. J B Watson, Psychological Review. 263165Watson, J. B. (1919). A schematic outline of the emotions. Psychological Review, 26(3), 165.
Structures of feeling. R Williams, Marxism and literature. 1Williams, R. (1977). Structures of feeling. Marxism and literature, 1, 128-135.
A subjectivity-aware algorithm for label aggregation in crowdsourcing. M Wu, Q Li, S Wang, J Hou, IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC). Wu, M., Li, Q., Wang, S., & Hou, J. (2019, August). A subjectivity-aware algorithm for label aggregation in crowdsourcing. 2019 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC), 373-378.
W M Wundt, Outlines of psychology. LeipzigWilhelm Engelmann1Wundt, W. M. (1897). Outlines of psychology (Vol. 1). Leipzig: Wilhelm Engelmann.
Instructional video in e-learning: Assessing the impact of interactive video on learning effectiveness. D Zhang, L Zhou, R Briggs, J NunamakerJr, Information & management. 431Zhang, D., Zhou, L., Briggs, R., & Nunamaker Jr., J. (2006). Instructional video in e-learning: Assessing the impact of interactive video on learning effectiveness. Information & management, 43(1), 15-27.
| [] |
[
"Which Learning Algorithms Can Generalize Identity-Based Rules to Novel Inputs?",
"Which Learning Algorithms Can Generalize Identity-Based Rules to Novel Inputs?"
] | [
"Paul Tupper \nDepartment of Mathematics\nDepartment of Computer Science\nSimon Fraser University Burnaby\nV5A 1S6BCCanada\n",
"Bobak Shahriari \nUniversity of British Columbia Vancouver\nV6T 1Z4BCCanada\n"
] | [
"Department of Mathematics\nDepartment of Computer Science\nSimon Fraser University Burnaby\nV5A 1S6BCCanada",
"University of British Columbia Vancouver\nV6T 1Z4BCCanada"
] | [] | We propose a novel framework for the analysis of learning algorithms that allows us to say when such algorithms can and cannot generalize certain patterns from training data to test data. In particular we focus on situations where the rule that must be learned concerns two components of a stimulus being identical. We call such a basis for discrimination an identitybased rule. Identity-based rules have proven to be difficult or impossible for certain types of learning algorithms to acquire from limited datasets. This is in contrast to human behaviour on similar tasks. Here we provide a framework for rigorously establishing which learning algorithms will fail at generalizing identity-based rules to novel stimuli. We use this framework to show that such algorithms are unable to generalize identitybased rules to novel inputs unless trained on virtually all possible inputs. We demonstrate these results computationally with a multilayer feedforward neural network. | null | [
"https://arxiv.org/pdf/1605.04002v1.pdf"
] | 6,825,888 | 1605.04002 | 5c29337d1b4d8561bd6b5d8f991ca3ed4c71dc1f |
Which Learning Algorithms Can Generalize Identity-Based Rules to Novel Inputs?
Paul Tupper
Department of Mathematics
Department of Computer Science
Simon Fraser University Burnaby
V5A 1S6BCCanada
Bobak Shahriari
University of British Columbia Vancouver
V6T 1Z4BCCanada
Which Learning Algorithms Can Generalize Identity-Based Rules to Novel Inputs?
phonology, learning algorithms, symmetries, connectionism
We propose a novel framework for the analysis of learning algorithms that allows us to say when such algorithms can and cannot generalize certain patterns from training data to test data. In particular we focus on situations where the rule that must be learned concerns two components of a stimulus being identical. We call such a basis for discrimination an identitybased rule. Identity-based rules have proven to be difficult or impossible for certain types of learning algorithms to acquire from limited datasets. This is in contrast to human behaviour on similar tasks. Here we provide a framework for rigorously establishing which learning algorithms will fail at generalizing identity-based rules to novel stimuli. We use this framework to show that such algorithms are unable to generalize identitybased rules to novel inputs unless trained on virtually all possible inputs. We demonstrate these results computationally with a multilayer feedforward neural network.
Introduction
Suppose a subject is asked to learn an artificial language in which all words consist of two letters. They are told that CC, AA, HH, EE, and RR are all examples of valid words in the language but that GA, EH, RA, ER, MG are not valid words. Now suppose that the learner is asked whether YY and YZ could be valid words in the language. Presumably they will say that YY could be a valid word in the language whereas YZ could not be. The obvious feature that all the valid words have in common is that they consist of two identical letters. This feature is not shared by the invalid words. We say in this case that the learners have learned an identity-based rule, and are able to generalize the rule to novel inputs.
We do not know if this exact experiment has ever been performed, but there have been analogous tests in the phonological domain (Berent, Marcus, Shimron, & Gafos, 2002;Gallagher, 2013). In artificial language learning tasks, human subjects are sensitive to identity relations between segments, and are able to generalize them to novel inputs. This kind of effect is not specific to language though: consider a task where subjects are presented with pictures of pairs of socks, and are asked to say whether they form a matching pair.
Surprisingly, given how obvious the above pattern is to human learners, many computer models of learning are not able to learn identity-based rules like those implicit in the data above, without being presented with nearly all possible inputs. These computational learners may give the same rating to both the forms YY and YZ, since neither of them have any similarity to the training words in a manner that is deemed relevant by the algorithms. Important classes of such algorithms include basic connectionist algorithms (Rumelhart & McClelland, 1988) and the "Plain" (Baseline version) of the UCLA Phonotactic Learner (Hayes & Wilson, 2008). There are ways to modify these algorithms to perform better on such tasks, for example, by introducing copying (Colavin, Levy, & Rose, 2010), special representations of identical segments in the input (Gallagher, 2013), or weight sharing across connections as is done in convolutional neural networks (LeCun & Bengio, 1995).
There are many informal arguments given for why the basic versions of these algorithms cannot learn identity-based rules. Such algorithms are unable to generalize "outside the training space" (Marcus, 2003), or "do not instantiate variables" (Berent, 2012). Though these terms describe a genuine limitation of such algorithms, they suffer the drawback of not being defined formally. Even though computational learners themselves are clearly defined, whether a particular algorithm is able to learn identity relations or instantiate variables is impossible to determine precisely since the criterion for these conditions is not formalized. Our present goal is to provide a rigorous framework for these informal statements about algorithms, and to provide criteria for when an algorithm cannot generalize identity-based rules to novel inputs.
In the following we define learning algorithms, symmetries of sets of words, and what it means for an algorithm to be invariant under a symmetry. In our main result we show that if an algorithm is invariant under some symmetry, and the training data is invariant under the same symmetry, then the algorithm cannot learn a grammar that is not invariant under that symmetry. As an application, we demonstrate a symmetry that identity-based rules are not invariant under, and then show that a wide class of algorithms are invariant under it. This means that such algorithms cannot learn identity-based grammars with invariant training data, in contrast to human performance on analogous tasks. We then demonstrate how feed-forward neural networks suffer from these limitations, independent of the number of hidden layers in the network.
Formal Definitions
We consider a set W , which we call the set of words, containing all well-formed inputs. We stress that in the linguistic case W consists of both words that are good (grammatical) and words that are bad (ungrammatical). In what follows we will consider words to be strings of letters, but individual words can be anything, such as strings of segments or feature bundles.
To fix ideas, in what follows we will often consider a particular example of a set of words: we let W be the set of all two letter words, where the letters are capitals taken from the English alphabet, such as AA or MG.
We define the training data D to be a collection of word-rating pairs ⟨w, r⟩ where w is a word in W and r is a number interpreted as a rating of how "good" w is. For example, using the word set W , a dataset D might consist of
⟨CC, 1⟩, ⟨AA, 1⟩, ⟨EE, 1⟩, ⟨GA, 0⟩, ⟨EH, 0⟩, ⟨RA, 0⟩.   (1)
This dataset says that CC, AA, and EE have rating 1 (and thus are good words) and that GA, EH, and RA have rating 0 (and thus are bad words). Alternatively, in a training task where only good words are given to the learner, D might consist only of good words paired with the rating 1. But there are other possibilities: words could be paired with a rating given by their prevalence in a corpus, for example.
To formally define a learning algorithm, consider what a learning algorithm such as the UCLA Phonotactic Learner (Hayes & Wilson, 2008) does. First, a collection of data D is input to the algorithm and used to choose a set of parameters p in a model. We can formalize this as p = A (D). Once we have p, given any new input w the algorithm outputs a score, which we can formalize as f (p, w). Typically, the computation of p from D is computationally intensive whereas once we have p, the score f (p, w) is cheap to evaluate. This matches our experience of human behaviour where learning a language occurs over long periods of time, whereas judgements of the well-formedness of novel words are readily produced by adult speakers.
Here we will abstract away issues of parameter setting and computational effort and just view an algorithm as a map that takes a set of training data D and an input w and outputs a rating. We consider the map L given by
L (D, w) = f (A (D), w).
Specifically, a learning algorithm L is a map that takes training data D and word w and outputs a score L (D, w). The interpretation is that this score is what you would get if you used the data D to train the algorithm and then used the resulting computational model to evaluate the word w.
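This abstraction can be phrased directly in code: a learner is a training routine A paired with a scoring function f, and L simply composes them. The toy learner below (a memorizer that scores a new word by the ratings of training words sharing a letter with it) is only a placeholder to make the types concrete; it is not any algorithm discussed in the paper.

```python
from typing import Callable, List, Tuple

Word = str
Data = List[Tuple[Word, float]]   # word-rating pairs <w, r>

def make_learner(train: Callable[[Data], object],
                 score: Callable[[object, Word], float]):
    """Return L(D, w) = f(A(D), w) for a given pair (A, f)."""
    def L(D: Data, w: Word) -> float:
        p = train(D)          # p = A(D): (expensive) parameter fitting
        return score(p, w)    # f(p, w): cheap per-word evaluation
    return L

# Toy placeholder learner used only to illustrate the interface.
def toy_train(D: Data) -> Data:
    return D

def toy_score(p: Data, w: Word) -> float:
    matches = [r for (v, r) in p if set(v) & set(w)]
    return sum(matches) / len(matches) if matches else 0.0

L = make_learner(toy_train, toy_score)
```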
We note that we interpret both the ratings coupled with words in the training data D and the scores output by the algorithm L as measures of the goodness of a word. This is natural, since we expect the algorithm to give good scores to words that have high ratings in the training data. However, ratings and scores are distinct in general; for example, ratings in D could be how common a word is in a corpus and scores from L could be intended to model how well-formed a word is on a scale from 0 to 1.
We define a symmetry σ to be a bijective map from the set of words W to itself; in other words, a map such that σ(w) is in W for all w in W , and for all v in W there is an u in W such that σ(u) = v. As an example of a symmetry, let σ̃ be the map from W to itself given by
σ̃(XY) = YX,   (2)
for any letters X, Y. Thus the symmetry σ̃ reverses the order of letters in two-letter words. We introduce symmetries in order to analyze algorithms: we are not claiming that they have any psychological or linguistic reality. Indeed, as far as we know all maps that are naturally occurring phonological processes are not symmetries. For one thing, most phonological maps satisfy σ(σ(x)) = σ(x) for all x (also known as idempotency (Magri, 2015)). But this can only happen with a symmetry if σ(x) = x for all x, meaning that σ does nothing.
A word w is invariant under a symmetry σ if σ(w) = w. To apply a symmetry to a set of training data, we say that σ just acts on each word in every word-rating pair in the data set, but does not change the rating of that word. So if the word-rating pair ⟨w, r⟩ is in D, then the pair ⟨σ(w), r⟩ is in σ(D). For example, if we applied σ̃ (as defined in (2)) to the dataset in (1) we would get the dataset
⟨CC, 1⟩, ⟨AA, 1⟩, ⟨EE, 1⟩, ⟨AG, 0⟩, ⟨HE, 0⟩, ⟨AR, 0⟩.
We say that a dataset D is invariant under a symmetry σ if σ(D) has precisely the same word-rating pairs as D. The simplest way for data D to be invariant under a symmetry σ is if each word in each word-rating pair in D is invariant under σ. But there are other ways. For example, the symmetry σ̃ leaves the data ⟨BB, 1⟩, ⟨GG, 2⟩, ⟨EE, 0⟩ invariant because the words BB, GG, EE are all invariant under σ̃. On the other hand, the data ⟨BG, 1⟩, ⟨GB, 1⟩, ⟨EA, 2⟩, ⟨AE, 2⟩ is also invariant under σ̃, but in this case the individual words are not invariant; it is just that w and σ̃(w) always have the same rating in this data set.
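The letter-reversing symmetry σ̃ and the dataset-invariance check can be written out directly, following the definitions above; the examples in the assertions are the two invariant datasets just discussed.

```python
def sigma_tilde(word: str) -> str:
    """Reverse the two letters of a word: sigma_tilde('XY') = 'YX'."""
    return word[::-1]

def apply_symmetry(D, sigma):
    """Apply a symmetry to every word in the data, keeping ratings fixed."""
    return {(sigma(w), r) for (w, r) in D}

def is_invariant(D, sigma) -> bool:
    """D is invariant under sigma if sigma(D) has exactly the same pairs."""
    return apply_symmetry(D, sigma) == set(D)

D1 = {("BB", 1), ("GG", 2), ("EE", 0)}             # invariant: words are fixed points
D2 = {("BG", 1), ("GB", 1), ("EA", 2), ("AE", 2)}  # invariant: w and sigma(w) share ratings
assert is_invariant(D1, sigma_tilde) and is_invariant(D2, sigma_tilde)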
We say an algorithm L is invariant under σ if L (σ (D), σ (w)) = L (D, w) for all D and w. In words, the rating that the algorithm gives to w when trained on D is the same that the algorithm gives to σ (w) when trained on σ (D).
Our main result is a simple consequence of these definitions.
Theorem 1 If algorithm L and training data D are invariant under symmetry σ then L (D, w) = L (D, σ (w)), for all w in W . In other words, the algorithm L gives the same rating to w and σ (w) when trained on D.
Proof. We have
L (D, w) = L (σ (D), σ (w)) = L (D, σ (w))
where the first equality follows from the invariance of L under σ, and the second equality follows from the invariance of D under σ.
Example: Consider a language containing 10 letters, each letter having a sonority value between 1 and 5 according to the following table. (Sonority is an abstract phonological variable, roughly corresponding to how close a segment is to a vowel.) Words in the language consist of only two letters. Suppose that all words in the language have increasing or constant sonority. So, BA, MO, ZW, BD could all be words in the language, but AD, AN, and WV could not be. Consider the letter-reversing symmetry σ̃ given in (2). If you apply σ̃ to an ungrammatical word (e.g. AB) you get a grammatical word (BA). If you apply σ̃ to a grammatical word with increasing sonority you get an ungrammatical word. Words with two letters of the same sonority give you back another word with letters of the same sonority. Now suppose you have a learning algorithm L that is invariant under σ̃. This means that if you take a data set D, train the algorithm on it, and then use the algorithm to evaluate word w, you will get the same result if you train the algorithm on σ̃(D) (in which all the words are reversed) and then use the algorithm to evaluate σ̃(w), which is just the reversal of w.
Suppose we give the algorithm data D that is invariant under σ̃. For simplicity we assume that D consists only of grammatical words, each assigned the rating 1. In this case, the only way D can be invariant under σ̃ is if all the words in D have constant sonority, and for every such word XY in D, YX is also in D. Can the algorithm correctly learn the generalization that words in the language must have increasing or level sonority from this data set?
Theorem 1 shows that it cannot, as follows. According to the theorem, L(D, w) = L(D, σ̃(w)). All we need to do is let w equal a word of increasing sonority, such as BA, to see that the algorithm with training data D gives the same score to BA and AB. Since the first is grammatical and the second is ungrammatical, the algorithm clearly has not learned the correct rule governing grammaticality in the language. This is pretty commonsensical: one way to think of it is that there is nothing in the algorithm or the training data to make the algorithm prefer AB to BA, since both the algorithm and the training data are invariant under σ̃, and BA = σ̃(AB). Of course, this is not necessarily a defect of the algorithm L; if some words with increasing or decreasing sonority were included in D, then D would not be invariant under σ̃, and L could learn the grammar.
In the next section we will give a less straightforward example, allowing us to formalize the idea of identity-based rules for learning algorithms.
Identity-Based Rules
We now use the above result to show that certain algorithms cannot learn identity-based rules unless trained on words containing virtually all letters in the alphabet. That is, the algorithm cannot extend the identity-based rules to words containing letters that it has not explicitly been trained on. This is in sharp contrast to human learners who are able to generalize identify-based rules (in the phonological context, for example) to segments they have not encountered before (Berent et al., 2002).
We return to the example at the beginning of the paper: W is the set of all words consisting of two letters. We stipulate that grammatical words are those consisting of two identical letters and all other words are ungrammatical. Suppose we want the algorithm to learn this grammar, but train it on data omitting any words containing the letters Y and Z. What algorithms will not be able to learn the correct grammar under these conditions?
Define the symmetry σ of W by the following:
σ(X₁Y) = X₁Z,  σ(X₁Z) = X₁Y,  σ(X₁X₂) = X₁X₂,
for all letters X₁, X₂, with X₂ not equal to Y or Z. In other words, if the second segment is Y, σ changes it to Z; if the second segment is Z, σ changes it to Y; and if the second segment is neither, then the word is unchanged. Now suppose our training data D contains no words with either segment Y or Z as the second segment. D may contain both grammatical words (e.g. CC) with rating 1 and ungrammatical words (e.g. CE) with rating 0. Then D is invariant under σ. Theorem 1 shows that if the algorithm L is also invariant under σ then it will give the same rating to w and σ(w) for any word w when trained on D. In that case we would have that it gives the same rating to the words YY and YZ, showing that it cannot learn the identity-based grammar.
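The same machinery expresses this symmetry in code: it swaps Y and Z in the second position and leaves every other word alone, so any training set avoiding Y and Z in second position is automatically invariant under it, and an invariant algorithm must then score YY and YZ identically.

```python
def sigma_yz(word: str) -> str:
    """Swap Y <-> Z in the second position; other words are unchanged."""
    first, second = word[0], word[1]
    if second == "Y":
        return first + "Z"
    if second == "Z":
        return first + "Y"
    return word

# YY maps to YZ, while words without Y/Z in second position are fixed points,
# so training data that avoids those letters is invariant under sigma_yz.
assert sigma_yz("YY") == "YZ" and sigma_yz("CE") == "CE"
```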
Below we provide an example of an algorithm invariant under this symmetry. But in general we informally argue that any algorithm that does not in some way explicitly check for identity between the letters, or somehow enforce a similar treatment of those two letters in processing, cannot correctly learn that YY is a more well-formed word than YZ if it is never given words with a second letter Y or Z as training data.
Randomized Algorithms
Many algorithms for learning use randomness at some point in their operation. It may either be in the computation that takes the input data to the parameters p (for example, by which order the input words are used) or in the map from the parameters and a new input word to a word score s. In the former case p = A (D) is a random function of D; in the latter s = f (p, w) is a random function of p and w. In either case, this leads to L (D, w) being random for any fixed D and w.
Under these conditions, it is unlikely that invariance of the form described above will hold. Instead we now define invariance of L under σ to be
EL (σ (D), σ (w)) = EL (D, w),
where E denotes expectation. (If X is a random variable, EX is approximately what we would get if we took the average of a large number of samples of X.)
We now get the same result as before. This is a strictly stronger result than Theorem 1, since a deterministic algorithm is just a special case of a randomized algorithm.
Theorem 2 If random algorithm L and training data D are invariant under symmetry σ then
EL (D, w) = EL (D, σ (w)),
for all w in W . In other words, the algorithm L gives on average the same rating to w and σ (w) when trained on D.
Proof. We have
EL (D, w) = EL (σ (D), σ (w)) = EL (D, σ (w))
where the first equality follows from the invariance of L under σ, and the second equality follows from the invariance of D under σ.
Experiments
We demonstrate the consequences of our theorems in a computational experiment where we use a deep neural network to learn the grammar described in our introduction. The networks are trained using data in which two-letter words with two identical letters are good, and two-letter words with two different letters are bad. The network is then asked to assess novel words containing segments it has not seen in the training set. Randomness enters into the training of these networks in various places and so Theorem 2 is the relevant result in this case. Consequently, we do not compare individual trainings of the network on the novel stimuli. For each novel stimulus we train the network numerous times and take the average score over all the trainings. It is these scores that are compared between stimuli.
Task and Dataset
Before discussing the neural network learners that were tested, we describe the dataset and task that was required of them. As before, our set of words W consisted of all two letter words with letters running from A to Z. The training set consisted of the 24 words AA, BB, . . ., XX paired with rating 1, along with 48 randomly generated words with mismatched segments taken from the list A, . . . , X, each paired with rating 0.
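A sketch of this dataset construction is given below; the use of Python's random and string modules is an implementation choice, and the mismatched pairs are drawn only from A through X so that Y and Z remain unseen during training, as described above.

```python
import random
import string

letters = string.ascii_uppercase[:24]          # 'A' .. 'X'

train_pos = [(c + c, 1) for c in letters]      # 24 identical-letter words, rating 1

train_neg = []
while len(train_neg) < 48:                     # 48 mismatched words, rating 0
    a, b = random.sample(letters, 2)
    train_neg.append((a + b, 0))

x = random.choice(letters)                     # a random letter seen in training
test_words = ["YY", "ZZ", x + "Y", "YZ", x + "Z", "ZY"]
```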
To assess the ability of the learner to generalize to novel inputs, after training we tested it on the words YY, ZZ, XY, YZ, XZ, ZY, where X ∈ {A, B, . . . , X} were randomly selected. For each learner, the experiment was independently repeated 40 times with different random seeds.
Encodings. We distinguish two different representations for the segments A to Z, namely the localist and distributed encodings. Both of these representations use a fixed-length bit string. However, while localist codes (also known as 1-of-k encoding) are constrained to include a single non-zero bit, distributed codes can be any arbitrary combination of k bits, for some fixed k. Distributed encodings are a much more compact representation of data; indeed, for the same string length k, we can represent an exponentially large number of segments (2^k). The experiment was run on both types of encoding with k = 26. When distributed encoding was used, codes for each letter were randomly generated for each repetition, so the exact encoding of the segment X, for instance, is almost certainly different between two repetitions of a given run.
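The two encodings can be generated as follows; using numpy is an implementation choice, and the distributed codes are redrawn at random for every repetition, as in the experiment.

```python
import numpy as np

ALPHABET = [chr(ord("A") + i) for i in range(26)]

def localist_codes(k=26):
    """1-of-k codes: exactly one non-zero bit per letter."""
    return {c: np.eye(k, dtype=np.float32)[i] for i, c in enumerate(ALPHABET)}

def distributed_codes(k=26, rng=None):
    """Random k-bit codes, redrawn independently for each repetition."""
    rng = rng or np.random.default_rng()
    return {c: rng.integers(0, 2, size=k).astype(np.float32) for c in ALPHABET}

def encode_word(word, codes):
    """Concatenate the two letters' codes into one network input vector."""
    return np.concatenate([codes[word[0]], codes[word[1]]])
```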
Neural Network Learners
We tested our theoretical findings on the most popular model in the machine learning literature today: the artificial neural network. The words were fed into the neural network by simply concatenating the two 26-bit codes of their letters. We experimented with many different architectures, ranging from one to three hidden layers, and from 256 to 1024 units per layer, with tanh nonlinearities for all hidden units. We trained the models via backpropagation using an iterative quasi-Newton gradient descent algorithm called the limited memory Broyden-Fletcher-Goldfarb-Shanno method (L-BFGS), with a maximum of 100 iterations. Both the neural network and its optimization are implemented in torch (Collobert, Bengio, & Mariéthoz, 2002).
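The original experiments were implemented in the Lua torch library; the PyTorch sketch below is only a re-implementation of the same architecture under stated assumptions (tanh hidden layers of 256 units, a sigmoid output producing a score in [0, 1], MSE loss), not the authors' code.

```python
import torch
import torch.nn as nn

def make_network(n_hidden_layers=1, width=256, input_dim=52):
    """Feed-forward net over the concatenated 2 x 26-bit letter codes."""
    layers, d = [], input_dim
    for _ in range(n_hidden_layers):
        layers += [nn.Linear(d, width), nn.Tanh()]
        d = width
    layers += [nn.Linear(d, 1), nn.Sigmoid()]   # score in [0, 1] (assumed output)
    return nn.Sequential(*layers)

def train(net, X, y, max_iter=100):
    """Fit with L-BFGS (here via torch.optim.LBFGS and a closure)."""
    opt = torch.optim.LBFGS(net.parameters(), max_iter=max_iter)
    loss_fn = nn.MSELoss()

    def closure():
        opt.zero_grad()
        loss = loss_fn(net(X).squeeze(1), y)
        loss.backward()
        return loss

    opt.step(closure)
    return net
```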
Results
We present results for the case of each hidden layer having 256 units, as the results are similar for more units per hidden layer. In Figure 1, for the localist encoding, we plot the average score output by the neural network for each of the test words above, for 1, 2, and 3 hidden layers. In addition, the averaged training scores are reported in the top two bars of each panel. Looking at the top plot in the figure, showing the results for one hidden layer, the words YY and ZZ get scores of around 0.3, in contrast to the score of near 1 given for the well-formed input AA. The networks are unable to determine that YY and ZZ are grammatical. Likewise, the other test words with differing segments and containing the segments Y or Z have scores ranging from approximately 0.3 to 0.5. The networks are not able to distinguish between grammatical and ungrammatical words in this case.
The ability of the networks to generalize to novel inputs is not improved by adding further hidden layers. The second and third plots in Figure 1, corresponding to two and three hidden layers, show very similar results to the first. To within statistical accuracy, the scores for YY, ZZ, YZ, and ZY are all the same. The networks are not able to discriminate between grammatical and ungrammatical words when the words included the novel segments Y and Z.
This poor performance is perhaps not surprising for the localist encoding, as observed by Marcus (Marcus, 2003): in the localist encoding, introducing new segments correspond to activating new input units that were never active during training, and therefore whose weights never changed from their random initializations. However, in Figure 2 we show that the poor performance remains true in the case of distributed representations. In the first plot, we show the results for a single hidden layer. The networks give a rating higher than 0.5 for both YY and ZZ, which is higher than the score given by the localist networks, but the same high rating is given to the words YZ and ZY. A similar pattern is repeated for the two and three-layer case. The networks are not able to discriminate between grammatical and ungrammatical words containing novel segments, even when distributed representations are used.
Discussion
That connectionist networks are unable to generalize what are sometimes called "algebraic" rules to novel inputs is not a new observation (Marcus, 2003;Berent, 2012). Our contribution has been to give a formalized description and proof of this phenomenon. Furthermore, our results and computer experiments reinforce that Deep Learning, in the form of the ability to train connectionist networks with multiple hidden layers, does not alone overcome these limitations.
Figure 1: Scores for various words for the network with localist encoding for 1, 2, and 3 hidden layers.
Figure 2: Scores for various words for the network with distributed encoding for 1, 2, and 3 hidden layers.
Table 1: Segments in a Hypothetical Language

segments   sonority
A O        5
W Y        4
M N        3
V Z        2
B D        1
Acknowledgments
The authors thank Nilima Nigam for comments on an earlier draft of this manuscript. PT was supported by an NSERC Discovery Grant, a Research Accelerator Supplement, and held a Tier II Canada Research Chair. BS was supported by an NSERC Discovery Grant.
The phonological mind. I Berent, Cambridge University PressBerent, I. (2012). The phonological mind. Cambridge Uni- versity Press.
The scope of linguistic generalizations: evidence from Hebrew word formation. I Berent, G F Marcus, J Shimron, A I Gafos, Cognition. 832Berent, I., Marcus, G. F., Shimron, J., & Gafos, A. I. (2002). The scope of linguistic generalizations: evidence from He- brew word formation. Cognition, 83(2), 113 -139.
Modeling OCPplace in Amharic with the Maximum Entropy phonotactic learner. R Colavin, R Levy, S Rose, Proceedings volume of the 46th meeting of the Chicago Linguistics Society. volume of the 46th meeting of the Chicago Linguistics SocietyColavin, R., Levy, R., & Rose, S. (2010). Modeling OCP- place in Amharic with the Maximum Entropy phonotactic learner. In Proceedings volume of the 46th meeting of the Chicago Linguistics Society.
Torch: a modular machine learning software library. R Collobert, S Bengio, J Mariéthoz, IDIAP-RR 02-46Technical ReportCollobert, R., Bengio, S., & Mariéthoz, J. (2002). Torch: a modular machine learning software library. Technical Re- port IDIAP-RR 02-46.
Learning the identity effect as an artificial language: bias and generalisation. Phonology. G Gallagher, 10.1017/S095267571300013430Gallagher, G. (2013, 8). Learning the identity effect as an artificial language: bias and generalisation. Phonology, 30, 253-295. doi: 10.1017/S0952675713000134
/01/09). A maximum entropy model of phonotactics and phonotactic learning. B Hayes, C Wilson, Linguistic Inquiry. 393Hayes, B., & Wilson, C. (2008, 2016/01/09). A maximum en- tropy model of phonotactics and phonotactic learning. Lin- guistic Inquiry, 39(3), 379-440.
Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks. Y Lecun, Y Bengio, 3361LeCun, Y., & Bengio, Y. (1995). Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10).
Idempotency and chain shifts. G Magri, Proceedings of WCCFL 33: the 33rd annual West Coast Conference in Formal Linguistics. Cascadilla Proceedings Project. K.-m. KimWCCFL 33: the 33rd annual West Coast Conference in Formal Linguistics. Cascadilla Proceedings ProjectMagri, G. (2015). Idempotency and chain shifts. In K.- m. Kim (Ed.), Proceedings of WCCFL 33: the 33rd annual West Coast Conference in Formal Linguistics. Cascadilla Proceedings Project.
The algebraic mind: Integrating connectionism and cognitive science. G F Marcus, MIT pressMarcus, G. F. (2003). The algebraic mind: Integrating con- nectionism and cognitive science. MIT press.
D E Rumelhart, J L Mcclelland, Parallel distributed processing. IEEE1Rumelhart, D. E., & McClelland, J. L. (1988). Parallel dis- tributed processing (Vol. 1). IEEE.
| [] |
[
"Matroids Hitting Sets and Unsupervised Dependency Grammar Induction",
"Matroids Hitting Sets and Unsupervised Dependency Grammar Induction"
] | [
"Nicholas Harvey \nDept of Computer Science\nUniv of British Columbia\nVancouverCanada\n",
"David Karger \nDept of Computer Science\nMIT\nBostonUSA\n",
"Vahab Mirrokni ",
"Virginia Savova \nSystems Biology Dept\nHarvard Medical School\nBostonUSA\n",
"Leonid Peshkin \nSystems Biology Dept\nHarvard Medical School\nBostonUSA\n"
] | [
"Dept of Computer Science\nUniv of British Columbia\nVancouverCanada",
"Dept of Computer Science\nMIT\nBostonUSA",
"Systems Biology Dept\nHarvard Medical School\nBostonUSA",
"Systems Biology Dept\nHarvard Medical School\nBostonUSA"
] | [] | This paper formulates a novel problem on graphs: find the minimal subset of edges in a fully connected graph, such that the resulting graph contains all spanning trees for a set of specified subgraphs. This formulation is motivated by an unsupervised grammar induction problem from computational linguistics. We present a reduction to some known problems and algorithms from graph theory, provide computational complexity results, and describe an approximation algorithm. | null | [
"https://arxiv.org/pdf/1705.08992v2.pdf"
] | 20,903,138 | 1705.08992 | 398530e81d6c9351cce2be60df42f088edd2db33 |
Matroids Hitting Sets and Unsupervised Dependency Grammar Induction
Nicholas Harvey
Dept of Computer Science
Univ of British Columbia
VancouverCanada
David Karger
Dept of Computer Science
MIT
BostonUSA
Vahab Mirrokni
Virginia Savova
Systems Biology Dept
Harvard Medical School
BostonUSA
Leonid Peshkin
Systems Biology Dept
Harvard Medical School
BostonUSA
Matroids Hitting Sets and Unsupervised Dependency Grammar Induction
This paper formulates a novel problem on graphs: find the minimal subset of edges in a fully connected graph, such that the resulting graph contains all spanning trees for a set of specified subgraphs. This formulation is motivated by an unsupervised grammar induction problem from computational linguistics. We present a reduction to some known problems and algorithms from graph theory, provide computational complexity results, and describe an approximation algorithm.
Introduction
Linguistic representations of natural language syntax arrange syntactic dependencies among the words in a sentence into a tree structure, of which the string is a one dimensional projection. We are concerned with the task of analyzing a set of several sentences, looking for the most parsimonious set of corresponding syntactic structures, solely on the basis of co-occurrence of words in sentences. We proceed by first presenting an example, then providing a general formulation of dependency structure and grammar induction.
Consider a sentence "Her immediate predecessor suffered a nervous breakdown." A dependency grammar representation of this sentence shown in Figure 1 captures dependency between the subject, the object and the verb, as well as dependency between the determiner and the adjectives and their respective nouns. In this sentence, the subject predecessor and the object breakdown are related to the verb her immediate predecessor suffered a nervous breakdown . suffered. The verb suffered is the root of the dependency structure, that is illustrated in the diagram by a link to the period. Figure 2 left represents the same dependency structure in a different way by ignoring the direction. Instead the dependence is related to the relative depth in the tree.
In a dependency tree, each word is the mother of its dependents, otherwise known as their HEAD. To linearize the dependency tree in Figure 2 (left) into a string, we introduce the dependents recursively next to their heads:
iteration 1: suffered
iteration 2: predecessor suffered breakdown
iteration 3: her predecessor suffered a breakdown.
Dependency and the related link grammars have received a lot of attention in the field of computational linguistics in recent years, since these grammars enable much easier parsing than alternatives that are more complex lexicalized parse structures. There are applications to such popular tasks as machine translation and information retrieval. However, all of the work is concerned with parsing, i.e. inducing a parse structure given a corpus and a grammar, rather than with grammar induction. Some work is concerned with inducing parameters of the grammar from annotated corpora, for example see work by Eisner on dependency parsing [1] or more recent work by McDonald et al. [5] and references therein. It has been pointed out [5] that parsing with dependency grammars is related to Minimal Spanning Tree algorithms in general and in particular Chu-Liu-Edmonds MST algorithm was applied to dependency parsing.
An established computational linguistics textbook has the following to say on the subject [3]: "... doing grammar induction from scratch is still a difficult, largely unsolved problem, and hence much emphasis has been placed on learning from bracketed corpora." If grammar is not provided to begin with, parsing has to be done concurrently with learning the grammar. In the presence of grammar, among all the possibilities one needs to pick a syntactic structure consistent with the grammar. In the absence of grammar, it makes sense to appeal to Occam's razor principle and look for the minimal set of dependencies which are consistent among themselves.
More formally, a dependency grammar consists of a lexicon of terminal symbols (words), and an inventory of dependency relations specifying inter-lexical requirements. A string is generated by a dependency grammar if and only if:
• Every word but one (ROOT) is dependent on another word.
• No word is dependent on itself either directly or indirectly.
• No word is directly dependent on more than one word.
• Dependencies do not cross.

Figure 2: left: A projective dependency structure for a sample sentence; right: an example of incorrect structure of a sample sentence, also illustrating non-projective structure.
Unlike the first three constraints, the last constraint is a linearization constraint; it is usually introduced to simplify the structure and is empirically problematic. The structure in Figure 2 (left) is an example of a so-called projective parse, in which the dependency links, mapped onto the word sequence of the sentence, do not cross. Figure 2 (right) illustrates an incorrect parse of the sentence with non-projective dependencies ("her"→"suffered" crosses "a"→"predecessor"). While the vast majority of English sentences observe the projectivity constraint, other languages allow much more flexibility in word order. Non-projective structures include wh-relative clauses [7], parentheticals [4], cross-serial constructions of the type found in Dutch and Swiss-German [6], as well as free or relaxed word order languages [8]. Therefore, it is interesting to ask whether grammar induction can be performed without regard to word order.
A truly cross-linguistic formulation of dependency parsing corresponds to finding a spanning tree (a parse) in a completely connected subgraph of word nodes and dependency edges. The grammar induction problem in the same setting corresponds to inducing the minimal fully-connected subgraph which contains spanning trees for all sentences in a given corpus. Consider three sentences: "Her immediate predecessor suffered a nervous breakdown.", "Her predecessor suffered a stroke.", "It is a nervous breakdown." Intuitively, the repeated co-occurrence of words informs us about grammatical co-dependence.
Here is a formulation of the grammar induction problem as an optimization problem: given a lexicon V and a set of k sentences S_1, . . . , S_k s.t. S_i ⊂ V (a.k.a. a corpus), the objective is to find the most parsimonious combination of dependency structures, i.e. a set of spanning trees, one for each S_i, whose joint set of edges has minimal cardinality.
In section 2 of this paper, we formally introduce the related graph-theoretic problem. In section 2.1 we show that the problem is hard to approximate within a factor of c log n for weighted instances, and hard to approximate within some constant factor (APX-hard) for unweighted instances. In section 3, we generalize the problem to matroids. Here we prove that the problem is hard to approximate within a factor of c log n, even for unweighted instances. We conclude with a positive result: an algorithm for the matroid problem which constructs a solution whose cardinality is within O(log n) of optimal.

Figure 3: top: An instance of the problem for a graph G consisting of two sub-graphs G_1 and G_2; bottom: examples of a correct solution on the left (green) and two incorrect solutions on the right (red).
The Problem for Spanning-Trees
Let G = (V, E) be a graph and let S 1 , . . . , S k be arbitrary subsets of V . Our objective is to find a set of edges F ⊆ E such that
• F contains a spanning tree for each induced subgraph G[S i ], and
• |F | is minimized.
We call this the Min Spanning-Tree Hitting Set problem. Figure 3 illustrates one instance of this problem: a graph G consists of two sub-graphs G_1 and G_2. We present one possible correct solution on the left (|F| = 4) and two sample incorrect solutions (|F| = 5) on the right. The Min Spanning-Tree Hitting Set problem may be generalized to include a weight function w on the edges of G. The objective for the weighted problem is the same as before, except that we seek to minimize w(F). Notice that the problem initially appears similar to the group Steiner problem [2], since the objective is to connect certain subsets of the nodes. However, our condition on the subgraph is slightly different: we require that the given subsets of nodes are internally connected.
To develop some intuition for this problem, let us analyze a simple greedy ad-hoc solution: first, assign each edge a weight equal to the number of sub-graphs it is included in, i.e. count the frequency of node pairs in the input set; then fragment the graph into subgraphs, keeping the weights, and run the standard MST algorithm to find a spanning tree for each subgraph. Figure 4 presents a counterexample to such simple heuristic approaches. The following sub-sets make up the input, as indicated via edges of distinct color and pattern in the figure:
{1, 4, 5}, {2, 4, 5}, {3, 4, 5}, {1, 4}, {1, 5}, {2, 4}, {2, 5}, {3, 4}, {3, 5}.
The optimal solution to this instance does not contain the edge {4, 5}, even though this edge is a member of the largest number of sub-sets (namely three).

Figure 4: A counterexample to simple heuristic approaches.
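To see why frequency-based greedy choices can fail, the following small check (our own sketch, not from the paper; the helper names are ours) computes edge frequencies for the instance above and verifies that a six-edge solution avoiding the most frequent edge {4, 5} still contains a spanning tree for every sub-set; since each two-element sub-set forces its own edge, those six edges are also necessary, so the optimum indeed excludes {4, 5}.

```python
# Edge-frequency heuristic vs. the optimum on the counterexample of Figure 4.
# Illustrative sketch only; function names are ours.
from itertools import combinations

subsets = [{1, 4, 5}, {2, 4, 5}, {3, 4, 5},
           {1, 4}, {1, 5}, {2, 4}, {2, 5}, {3, 4}, {3, 5}]

freq = {}
for s in subsets:
    for e in combinations(sorted(s), 2):
        freq[e] = freq.get(e, 0) + 1
print(max(freq, key=freq.get))   # (4, 5): the single most frequent pair (3 subsets)

def connected(nodes, edges):
    """True if `edges` contain a spanning tree of `nodes`."""
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        seen.add(u)
        stack += [w for e in edges if u in e for w in e if w not in seen]
    return seen >= set(nodes)

def feasible(F):
    """Does F induce a connected subgraph on every input subset?"""
    return all(connected(s, [e for e in F if set(e) <= s]) for s in subsets)

# Six edges, each forced by a two-element subset, and no {4, 5}:
F = [(1, 4), (1, 5), (2, 4), (2, 5), (3, 4), (3, 5)]
print(feasible(F))   # True
```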
Hardness for Weighted Instances
We now show that the weighted problem is NP-hard to approximate within a factor of log n. To do so, we exhibit a reduction from Min Hitting Set, which is known to be hard to approximate within log n.
An instance of Min Hitting Set consists of a universe U = {u_1, . . . , u_n} and a collection of sets T = {T_1, . . . , T_m}, each of which is a subset of U. We construct a weighted instance of Min Spanning-Tree Hitting Set as follows. Let r be a new vertex not in U. We set

V = U + r,
E = K_U ∪ { {r, u_i} : u_i ∈ U },
S_{i,j} = {u_i, u_j} for 1 ≤ i < j ≤ n,
S_i = T_i + r for 1 ≤ i ≤ m,
where K_U denotes (the edges of) the complete graph on vertex set U. The edges belonging to K_U have weight 1 and the edges incident with r have weight n^3. Let h denote the minimum weight of a Spanning-Tree Hitting Set in G. Let h′ denote the minimum cardinality of a Hitting Set for T.

Claim 1. h = h′ · n^3 + \binom{n}{2}.

Proof: First we show that h′ ≤ (h − \binom{n}{2}) / n^3. Let F be a spanning-tree hitting set. Clearly K_U ⊆ F, because of the sets S_{i,j}. So all edges in F \ K_U are of the form {r, u_i}. Now define C = { u_i : {r, u_i} ∈ F }. We now show that C is a hitting set. Consider a set T_i. Since F contains a spanning tree for S_i = T_i + r, it must contain some edge {r, u_a} with u_a ∈ T_i. This shows that C hits the set T_i. Now we show that h ≤ h′ · n^3 + \binom{n}{2}. Let C ⊆ U be a hitting set for T. Let F = K_U ∪ { {r, u_i} : u_i ∈ C }. We now show that F is a spanning-tree hitting set. Each set S_{i,j} is clearly hit by the set K_U. So consider a set S_i = T_i + r. All edges {u_a, u_b} with a, b ∈ T_i are contained in K_U. Furthermore, since C is a hitting set, there exists an element u_a ∈ T_i ∩ C. This implies that {r, u_a} ∈ F, and hence F contains a spanning tree for G[S_i].
Given an instance T of Hitting Set, it is NP-hard to decide whether OPT(T) ≤ f(n) or OPT(T) > α log n · f(n) for some constant α > 0 and some function f. To prove log n-hardness of Min Spanning-Tree Hitting Set, we must similarly show that for any instance y, there exists a constant β > 1 and a function g such that it is NP-hard to decide whether OPT(G) ≤ g(y) or OPT(G) > β log n · g(y).
From our reduction, we know that it is NP-hard to distinguish between OPT(G) ≤ f(n) · n^3 + \binom{n}{2} and OPT(G) > α log n · f(n) · n^3 + \binom{n}{2}. Now note that

$$\frac{\alpha \log n \cdot f(n) \cdot n^3 + \binom{n}{2}}{f(n) \cdot n^3 + \binom{n}{2}} = \alpha \log n \cdot \frac{f(n) \cdot n^3 + \binom{n}{2}/(\alpha \log n)}{f(n) \cdot n^3 + \binom{n}{2}} = \alpha \log n \cdot \left(1 - \frac{\binom{n}{2}\,\bigl(1 - 1/(\alpha \log n)\bigr)}{f(n) \cdot n^3 + \binom{n}{2}}\right) \ge \beta \log n$$

for some constant β > 0. Letting g(y) = f(n) · n^3 + \binom{n}{2}, it follows that Min Spanning-Tree Hitting Set is NP-hard to approximate within log n.
Hardness for Unweighted Instances
We show APX-hardness for the unweighted problem via a reduction from Vertex Cover. The approach is similar to the construction in Section 2.1. Suppose we have an instance G′ = (V′, E′) of the Vertex Cover problem. We use the fact that Vertex Cover is equivalent to Min Hitting Set with U = V′ and T = E′. The construction differs only in that E′ is used in place of the edge set K_U; the sets S_{i,j} are adjusted accordingly. Let h denote the minimum cardinality of a Spanning-Tree Hitting Set in G. Let c denote the minimum cardinality of a Vertex Cover in G′. A claim identical to Claim 1 shows that h = c + |E′|.
Recall that Vertex Cover is APX-hard even for constant-degree instances; see, e.g., Vazirani [11, §29]. So we may assume that |E′| ≤ (d/2)|V′|. Given an instance G′ = (V′, E′) of Vertex Cover with degree at most some constant d, it is NP-hard to decide whether OPT(G′) ≤ α′|V′| or OPT(G′) > β′|V′| for some constants α′ < β′. To prove APX-hardness of Min Spanning-Tree Hitting Set, we must similarly show that for any instance G, there exists a constant γ > 1 such that it is NP-hard to decide whether OPT(G) ≤ f(G) or OPT(G) > γ f(G). From our reduction, we know that it is NP-hard to distinguish between
OPT(G) ≤ α′(|V| − 1) + (|E| − |V| + 1) or OPT(G) > β′(|V| − 1) + (|E| − |V| + 1). Now note that

$$\frac{\beta'(|V| - 1) + (|E| - |V| + 1)}{\alpha'(|V| - 1) + (|E| - |V| + 1)} = 1 + \frac{(\beta' - \alpha')(|V| - 1)}{\alpha'(|V| - 1) + (|E| - |V| + 1)} \ge 1 + \frac{(\beta' - \alpha')(|V| - 1)}{(d/2 - 1 + \alpha')|V| + 1 - \alpha'} \ge 1 + \frac{\beta' - \alpha'}{d/2 + \alpha'} \cdot \frac{|V| - 1}{|V|},$$
which, for |V| ≥ 2, is bounded below by a constant greater than 1. Letting γ be this constant, and letting f(y) = α′(|V| − 1) + (|E| − |V| + 1), it follows that Min Spanning-Tree Hitting Set is APX-hard.
The Problem for Matroids
The Min Spanning-Tree Hitting Set problem can be rephrased as a question about matroids. Let E be a ground set. Let M_i = (E, I_i) be a matroid for 1 ≤ i ≤ k. Our objective is to find F ⊆ E such that
• F contains a basis for each M i , and
• |F | is minimized.
We call this the Minimum Basis Hitting Set problem.
Connection to Matroid Intersection
Suppose we switch to the dual matroids. Note that F contains a basis for M_i if and only if E \ F ∈ I*_i. Then our objective is to find a set F′ ⊆ E such that

• F′ ∈ I*_i for each i, and
• |F′| is maximized.

Suppose that such a set F′ is found, and let F := E \ F′. The first property implies that F contains a basis for each M_i. The second property implies that |F| is minimized. Stated this way, it is precisely the Matroid k-Intersection problem. So, from the point of view of exact algorithms, the Min Basis Hitting Set and Matroid k-Intersection problems are equivalent. However, this reduction is not approximation-preserving, and implies nothing about approximation algorithms.
Hardness
Theorem 2. Min Basis Hitting Set is NP-hard.
Proof: We give a reduction from the well-known Minimum Hitting Set problem. An instance of this problem consists of a ground set E and a family of sets C = {C_1, . . . , C_k}, each a subset of E. The objective is to find a minimum-cardinality set F ⊆ E such that F ∩ C_i ≠ ∅ for each i. This problem is NP-complete. Now we reduce it to Minimum Basis Hitting Set. For each set C_i, let M_i = (E, I_i) be the matroid where I_i = { {c} : c ∈ C_i } ∪ {∅}. That is, M_i is the rank-1 uniform matroid on C_i. So a basis hitting set for these matroids corresponds precisely to a hitting set for the sets C.

Corollary 3. Min Basis Hitting Set is NP-hard to approximate within c log n for some positive constant c.

Proof: It is well-known that Min Hitting Set is equivalent to Set Cover, and is therefore NP-hard to approximate within c log n for some positive constant c. Since the reduction given in the proof of Theorem 2 is approximation preserving, the same hardness applies to Min Basis Hitting Set.
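As a small, hedged illustration of this reduction (our own sketch; the concrete instance and helper names are invented for the example), the snippet below represents each rank-1 uniform matroid by the only piece we need, its rank function, and checks that a set is a basis hitting set exactly when it hits every C_i.

```python
# Sketch of the Theorem 2 reduction: one rank-1 uniform matroid per set C_i.
# The instance below is a made-up example.
E = {"a", "b", "c", "d"}
C = [{"a", "b"}, {"b", "c"}, {"c", "d"}]

def rank_of(C_i):
    """Rank function of the rank-1 uniform matroid on C_i (over ground set E)."""
    return lambda S: 1 if set(S) & C_i else 0

ranks = [rank_of(C_i) for C_i in C]

def is_basis_hitting_set(F):
    # F contains a basis of M_i  <=>  rank_i(F) equals the full rank of M_i (here, 1).
    return all(r(F) == r(E) for r in ranks)

print(is_basis_hitting_set({"b", "c"}))   # True: {b, c} hits every C_i
print(is_basis_hitting_set({"a", "d"}))   # False: misses {b, c}
```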
An Approximation Algorithm
We consider the greedy algorithm for the Min Basis Hitting Set problem. Let O ⊆ E denote an optimum solution. Let rank_j denote the rank function for matroid M_j and let r_j be the rank of M_j, i.e., r_j = rank_j(E). Let F_i denote the set that has been chosen after the i-th step of the algorithm. Initially, we have F_0 = ∅. For S ⊆ E, let P(S, e) = Σ_{j=1}^{k} ( rank_j(S + e) − rank_j(S) ); intuitively, this is the total "profit" obtained, or rank that is hit, by adding e to S. Let R_i denote Σ_{j=1}^{k} ( r_j − rank_j(F_i) ); intuitively, if the algorithm has chosen a set F_i, then R_i is the total amount of "residual rank" that remains to be hit.
Consider the i-th step of the algorithm. Let us denote the profit obtained by choosing e_i by p_i = max_{e ∉ F_{i−1}} P(F_{i−1}, e). The greedy algorithm chooses an element e_i ∉ F_{i−1} achieving the maximum profit. We now analyze the efficiency of this algorithm. Let O_i be a minimum-cardinality set that contains F_i and is a basis hitting set.
For any set S ⊇ F_i and any element e, we have (by submodularity):

rank_j(S + e) + rank_j(F_i) ≤ rank_j(F_i + e) + rank_j(S)
rank_j(S + e) − rank_j(S) ≤ rank_j(F_i + e) − rank_j(F_i)
P(S, e) ≤ P(F_i, e) ≤ p_i

This implies that each element of O_i \ F_i has profit at most p_i.
Since O_i must ultimately hit all of the residual rank, but each element hits at most p_i, we have R_{i−1} ≤ p_i · |O_i \ F_i|. Now, note that |O_i \ F_i| ≤ |O|. This is because of the non-decreasing property of rank_j: if O is a basis hitting set then so is O ∪ F_i. This observation yields the inequality 1 ≤ p_i · |O| / R_{i−1}. Suppose that the greedy algorithm halts with a solution of cardinality s. Then we have
$$s \le \sum_{i=1}^{s} \frac{p_i \cdot |O|}{R_{i-1}} \le |O| \cdot \sum_{i=1}^{s} \frac{p_i}{R_{i-1}} \le |O| \cdot \sum_{i=1}^{s} \sum_{0 \le j < p_i} \frac{1}{R_{i-1} - j} \le |O| \cdot \log R_0.$$
Here, the last inequality follows from the fact that R_i = R_{i−1} − p_i for 1 ≤ i ≤ s. Note that R_0 = Σ_{j=1}^{k} r_j is the total rank of the given matroids. The preceding argument shows that the greedy algorithm has approximation ratio O(log n), where n is the length of the input. Table 1 presents a description of the algorithm. Informally speaking, the algorithm can be explained as follows: estimate the potential number of sub-graphs each edge would contribute to if used; loop through all edges, greedily adding the edge which contributes to the most spanning trees, then re-calculate the potential contributions.
Contrast with Matroid Union
Consider the matroid union problem for matroids M * i . The matroid union problem is:
max |⋃_i S_i| such that S_i ∈ I*_i for each i.

But note that S_i ∈ I*_i iff rank_i(V \ S_i) = rank_i(V). In other words, S_i ∈ I*_i iff the complement V \ S_i contains a basis for M_i. And maximizing the size of the union is the same as minimizing the size of the complement of the union. So an equivalent problem is:

min |⋂_i (V \ S_i)| such that V \ S_i contains a basis for M_i for each i.

The minimum does not change if we assume that each complement V \ S_i is in fact a basis. So, letting T_i denote V \ S_i, we obtain the equivalent problem:

min |⋂_i T_i| such that T_i is a basis for M_i for each i.

This problem is solvable in polynomial time, because it is just matroid union in disguise. It is quite similar to the Minimum Basis Hitting Set problem, except that it has an "intersection" rather than a "union".
Empirical Study
We ran preliminary experiments with the approximation algorithm on adult child-directed speech from the CHILDES corpus [9]. These experiments demonstrated that the algorithm performs better than the baseline adjacency heuristic because of its ability to pick out non-adjacent dependencies. For example, the sentence "Is that a woof?" is parsed into the following set of links: woof-is, that-is, a-woof. These links correspond to the correct parse tree of the sentence. In contrast, the baseline adjacency heuristic would parse the sentence into is-that, that-a, and a-woof, which fails to capture the dependence between the predicate noun "woof" and the verb, and postulates a non-existent dependency between the determiner "a" and the subject "that". However, more work is needed to thoroughly assess the performance. In particular, one problem for direct application is the presence of repeated words in a sentence. The current implementation avoids the issue of repeated words in its entirety by filtering the input text. An alternative approach is to erase the edges among repeated words from the original fully connected graph. This assumes that no word can be a dependent of itself, which might be a problem in some contexts (e.g. "I know that you know"). Related work, which was not completed at the time of writing this manuscript, seeks to incorporate adjacency as a soft linguistic constraint on the graph by increasing the initial weights of edges between adjacent words.
Figure 1: An illustration of a dependency structure.
Table 1: Greedy algorithm.
Input: lexicon V, set of k sentences S_1, . . . , S_k s.t. S_i ⊂ V.
Initialize: assign each pair of words a count of the sentences it appears in; sort word pairs (edges) in decreasing order.
Loop through edges: add the top edge e into the list of edges; adjust the edge weights in sentences which contained edge e;
until each sub-graph has a spanning tree (i.e. each sentence has a parse).
Output: a set of spanning trees for all S_i.
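The following Python sketch is our own, hedged re-implementation of the greedy procedure of Table 1, specialized to graphic matroids (one per sentence): at each step it adds the candidate word pair whose "profit", the number of sentences in which it joins two previously unconnected components, is largest. The function names and the toy corpus are our assumptions, not the authors' code.

```python
# Greedy basis-hitting heuristic of Table 1 for graphic matroids (one per sentence).
# Illustrative sketch only; names and the toy corpus below are our own.
from itertools import combinations

def components(nodes, edges):
    """Map each node to a connected-component representative."""
    comp = {v: v for v in nodes}
    def find(v):
        while comp[v] != v:
            comp[v] = comp[comp[v]]
            v = comp[v]
        return v
    for a, b in edges:
        comp[find(a)] = find(b)
    return {v: find(v) for v in nodes}

def greedy_spanning_tree_hitting_set(sentences):
    candidates = {e for s in sentences for e in combinations(sorted(s), 2)}
    F = set()
    while True:
        comps = [components(s, [e for e in F if set(e) <= s]) for s in sentences]
        def profit(e):  # number of sentences in which e merges two components
            u, v = e
            return sum(1 for s, c in zip(sentences, comps)
                       if u in s and v in s and c[u] != c[v])
        best = max(candidates - F, key=profit, default=None)
        if best is None or profit(best) == 0:
            return F      # every sentence graph now contains a spanning tree
        F.add(best)

corpus = [{"her", "predecessor", "suffered", "breakdown"},
          {"her", "predecessor", "suffered", "stroke"},
          {"it", "is", "breakdown"}]
print(sorted(greedy_spanning_tree_hitting_set(corpus)))
```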
Discussion

We presented some theoretical results for a problem on graphs which is inspired by the unsupervised link grammar induction problem from linguistics. Possible directions for future work include searching for more efficient approximation algorithms under various additional constraints on admissible spanning trees, as well as characterizing instances of the problem which can be solved efficiently. Another possible direction is allowing an "ungrammatical" corpus as input, e.g. searching efficiently for partial solutions, where several sentences remain unparsed or not fully parsed. Another direction is to look for a solution to a directed-graph analog of the problem considered here, which would require finding a minimal set of arborescences and relate to directed dependency parsing. One other question which remains open is an edge weighting scheme which would reflect syntactic considerations and particular language-related constraints, as in so-called Optimality Theory [10]. Exploring the relation of this problem to other applications would also be interesting. One such example could be autonomous network design, where the objective is to efficiently design a network that must connect joint units of organizations which do not necessarily trust each other and want to maintain their own skeletal sub-network in case their partner's links fail.
References

[1] J. Eisner. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the 16th International Conference on Computational Linguistics (COLING-96), pages 340-345, Copenhagen, August 1996.
[2] M. Hauptmann and M. Karpinski. A compendium on Steiner tree problems. 2013.
[3] C. D. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge, Massachusetts, 1999.
[4] J. McCawley. Parentheticals and discontinuous constituent structure. Linguistic Inquiry, 13:91-106, 1982.
[5] R. McDonald, F. Pereira, K. Ribarov, and J. Hajic. Non-projective dependency parsing using spanning tree algorithms. In HLT/EMNLP, 2005.
[6] A. Ojeda. A linear precedence account of cross-serial dependencies. Linguistics and Philosophy, 11:457-492, 1988.
[7] K. Pike. Taxemes and immediate constituents. Language, 19:65-82, 1943.
[8] G. Pullum. Free word order and phrase structure rules. In Proceedings of NELS, volume 12, 1982.
[9] K. Sagae, A. Lavie, and B. MacWhinney. Parsing the CHILDES database: Methodology and lessons learned. 2001.
[10] V. Savova. Structures and Strings. PhD thesis, Johns Hopkins University, 2006.
[11] V. Vazirani. Approximation Algorithms. Springer, 2001.
Data-driven Summarization of Scientific Articles

Nikola I. Nikolov, Michael Pfeiffer (pfeiffer@ini.ethz.ch), Richard H. R. Hahnloser
Institute of Neuroinformatics, University of Zürich and ETH Zürich, Switzerland
(arXiv:1804.08875)

Abstract: Data-driven approaches to sequence-to-sequence modelling have been successfully applied to short text summarization of news articles. Such models are typically trained on input-summary pairs consisting of only a single or a few sentences, partially due to limited availability of multi-sentence training data. Here, we propose to use scientific articles as a new milestone for text summarization: large-scale training data come almost for free with two types of high-quality summaries at different levels, the title and the abstract. We generate two novel multi-sentence summarization datasets from scientific articles and test the suitability of a wide range of existing extractive and abstractive neural network-based summarization approaches. Our analysis demonstrates that scientific papers are suitable for data-driven text summarization. Our results could serve as valuable benchmarks for scaling sequence-to-sequence models to very long sequences.
Introduction
The goal of automatic text summarization is to produce a shorter, informative version of an input text. While extractive summarization only consists of selecting important sentences from the input, abstractive summarization generates content without explicitly re-using whole sentences (Nenkova et al., 2011). Text summarization is an area with much promise in today's age of information overflow. In the domain of scientific literature, the rate of publications grows exponentially (Hunter and Cohen, 2006), which calls for efficient automatic summarization tools. Recent state-of-the-art summarization methods learn to summarize in a data-driven way, relying on large collections of input-summary training examples. The majority of previous work focused on short summarization of news articles, such as to generate a title (Rush et al., 2015; Nallapati et al., 2016). One major challenge is to scale these methods to process long input/output sequence pairs. Currently, availability of large-scale high-quality training data is scarce. In this paper, we explore the suitability of scientific journal articles as a new benchmark for data-driven text summarization. The typical well-structured format of scientific papers makes them an interesting challenge, and provides plenty of freely available training data, because every article comes with a summary in the form of its abstract, and, in even more compressed form, its title. We make a first step towards summarization of whole scientific articles by composing two novel large datasets for scientific summarization: title-abstract pairs (title-gen), composed of 5 million papers in the biomedical domain, and abstract-body pairs (abstract-gen), composed of 900k papers. Both datasets are available at https://github.com/ninikolov/data-driven-summarization, including versions with and without preprocessing. The second dataset is particularly challenging, because it is intended for summarizing the full body of the paper in terms of the abstract (the lengths of the input/output sequences are substantially longer than what has been considered so far in previous research, see Table 1). We evaluate a range of existing state-of-the-art approaches on these datasets: extractive approaches based on word embeddings, as well as word, subword, and character-level encoder-decoder models that use recurrent as well as convolutional modules. We perform a quantitative and qualitative analysis of the models' outputs.
Background
Extractive Summarization
Given an input document consisting of T_s sentences s = {s_1, ..., s_{T_s}}, the goal of extractive summarization is to select the K most salient sentences as the output summary. Extractive summarization typically involves a sentence representation module e, which represents each input sentence s_i in a common space as r_i = e(s_i), e.g. as a vector of real numbers, as well as a ranking module score, which weights the salience w_i = score(r_i) of each sentence. A typical approach to unsupervised extractive summarization is to implement w_i as the similarity between r_i and a document representation (or a document centroid) r_d = e(d). Alternatively, one can compute w_i as the sentence centrality, which is an adjacency-based measure of sentence importance (Erkan and Radev, 2004). In this work, we propose two simple unsupervised baselines for extractive summarization, both of which rely on word embeddings (Mikolov et al., 2013). The first, tfidf-emb, represents each sentence in the input document as the weighted sum of its constituent word embeddings, similar to (Rossiello et al., 2017):
r_i = e(s_i) = (1 / t_i) Σ_{x ∈ s_i} t(x) · E(x),    (1)
where E(x) is the embedding of word x, t(x) is an (optional) weighting function that weighs the importance of a word, and t_i = Σ_{x ∈ s_i} t(x) is a normalization factor. As a weighting function, we use the term-frequency inverse document frequency (TF-IDF) score, similar to (Brokos et al., 2016). Each sentence embedding r_i can then be ranked by computing its cosine similarity sim(r_d, r_i) to a document centroid r_d, computed analogously to r_i. The summary consists of the top K sentences with embeddings most similar to the document embedding.
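A minimal numpy sketch of this baseline follows; it is our own illustration rather than the released implementation, and the embedding dictionary, the TF-IDF weights and the tokenization are assumed to be provided by the caller.

```python
# Sketch of the tfidf-emb baseline: TF-IDF-weighted sums of word embeddings,
# ranked by cosine similarity to the document centroid. Our own illustration.
import numpy as np

def sentence_embedding(tokens, emb, tfidf):
    """Eq. (1): TF-IDF-weighted average of word embeddings; `emb`, `tfidf` are dicts."""
    words = [w for w in tokens if w in emb]
    weights = np.array([tfidf.get(w, 1.0) for w in words])
    vectors = np.array([emb[w] for w in words])
    return weights @ vectors / weights.sum()

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def tfidf_emb_summary(sentences, emb, tfidf, k=3):
    """Rank tokenized sentences by similarity to the document centroid; return top k."""
    reps = [sentence_embedding(s, emb, tfidf) for s in sentences]
    centroid = sentence_embedding([w for s in sentences for w in s], emb, tfidf)
    ranked = sorted(range(len(sentences)), key=lambda i: -cosine(reps[i], centroid))
    return [sentences[i] for i in ranked[:k]]
```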
W ij = rwmd(s i , s j ) = max(rwmd p (s i , s j ), rwmd p (s j , s i )) rwmd p (s i , s j ) = x∈si min x ∈sj dist(E(x), E(x )),(2)
where s i and s j are two sentences and dist(E(x), E(x )) is the Euclidean distance between the embeddings of words x and x in the sentences. To rank the sentences, we apply the graph-based method from the LexRank system (Erkan and Radev, 2004). LexRank represents the input as a highly connected graph, in which vertices represent sentences, and edges between sentences are assigned weights equal to their similarity from W . The centrality of a sentence is then computed using the PageRank algorithm (Page et al., 1999).
Abstractive Summarization
Given an input sequence of T_x words x = {x_1, ..., x_{T_x}} coming from a fixed-length input vocabulary V_x of size K_x, the goal of abstractive summarization is to produce a condensed sequence of T_y summary words y = {y_1, ..., y_{T_y}} from a summarization vocabulary V_y of size K_y, where T_x ≫ T_y. Abstractive summarization is a structured prediction problem that can be solved by learning a probabilistic mapping p(y|x, θ) for the summary y, given the input sequence x (Dietterich et al., 2008):
p(y|x, θ) = Π_{i=1}^{T_y} p(y_i | {y_0, ..., y_{i−1}}, x, θ).    (3)
The encoder-decoder architecture is a recently proposed general framework for structured prediction (Cho et al., 2015), in which the distribution arg max_y p(y|x, θ) is learned using two neural networks: an encoder network e, which produces intermediate representations of the input, and a decoder language modelling network d, which generates the target summary. The decoder is conditioned on a context vector c, which is recomputed from the encoded representation at each decoding step. The encoder-decoder was first implemented using Recurrent Neural Networks (RNNs) (Sutskever et al., 2014; Cho et al., 2014) that process the input sequentially. Recent studies have shown that convolutional neural networks (CNNs) (LeCun et al., 1998) can outperform RNNs in sequence transduction tasks (Kalchbrenner et al., 2016; Gehring et al., 2017). Unlike RNNs, CNNs can be efficiently implemented on parallel GPU hardware. This advantage is particularly important when working with very long input and output sequences, such as whole paragraphs or documents. CNNs create hierarchical representations over the input in which lower layers operate on nearby elements and higher layers implement increasing levels of abstraction.
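To make the factorization in Eq. (3) and its use at generation time concrete, here is a small self-contained sketch (our own illustration; the toy `step` model and all names are invented): it scores a candidate summary under the chain rule and generates greedily. Beam search, used later in the evaluation, generalizes the greedy loop by keeping several prefixes per step.

```python
# Minimal sketch of Eq. (3): the log-probability of a summary is the sum of
# per-step token log-probabilities produced by some model `step(prefix, x)`.
# The toy model below is invented purely for illustration.
import math

def sequence_log_prob(y, x, step):
    """log p(y | x) = sum_i log p(y_i | y_<i, x)."""
    total = 0.0
    for i in range(len(y)):
        probs = step(y[:i], x)          # distribution over the output vocabulary
        total += math.log(probs[y[i]])
    return total

def greedy_decode(x, step, eos="</s>", max_len=20):
    """Generate argmax tokens one at a time (beam search generalizes this loop)."""
    y = []
    for _ in range(max_len):
        probs = step(y, x)
        token = max(probs, key=probs.get)
        if token == eos:
            break
        y.append(token)
    return y

def toy_step(prefix, x):
    """Toy 'model': prefers to copy the first three input words, then stop."""
    remaining = [w for w in x if w not in prefix]
    target = remaining[0] if remaining and len(prefix) < 3 else "</s>"
    vocab = set(x) | {"</s>"}
    return {w: (0.9 if w == target else 0.1 / (len(vocab) - 1)) for w in vocab}

print(greedy_decode(["deep", "learning", "for", "summarization"], toy_step))
print(round(sequence_log_prob(["deep", "learning"],
                              ["deep", "learning", "for", "summarization"],
                              toy_step), 3))
```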
In this work, we investigate the performance of three existing systems that operate on different levels of sequence granularity. The first, lstm, is a recurrent Long Short Term Memory (LSTM) encoder-decoder model (Sutskever et al., 2014) with an attention mechanism (Bahdanau et al., 2014) that operates on the word level, processing the input sequentially. The second system, fconv, is a convolutional encoder-decoder model from (Gehring et al., 2017). fconv works on the subword level and segments words into smaller units using the byte pair encoding scheme. Using subword units improves the generation quality when dealing with rare or unknown words (Sennrich et al., 2015).
The third system, c2c, is a character-level encoder-decoder model that represents x and y as sequences of individual characters, with no explicit segmentation between tokens. c2c first builds representations of groups of characters in the input using a series of convolutional layers. It then applies a recurrent encoder-decoder, similar to the lstm system.
Scientific Articles
Previous research on summarization of scientific articles has focused almost exclusively on extractive methods (Nenkova et al., 2011). In (Lloret et al., 2013), the authors develop an unsupervised system for abstract generation of biomedical papers that first selects relevant content from the body, following which it performs an abstractive information fusion step. More recently, (Kim et al., 2016) consider the problem of supervised generation of sentence-level summaries for each paragraph of the introduction of a paper. They construct a training dataset of computer science papers from arXiv, selecting the most informative sentence as the summary of each paragraph using the Jaccard similarity. Thus, their target summary is fully contained in the input. In (Collins et al., 2017), a supervised extractive summarization framework is developed and applied to a dataset of 10k computer science papers. To the best of our knowledge, our work is the first on abstractive title generation of scientific articles, and the first to consider supervised generation of the abstract directly from the full body of the paper. The datasets we utilize here are also substantially larger than in previous work on scientific summarization. Scientific articles are potentially more challenging to summarize than news articles because of their compact, inexplicit discourse style (Biber and Gray, 2010). While the events described by news headlines frequently recur in related articles, a scientific title focuses on the unique contribution that sets a paper apart from previous research (Teufel and Moens, 2002). Furthermore, while the first two sentences of a news article are typically sufficiently informative to generate its headline (Nallapati et al., 2016; Teufel and Moens, 2002), the first sentences of the abstract or introduction of a paper typically contain background information on the research topic. Constructing a good scientific title thus requires understanding and integrating concepts from multiple sentences of the abstract.
Datasets
To investigate the performance of encoder-decoder neural networks as generative models of scientific text, we constructed two novel datasets for scientific summarization. For title-gen we used MEDLINE, whereas for abstract-gen we used the PubMed open access subset. MEDLINE contains scientific metadata in XML format of ∼25 million papers in the biomedical domain, whereas the PubMed open access subset contains metadata and full text of ∼1.5 million papers. We processed the XML files to pair the abstract of a paper to its title (the title-gen dataset) or to the full body (abstract-gen), skipping any figures, tables or section headings in the body. We then applied several preprocessing steps from the MOSES statistical machine translation pipeline, including tokenization and conversion to lowercase. Any URLs were removed, all numbers were replaced with #, and any pairs with abstract lengths not in the range of 150-370 tokens, title lengths not within 6-25 tokens, or body lengths not within 700-10000 tokens were excluded.

The Overlap o(x, y) = |{y} ∩ {x}| / |{y}| is the fraction of unique output (summary) tokens y that overlap with an input token x (excluding punctuation and stop words). As can be seen in Table 1, the overlaps are large in our datasets, indicating frequent reuse of words. The Repeat e(s) = (1/|s|) Σ_i o(s̄_i, s_i) is the average overlap of each sentence s_i in a text with the remainder of the text (where s̄_i denotes the complement of sentence s_i). Repeat measures the redundancy of content within a text: a high value indicates frequent repetition of content. Whereas in abstracts there are only moderate levels of repetition, in the bodies the repetition rates are much higher, possibly because concepts and ideas are reiterated in multiple sections of the paper.
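The two statistics can be re-implemented in a few lines; the sketch below is our own approximation of the definitions above (the NLTK tokenizers and its English stop-word list are assumptions about the exact setup, not the released code).

```python
# Rough re-implementation (ours) of the Overlap and Repeat statistics.
# Requires the NLTK tokenizer models and stop-word list to be downloaded.
import string
from nltk import word_tokenize, sent_tokenize
from nltk.corpus import stopwords

STOP = set(stopwords.words("english")) | set(string.punctuation)

def content_tokens(text):
    """Unique lowercased tokens, excluding punctuation and stop words."""
    return {t.lower() for t in word_tokenize(text)} - STOP

def overlap(source, summary):
    """Fraction of unique summary tokens that also appear in the source."""
    src, out = content_tokens(source), content_tokens(summary)
    return len(out & src) / max(len(out), 1)

def repeat(text):
    """Average overlap of each sentence with the remainder of the text."""
    sents = sent_tokenize(text)
    if len(sents) < 2:
        return 0.0
    scores = [overlap(" ".join(sents[:i] + sents[i + 1:]), s)
              for i, s in enumerate(sents)]
    return sum(scores) / len(sents)
```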
Evaluation Set-up
We evaluated the performance of several state-of-the-art approaches on our scientific summarization datasets. The extractive systems we consider are: lead, lexrank, tfidf-emb, and rwmd-rank. The lead baseline returns the first sentence of the abstract for title-gen, or the first 10 sentences of the body for abstract-gen. lexrank (Erkan and Radev, 2004) is a graph-based centrality approach frequently used as a baseline in the literature. emb-tfidf uses sentence embeddings to select the most salient sentences from the input, while rwmd-rank uses the Relaxed Word Mover's Distance (as described in Section 2.1). oracle estimates an upper bound for the extractive summarization task by finding the most similar sentence in the input document for each sentence in the original summary. We use the Relaxed Word Mover's Distance to compute the output of the oracle. The abstractive systems we consider are: lstm, fconv, and c2c, described in Section 2.2. For lstm, we set the input/output vocabularies to 80000, use two LSTM layers of 1000 hidden units each, and word embeddings of dimension 500 (we found no improvement from additionally increasing the size of this model). For c2c and fconv, we use the default hyper-parameters that come with the public implementations provided by the authors of the systems. The title-gen lstm, c2c, and fconv were trained for 11, 8, and 20 epochs, respectively, until convergence. We were unable to train lstm and c2c on abstract-gen because of the very high memory and time requirements associated with the recurrent layers in these models. We found fconv to be much more efficient to train, and we succeeded in training a default model for 17 epochs. For title-gen, we used beam search with beam size 20, while for abstract-gen we found a beam size of 5 to perform better.
Quantitative Evaluation
In Tables 2 and 3, we evaluate our approaches using the ROUGE metric (Lin, 2004), which is a recall-based metric frequently used for summarization, and METEOR (Denkowski and Lavie, 2014), which is a precision-based metric for machine translation. Overlap can be interpreted as the tendency of the model to directly copy input content instead of generating novel correct or incorrect words, whereas Repeat measures a model's tendency to repeat itself, which is a frequent issue with encoder-decoder models (Suzuki and Nagata, 2017). On title generation, rwmd-rank achieved the best performance among the systems that select a sentence as the title. Overall, the abstractive systems significantly outperformed the extractive systems, as well as the extractive oracle. c2c and fconv performed much better than lstm, with a very high rate of overlap. The ROUGE performance of c2c and fconv is similar, despite the difference of a few R-2 points in favour of fconv (that model is evaluated on a subword-level ground truth file, where we observe a slight increase of 1-2 ROUGE points on average due to the conversion). On abstract generation, the lead-10 baseline remained tough to beat in terms of ROUGE, and only the extractive systems managed to surpass it by a small margin. All extractive systems achieved similar results, with rwmd-rank having a minor edge, while the abstractive fconv performed poorly, even though it performed best in terms of METEOR. We observed a much higher repeat rate in the output summaries than the observed 44% average in the original abstracts (Table 1). As revealed by the large Repeat standard deviation for fconv, some examples are affected by very frequent repetitions.
Qualitative Evaluation
In Tables 4 and 5, we present two shortened inputs from our title-gen and abstract-gen test sets, along with original and system-generated summaries. In Figure 1, we show a histogram of the locations of input sentences, which estimates which locations were most preferred on average when producing a summary. We observe a large variation in the sentence locations selected by the extractive systems on title-gen (Figure 1a), with the first sentence having high importance. Based on our inspection, it is rare that a sentence from the abstract will match the title exactly; the title is also typically shorter than an average sentence from the abstract (Table 1). A good title seems to require the selection, combination and paraphrasing of suitable parts from multiple sentences of the abstract, as also shown by the original titles in our examples. Many of the titles generated by the abstractive systems sound faithful, and at first glance can pass for the title of a scientific paper. The abstractive models are good at discerning important from unimportant content in the abstract, at extracting long phrases, or sometimes whole sentences, and at abstractively combining the information to generate a title. lstm is more prone to generate novel words, whereas c2c and fconv mostly rely on direct copying of content from the abstract, as also indicated by their overlap scores. Closer inspection of the titles reveals occasional subtle mistakes: for example, in the first example in Table 4, the fconv model incorrectly selected "scopolamine-and cisplatin-induced", which was investigated in the previous work of the authors and is not the main focus of the article. The model also copied the incorrect genus, "mouse" instead of "rat".
Sometimes the generated titles sound too general and fail to communicate the specifics of the paper: in the second example, all models produced "a model of basal ganglia", failing to include the keyword "reinforcement learning": "a model of reinforcement learning in the basal ganglia". These mistakes highlight the complexity of the task, and show that there is still much room for further improvement.
As shown in Figure 1b, the introductory and concluding sections are often highly relevant for abstract generation; however, relevant content is spread across the entire paper. Interestingly, in the example in Table 5, there is a wide range of content that was selected by the extractive systems, with little overlap across systems. For instance, rwmd-rank overlaps with oracle by 3 sentences, and only by 1 sentence with emb-tfidf. The outputs of the abstractive fconv system on abstract generation are poor in quality, and many of the generated abstracts lack coherent structure and content flow. There is also frequent repetition of entire sentences, as shown by the last sentences produced by fconv in Table 5. fconv also appears to only use the first 60 sentences of the paper to construct the abstract (Figure 1c).
Conclusion
We evaluated a range of extractive and abstractive neural network-based summarization approaches on two novel datasets constructed from scientific journal articles. While the results for title generation are promising, the models struggled with generating the abstract. This difficulty highlights the necessity of developing novel models capable of efficiently dealing with long input and output sequences, while at the same time preserving the quality of the generated sentences. We hope that our datasets will promote more work in this area. A direction to explore in future work is hybrid extractive-abstractive end-to-end approaches that jointly select content and then paraphrase it to produce a summary.

Table 4: Examples from the test set of title-gen. The outputs of the extractive systems are highlighted as: oracle, tfidf-emb, rwmd-rank. For the abstractive systems, we manually highlighted the text of the concepts that are relevant for the task (errors are highlighted in red).

Example 1 (Giridharan et al., 2015)
Abstract: Amyloid (A)-induced neurotoxicity is a major pathological mechanism of Alzheimers disease (AD). Our previous studies have demonstrated that schisandrin B (Sch B), an antioxidant lignan from Schisandra chinensis, could protect mouse brain against scopolamine-and cisplatin-induced neuronal dysfunction. In the present study, we examined the protective effect of Sch B against intracerebroventricular (ICV)-infused A-induced neuronal dysfunction in rat cortex and explored the potential mechanism of its action. Our results showed that 26 days co-administration of Sch B significantly improved the behavioral performance of A (140)-infused rats in step-through test. At the same time, Sch B attenuated A-induced increases in oxidative and nitrosative stresses (...) The aforementioned effects of Sch B suggest its protective role against A-induced neurotoxicity through intervention in the negative cycle of RAGE-mediated A accumulation during AD patho-physiology.
Original title: schisandrin b ameliorates icv-infused amyloid induced oxidative stress and neuronal dysfunction through inhibiting rage / nf-b / mapk and up-regulating hsp / beclin expression
lstm: schisandrin b , an antioxidant lignan from schisandra chinensis , protects against amyloid -induced neurotoxicity
c2c: schisandra chinensis b protects against intracerebroventricular-infused amyloid induced neuronal dysfunction in rat cortex
fconv: schisandrin b protects mouse brain against scopolamine-and cisplatin-induced neurotoxicity in rats

Example 2 (Fee, 2012)
Abstract: In its simplest formulation, reinforcement learning is based on the idea that if an action taken in a particular context is followed by a favorable outcome, then, in the same context, the tendency to produce that action should be strengthened, or reinforced. (...) Recent experiments in the songbird suggest that vocal-related BG circuitry receives two functionally distinct excitatory inputs. (...) The other is an efference copy of motor commands from a separate cortical brain region that generates vocal variability during learning. Based on these findings, I propose here a general model of vertebrate BG function that combines context information with a distinct motor efference copy signal. (...) The model makes testable predictions about the anatomical and functional properties of hypothesized context and efference copy inputs to the striatum from both thalamic and cortical sources.

Original title: oculomotor learning revisited : a model of reinforcement learning in the basal ganglia incorporating an efference copy of motor actions .
lstm: a model of basal ganglia function .
c2c: a general model of vertebrate basal ganglia function .
fconv: a model of basal ganglia function in the songbird .
Acknowledgments
We thank the reviewers for their useful comments, and NVIDIA for the donation of a TITAN X graphics card.
References
Bahdanau, D., Cho, K., and Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Biber, D. and Gray, B. (2010). Challenging stereotypes about academic writing: Complexity, elaboration, explicitness. Journal of English for Academic Purposes, 9(1):2-20.
Brokos, G.-I., Malakasiotis, P., and Androutsopoulos, I. (2016). Using centroids of word embeddings and word mover's distance for biomedical document retrieval in question answering. arXiv preprint arXiv:1608.03905.
Chiu, B., Crichton, G., Korhonen, A., and Pyysalo, S. (2016). How to train good word embeddings for biomedical nlp. Proceedings of BioNLP16, page 166.
Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
Cho, K., Courville, A., and Bengio, Y. (2015). Describing multimedia content using attention-based encoder-decoder networks. arXiv preprint arXiv:1507.01053.
Collins, E., Augenstein, I., and Riedel, S. (2017). A supervised approach to extractive summarisation of scientific papers.

Table 5: Two examples from the test set of abstract-gen. The outputs of the extractive systems are highlighted as: tfidf-emb and rwmd-rank, whereas gray denotes overlap between the two. In bold we mark the content that was selected by the fconv system (next page in full), and in underline we mark the selection of the oracle.
Example 1 (Pyysalo et al., 2011) Body: In recent years, there has been a significant shift in focus in biomedical information extraction from simple pairwise relations representing associations such as protein-protein interactions (PPI) toward representations that capture typed, structured associations of arbitrary numbers of entities in specific roles, frequently termed event extraction [1]. Much of this work draws on the GENIA Event corpus (...) This resource served also as the source for the annotations in the first collaborative evaluation of biomedical event extraction methods, the 2009 BioNLP shared task on event extraction (BioNLP ST) [6] as well as for the GENIA subtask of the second task in the series [7,8]. Another recent trend in the domain is a move toward the application of extraction methods to the full scale of the existing literature, with results for various targets covering the entire PubMed literature database of nearly 20 million citations being made available [9,10,11,12]. As event extraction methods initially developed to target the set of events defined in the GENIA / BioNLP ST corpora are now being applied at PubMed scale, it makes sense to ask how much of the full spectrum of gene/protein associations found there they can maximally cover. (...) By contrast, we will assume that associations not appearing in this data cannot be extracted: as the overwhelming majority of current event extraction methods are based on supervised machine learning or hand-crafted rules written with reference to the annotated data, it reasonable to assume as a first approximation that their coverage of associations not appearing in that data is zero. In this study, we seek to characterize the full range of associations of specific genes/proteins described in the literature and estimate what coverage of these associations event extraction systems relying on currently available resources can maximally achieve. To address these questions, it is necessary not only to have an inventory of concepts that (largely) covers the ways in which genes/proteins can be associated, but also to be able to estimate the relative frequency with which these concepts are used to express gene/protein associations in the literature. (...) Here, as we are interested in particular in texts describing associations between two or more gene/protein related entities, we apply a focused selection, picking only those individual sentences in which two or more mentions co-occur. While this excludes associations in which the entities occur in different sentences, their relative frequency is expected to be low: for example, in the BioNLP ST data, all event participants occurred within a single sentence in 95% of the targeted biomolecular event statements. (...) Here, we follow the assumption that when two entities are stated to be associated in some way, the most important words expressing their association will typically be found on the shortest dependency path connecting the two entities (cf. the shortest path hypothesis of Bunescu and Mooney [30]). The specific dependency representation (...) Table 3 shows the words most frequently occurring on these paths. This list again suggests an increased focus on words relating to gene/protein associations: expression is the most frequent word on the paths, and binding appears in the top-ranked words. (...) Finally, to make this pair data consistent with the TPS event spans, tokenization and other features, we aligned the entity annotations of the two corpora. 
To evaluate the capability of the presented approach to identify new expressions of gene/protein associations, we next performed a manual study of candidate words for stating gene/protein associations using the E w ranking. (...) We then selected the words ranked highest by E w that were not known, grouped by normalized and lemmatized form, and added for reference examples of frequent shortest dependency paths on which any of these words appear (see example in Table 5). (...) If static relations and experimental observations and manipulations are excluded as (arguably) not in scope for event extraction, this estimate suggests that currently available resources for event extraction cover over 90% of all events involving gene/protein entities in PubMed. Discussion.
We found that out of all gene/protein associations in PubMed, currently existing resources for event extraction are lacking in coverage of a number of event types such as dissociation, many relatively rare (though biologically important) protein post-translational modifications, as well as some high-level process types involving genes/proteins such as apoptosis. (...) This suggests that for practical applications it may be important to consider also this class of associations. (...) While these results are highly encouraging, it must be noted that the approach to identifying gene/protein associations considered here is limited in a number of ways: it excludes associations stated across sentence boundaries and ones for which the shortest path hypothesis does not hold, does not treat multi-word expressions as wholes, ignores ambiguity in implicitly assuming a single sense for each word, and only directly includes associations stated between exactly two entities. The approach is also fundamentally limited to associations expressed through specific words and thus blind to e.g. part-of relations implied by statements such as CD14 Sp1-binding site. (...) Conclusions.
We have presented an approach to discovering expressions of gene/protein associations from PubMed based on named entity co-occurrences, shortest dependency paths and an unlexicalized classifier to identify likely statements of gene/protein associations. Drawing on the automatically created full-PubMed annotations of the Turku PubMed-Scale (TPS) corpus and using the BioNLP09 shared task data to define positive and negative examples of association statements, we distilled an initial set of over 30 million protein mentions into a set of 46,000 unique unlexicalized paths estimated likely to express gene/protein associations. These paths were then used to rank all words in PubMed by the expected number of times they are predicted to express such associations, and 1200 candidate association-expressing words not appearing in the BioNLP09 shared task data evaluated manually. Study of these candidates suggested 18 new event classes for the GENIA ontology and indicated that the majority of statements of gene/protein associations not covered by currently available resources are not statements of biomolecular events but rather statements of static relations or experimental manipulation. (...) It could thus be assumed that the event types and the specific statements annotated in GENIA would have only modest coverage of all gene/protein association types and statements in PubMed. (...) Example 1 Original abstract: Background: Event extraction following the GENIA Event corpus and BioNLP shared task models has been a considerable focus of recent work in biomedical information extraction. This work includes efforts applying event extraction methods to the entire PubMed literature database, far beyond the narrow subdomains of biomedicine for which annotated resources for extraction method development are available. Results: In the present study, our aim is to estimate the coverage of all statements of gene/protein associations in PubMed that existing resources for event extraction can provide. We base our analysis on a recently released corpus automatically annotated for gene/protein entities and syntactic analyses covering the entire PubMed, and use named entity co-occurrence, shortest dependency paths and an unlexicalized classifier to identify likely statements of gene/protein associations. A set of high-frequency/high-likelihood association statements are then manually analyzed with reference to the GENIA ontology. Conclusions: We present a first estimate of the overall coverage of gene/protein associations provided by existing resources for event extraction. Our results suggest that for event-type associations this coverage may be over 90%. We also identify several biologically significant associations of genes and proteins that are not addressed by these resources, suggesting directions for further extension of extraction coverage. Example 1 fconv: background in recent years , there has been a significant shift in focus in biomedical information extraction from simple pairwise relations representing associations such as protein-protein interactions ( ppi ) toward representations that capture typed , structured associations of arbitrary numbers of entities in specific roles , frequently termed event extraction . 
as event extraction methods are based on supervised machine learning or hand-crafted rules written with reference to the annotated data , it is necessary not only to have an inventory of concepts that ( largely ) covers the ways in which genes / proteins can be associated , but also to be able to estimate the relative frequency with which these concepts are used to express gene / protein associations . results we apply a focused selection , picking king only those individual sentences in which at least one named entity has been tagged . we apply a focused selection , picking only those individual sentences in which at least one named entity has been tagged . we apply a focused selection , picking only those individual sentences in which at least one named entity has been tagged . we apply a probabilistic approach Example 2 (Kumar et al., 2013) Body:
Both alopecia areata (AA) and vitiligo are autoimmune diseases, and their coexistence in the same patient is not uncommon, as vitiligo has been reported to occur in 4.1% of patients of AA and is about 4 times more common in patients with AA than in the general population.
[1] However, their colocalization over the same site is exceedingly rare, with less than five cases being reported in the literature. [2,3,4] We present a case of a 15-year-old male child who had vitiligo and later developed AA over the existing lesions of vitiligo over face and scalp and have attempted to elucidate the current understanding of mechanisms of coexistence of these two diseases. A 12-year-old boy presented to the skin outpatient department with history of depigmented areas on the scalp, face, neck, arms and legs for 5 years. He also gave a history of development of patchy loss of hair over some of these lesions for 3 years. There was no previous history of any trauma or medications. Family history was not relevant. On examination, there were depigmented macules over the scalp, forehead, eyebrows, eyebrows, perioral, preauricular regions, neck, elbows, hands, feet, shins, nose, chin, hands, knees and feet. Patches of hair loss were seen, limited to some of these depigmented areas over the vertex and occipital region of the scalp and eyebrows [ Figure 3]. Other body areas were not affected by patchy hair loss. Clinically, the diagnosis of vitiligo with AA was made. (..) Additionally, the basal layer of the epidermis was almost devoid of pigment, [ Figure 5] confirming the diagnosis of vitiligo over the same site. (..) Both AA and vitiligo are clubbed under the spectrum of autoimmune disorders. (..) Our case lends support to the hypothesis that AA and vitiligo share a common pathogenic pathway including autoimmune response against some common antigens like those derived from the bulb melanocytes. Melanocytes-derived peptide antigens released during vitiligo pathogenesis could act as auto-antigens not only for vitiligo, but also for AA, and autoimmune Th-cells against them could also trigger a response against the hair follicle melanocytes, thus, pre-disposing to AA. (..text omitted..) Example 2 Original abstract: Both alopecia areata (AA) and vitiligo are autoimmune diseases, and their coexistence in the same patient is not uncommon, as vitiligo has been reported to occur in 4.1% of patients of AA. We present a case of a 15-year-old male child who had vitiligo and later developed AA over the existing lesions of vitiligo over face and scalp and have attempted to elucidate the current understanding of mechanisms of coexistence of these two diseases. Our case lends support to the hypothesis that AA and vitiligo share a common pathogenic pathway including autoimmune response against some common antigens like those derived from the bulb melanocytes. Stimulation of proinflammatory T-helper(Th)-1 cell mediated immunological response or inactivation of a suppressor T-cell mediated response could be the common underlying phenomenon. However, the striking rarity of colocalization of these two diseases has led to the recent debate over current understanding of their pathogenesis, and whether this association is merely a coincidence. As both AA and vitiligo are frequent and chronic dermatological disorders, it is of utmost importance to gain more understanding into their pathogenic mechanisms so that more definitive treatment modalities may be devised and the quality of life of these patients can be improved. 
Example 2 fconv: alopecia areata ( aa ) and vitiligo are autoimmune diseases , and their coexistence in the same patient is not uncommon , as vitiligo has been reported to occur in 4.1 % of patients of aa and is about 4 times more common in patients with aa than in the general population . we present a case of a 15-year-old male child who had vitiligo and later developed aa over the scalp , forehead , eyebrows , eyebrows , perioral , preauricular regions , nose and chin depigmented macules over the scalp , forehead , eyebrows , eyebrows , perioral , preauricular regions , nose and chin depigmented macules over the scalp , forehead , eyebrows , periorbital , perioral , preauricular regions , nose and chin depigmented macules over the scalp , forehead , eyebrows , periorbital , perioral , preauricular regions , nose and chin depigmented macules over the scalp , forehead , eyebrows , periorbital
Figure 1: Sentence selection (normalized) histograms computed on the test set, showing the input locations that were most preferred on average by the systems on title-gen (a) and abstract-gen (b), (c).
(...) This processing was applied to the BioNLP ST training set, creating a corpus of 6889 entity pairs of which 1119 (16%) were marked as expressing an association (positive). (...) Evaluation. We first evaluated each of the word rankings discussed in the section on Identification of Gene/Protein Associations by comparing the ranked lists of words against the set of single words marked as trigger expressions in the BioNLP ST development data. (...)
Table 1: Statistics (mean and standard deviation) of the two scientific summarization datasets: title-gen and abstract-gen. Token/sentence counts are computed with NLTK.

title-gen               Abstract       Title
Token count             245 ± 54       15 ± 4
Sentence count          14 ± 4         1
Sent. token count       26 ± 14        -
Overlap                 -              73% ± 18%
Repeat                  44% ± 11%      -
Size (tr/val/test)      5 000 000 / 6844 / 6935

abstract-gen            Body           Abstract
Token count             4600 ± 1987    254 ± 54
Sentence count          172 ± 78       10 ± 3
Sent. token count       26 ± 17        26 ± 14
Overlap                 -              68% ± 10%
Repeat                  74% ± 7%       44% ± 11%
Size (tr/val/test)      893 835 / 10 916 / 10 812
For title-gen we used MEDLINE, whereas for abstract-gen we used the PubMed open access subset. MEDLINE contains scientific metadata in XML format of ∼25 million papers in the biomedical domain, whereas the PubMed open access subset contains metadata and full text of ∼1.5 million papers.
Table 2: Metric results for the title-gen dataset. R-1, R-2, R-L represent the ROUGE-1/2/L metrics.

Model       R-1     R-2     R-L     METEOR   Overlap     Token count
oracle      0.386   0.184   0.308   0.146    -           29 ± 14
lead-1      0.218   0.061   0.169   0.077    -           28 ± 14
lexrank     0.26    0.089   0.201   0.089    -           32 ± 14
emb-tfidf   0.252   0.081   0.193   0.082    -           35 ± 17
rwmd-rank   0.311   0.13    0.245   0.116    -           28 ± 13
lstm        0.375   0.173   0.329   0.204    78% ± 20%   12 ± 3
c2c         0.479   0.264   0.418   0.237    93% ± 10%   14 ± 4
fconv       0.463   0.277   0.412   0.27     95% ± 9%    15 ± 7
Table 3: Metric results for the abstract-gen dataset. R-1, R-2, R-L represent the ROUGE-1/2/L metrics.

Model       R-1     R-2     R-L     METEOR   Overlap    Repeat      Token count
oracle      0.558   0.266   0.316   0.214    -          42% ± 10%   327 ± 99
lead-10     0.385   0.111   0.18    0.138    -          20% ± 4%    312 ± 88
lexrank     0.45    0.163   0.213   0.157    -          52% ± 10%   404 ± 131
emb-tfidf   0.445   0.159   0.216   0.159    -          52% ± 10%   369 ± 117
rwmd-rank   0.454   0.159   0.216   0.167    -          50% ± 10%   344 ± 93
fconv       0.354   0.131   0.209   0.212    98% ± 2%   52% ± 28%   194 ± 15
(Table 1). As revealed by the large Repeat standard deviation for fconv, some examples are affected by very frequent repetitions.
Table 4: Examples from the test set of title-gen. The outputs of the extractive systems are highlighted as: oracle, tfidf-emb, rwmd-rank.
https://nlm.nih.gov/databases/download/
Denkowski, M. and Lavie, A. (2014). Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation.
Dietterich, T. G., Domingos, P., Getoor, L., Muggleton, S., and Tadepalli, P. (2008). Structured machine learning: the next ten years. Machine Learning, 73(1):3-23.
Erkan, G. and Radev, D. R. (2004). LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457-479.
Fee, M. S. (2012). Oculomotor learning revisited: a model of reinforcement learning in the basal ganglia incorporating an efference copy of motor actions. Frontiers in Neural Circuits, 6:38.
Gehring, J., Auli, M., Grangier, D., Yarats, D., and Dauphin, Y. N. (2017). Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122.
Giridharan, V. V., Thandavarayan, R. A., Arumugam, S., Mizuno, M., Nawa, H., Suzuki, K., Ko, K. M., Krishnamurthy, P., Watanabe, K., and Konishi, T. (2015). Schisandrin B ameliorates ICV-infused amyloid β induced oxidative stress and neuronal dysfunction through inhibiting RAGE/NF-κB/MAPK and up-regulating HSP/Beclin expression. PLoS One, 10(11):e0142483.
Hunter, L. and Cohen, K. B. (2006). Biomedical language processing: what's beyond PubMed? Molecular Cell, 21(5):589-594.
Kalchbrenner, N., Espeholt, L., Simonyan, K., van den Oord, A., Graves, A., and Kavukcuoglu, K. (2016). Neural machine translation in linear time. CoRR, abs/1610.10099.
Kim, M., Singh, M. D., and Lee, M. (2016). Towards abstraction from extraction: Multiple timescale gated recurrent unit for summarization. arXiv preprint arXiv:1607.00718.
Kumar, S., Mittal, J., and Mahajan, B. (2013). Colocalization of vitiligo and alopecia areata: coincidence or consequence? International Journal of Trichology, 5(1):50.
Kusner, M. J., Sun, Y., Kolkin, N. I., and Weinberger, K. Q. (2015). From word embeddings to document distances. In ICML, volume 15, pages 957-966.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324.
Lee, J., Cho, K., and Hofmann, T. (2016). Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017.
Lin, C.-Y. (2004). ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, volume 8.
Lloret, E., Romá-Ferri, M. T., and Palomar, M. (2013). COMPENDIUM: A text summarization system for generating abstracts of research papers. Data & Knowledge Engineering, 88:164-175.
Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Nallapati, R., Zhou, B., Gulçehre, Ç., and Xiang, B. (2016). Abstractive text summarization using sequence-to-sequence RNNs and beyond. arXiv preprint arXiv:1602.06023.
Nenkova, A., Maskey, S., and Liu, Y. (2011). Automatic summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts of ACL 2011, page 3. Association for Computational Linguistics.
Page, L., Brin, S., Motwani, R., and Winograd, T. (1999). The PageRank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab.
Pyysalo, S., Ohta, T., and Tsujii, J. (2011). An analysis of gene/protein associations at PubMed scale. Journal of Biomedical Semantics, 2(5):S5.
Radev, D. R., Jing, H., Styś, M., and Tam, D. (2004). Centroid-based summarization of multiple documents. Information Processing & Management, 40(6):919-938.
Rossiello, G., Basile, P., and Semeraro, G. (2017). Centroid-based text summarization through compositionality of word embeddings. MultiLing 2017, page 12.
Rush, A. M., Chopra, S., and Weston, J. (2015). A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685.
Sennrich, R., Haddow, B., and Birch, A. (2015). Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.
Suzuki, J. and Nagata, M. (2017). Cutting-off redundant repeating generations for neural abstractive summarization. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 291-297.
Teufel, S. and Moens, M. (2002). Summarizing scientific articles: experiments with relevance and rhetorical status. Computational Linguistics, 28(4):409-445.
| [] |
[
"Once is Enough: A Light-Weight Cross-Attention for Fast Sentence Pair Modeling",
"Once is Enough: A Light-Weight Cross-Attention for Fast Sentence Pair Modeling"
] | [
"Yuanhang Yang \nUniversity of Warwick\n\n",
"Shiyi Qi shiyiqi@stu.hit.edu.cncuiyungao \nUniversity of Warwick\n\n",
"Cuiyun Gao \nUniversity of Warwick\n\n",
"Zenglin Xu \nUniversity of Warwick\n\n",
"Chuanyi Liu chuanyiliu@hit.edu.cnyulan.he@warwick.ac.ul \nUniversity of Warwick\n\n",
"Yulan He \nMeta AI\n\n",
"Qifan Wang ",
"\nHarbin Institute of Technology\n\n"
] | [
"University of Warwick\n",
"University of Warwick\n",
"University of Warwick\n",
"University of Warwick\n",
"University of Warwick\n",
"Meta AI\n",
"Harbin Institute of Technology\n"
] | [] | Transformer-based models have achieved great success on sentence pair modeling tasks, such as answer selection and natural language inference (NLI). These models generally perform cross-attention over input pairs, leading to prohibitive computational cost. Recent studies propose dual-encoder and late interaction architectures for faster computation. However, the balance between the expressive of crossattention and computation speedup still needs better coordinated. To this end, this paper introduces a novel paradigm MixEncoder for efficient sentence pair modeling. MixEncoder involves a light-weight cross-attention mechanism. It conducts query encoding only once while modeling the query-candidate interaction in parallel. Extensive experiments conducted on four tasks demonstrate that our Mix-Encoder can speed up sentence pairing by over 113x while achieving comparable performance as the more expensive cross-attention models. | 10.48550/arxiv.2210.05261 | [
"https://export.arxiv.org/pdf/2210.05261v2.pdf"
] | 252,815,795 | 2210.05261 | 93e2501e3a489490af9ff34601df73954f860fd0 |
Once is Enough: A Light-Weight Cross-Attention for Fast Sentence Pair Modeling
Yuanhang Yang
University of Warwick
Shiyi Qi shiyiqi@stu.hit.edu.cncuiyungao
University of Warwick
Cuiyun Gao
University of Warwick
Zenglin Xu
University of Warwick
Chuanyi Liu chuanyiliu@hit.edu.cnyulan.he@warwick.ac.ul
University of Warwick
Yulan He
Meta AI
Qifan Wang
Harbin Institute of Technology
Once is Enough: A Light-Weight Cross-Attention for Fast Sentence Pair Modeling
Transformer-based models have achieved great success on sentence pair modeling tasks, such as answer selection and natural language inference (NLI). These models generally perform cross-attention over input pairs, leading to prohibitive computational cost. Recent studies propose dual-encoder and late interaction architectures for faster computation. However, the balance between the expressiveness of cross-attention and the computation speedup still needs to be better coordinated. To this end, this paper introduces a novel paradigm, MixEncoder, for efficient sentence pair modeling. MixEncoder involves a light-weight cross-attention mechanism. It conducts query encoding only once while modeling the query-candidate interaction in parallel. Extensive experiments conducted on four tasks demonstrate that our MixEncoder can speed up sentence pair modeling by over 113x while achieving comparable performance to the more expensive cross-attention models.
Introduction
Transformer-based models (Vaswani et al., 2017; Devlin et al., 2019) have shown promising performance on sentence pair modeling tasks such as natural language inference, question answering, and information retrieval (Nogueira and Cho, 2019). Most pair modeling tasks can be depicted as a procedure of scoring candidates given a query. A fundamental component of these models is the pre-trained cross-encoder, which models the interaction between the query and the candidates. As shown in Figure 1(a), the cross-encoder takes a query-candidate pair as input and calculates the interaction between them at each layer with input-wide self-attention. This interaction must be computed N times if there are N candidates. Despite their strong text representation power, these models incur prohibitive computation costs, especially when the number of candidates is very large. This cost restricts the use of cross-encoder models in many real-world applications.
Extensive studies, including dual-encoder (Huang et al., 2013; Reimers and Gurevych, 2019) and late interaction models (MacAvaney et al., 2020; Gao et al., 2020; Khattab and Zaharia, 2020), have been proposed to accelerate Transformer inference on sentence pair modeling tasks. As shown in Figure 1(b), the query and candidates are processed separately in dual-encoders, so the candidates can be pre-computed and cached for online inference, resulting in fast inference. However, this speedup is built upon sacrificing the expressiveness of cross-attention (Luan et al., 2021; Hu et al., 2021). Alternatively, late-interaction models adjust dual-encoders by appending an interaction component, such as a stack of Transformer layers (Cao et al., 2020; Nie et al., 2020), for modeling the interaction between the query and the cached candidates, as illustrated in Figure 1(c). Although these interaction components preserve the effectiveness of cross-attention better than dual-encoders, they still suffer from the heavy cost of the interaction component. Clearly, the computation cost of late-interaction models still grows dramatically as the number of candidates increases (Zhang et al., 2021).
To tackle the above issues, we propose a new paradigm named MixEncoder to speed up inference while maintaining the expressiveness of cross-attention. In particular, MixEncoder involves a light-weight cross-attention mechanism which mostly disentangles query encoding from query-candidate interaction. Specifically, MixEncoder encodes the query along with the pre-computed candidates at runtime, and conducts the light-weight cross-attention at dedicated layers (named interaction layers), as illustrated in Figure 1(d).

Figure 1: Architecture illustration of three popular sentence pair approaches and the proposed MixEncoder, where N denotes the number of candidates and s denotes the relevance score of candidate-query pairs. The cache is used to store the pre-computed embeddings.
This design of light-weight cross-attention allows the interaction layer to process all the candidates in parallel. Thus, MixEncoder is able to encode the query only once, regardless of the number of candidates.
MixEncoder accelerates online inference in two ways. First, MixEncoder processes each candidate into k dense context embeddings offline and caches them, where k is a hyper-parameter. This speeds up online inference by reusing the pre-computed representations. Second, the interaction layer performs attention only from the candidates to the query. This disentangles query encoding from query-candidate interaction, thus avoiding repeated query encoding and supporting processing multiple candidates in parallel.
We evaluate the capability of MixEncoder for sentence pair modeling on four benchmark datasets, covering natural language inference, dialogue and information retrieval. The results demonstrate that MixEncoder strikes a better balance between effectiveness and efficiency. For example, MixEncoder achieves a substantial speedup of more than 113x over the cross-encoder while providing competitive performance.
In summary, our main contributions are as follows:
• A novel framework MixEncoder is proposed for fast and accurate sentence pair modeling. MixEncoder involves a light-weight cross-attention mechanism which allows us to encode the query once and process all the candidates in parallel.
• Extensive experiments on four public datasets demonstrate that the proposed MixEncoder provides better trade-offs between effectiveness and efficiency than state-of-the-art models.
Background and Related Work
Neural ranking models. These models focus on measuring the relevance of sentence pairs. A common practice is to map each sentence to a dense vector separately, and then measure their relevance with a similarity function (Huang et al., 2013; Karpukhin et al., 2020). These models are known as dual-encoder models. Dual-encoder models can pre-compute the candidate representations offline, since candidate encoding is conducted independently of the query. Recently, pre-trained Transformer-based models (cross-encoders) have achieved great success on many sentence pair tasks (Li et al., 2022; Guo et al., 2022). These models take the concatenation of a sentence pair as input and perform cross-attention at each layer, which brings deep interactions between the input query and the candidate. Despite their promising performance, cross-encoder models face significant latency in online inference since all the candidates are encoded online.

Late-interaction models. Various late interaction models have been proposed to combine the advantages of the dual-encoder and the cross-encoder. Specifically, these models disentangle sentence pair modeling into separate encoding followed by a late interaction. They can pre-compute candidate representations offline, and model the relationship of query-candidate pairs with cross-attention online. For instance, late-interaction models such as Deformer (Cao et al., 2020) and PreTTR (MacAvaney et al., 2020) are based on a decomposed Transformer, where the lower layers encode the query and candidate separately and the higher layers process them jointly. As shown in Figure 1(c), given N candidates, the late Transformer layers have to encode the query N times, which results in extensive computation costs. Other models adopt a light-weight interaction mechanism, such as poly-attention (Humeau et al., 2020) or MaxSim (Khattab and Zaharia, 2020), instead of Transformer layers to speed up online inference.
Our MixEncoder can behave as a late interaction model by replacing the upper Transformer layers of a dual-encoder with our interaction layer. The novelty of the MixEncoder lies in the light-weight cross-attention mechanism and pre-computed context embeddings.
Method
In this section, we first introduce the details of the proposed MixEncoder, which mainly includes two stages, i.e., a candidate pre-computation stage and a query encoding stage. Figure 2 provides the architecture of MixEncoder. We then describe how to apply MixEncoder to different tasks such as classification and ranking.
Problem Statement
Given a sentence pair, models are required to generate either a prediction or a ranking score. The former is known as a linear-probe classification task (Conneau and Kiela, 2018) and the latter as a multi-candidate ranking task (Nguyen et al., 2016). For the classification task, the training set consists of paired samples {(q_i, p_i, y_i)}_{i=1}^{N}, where y_i is the label of the sentence pair, N is the size of the dataset, and q_i, p_i denote the query and the candidate, respectively. For the ranking task, the samples in the training set can be denoted as {(q_i, p_i, C_i)}_{i=1}^{N}, where p_i is the positive candidate for q_i while C_i is a set of negative candidates.
Candidate Pre-computation
We describe how MixEncoder pre-computes each existing candidate into several context embeddings offline. Let the token embeddings of one candidate be T_i = [t_1, · · · , t_d]. We experiment with two strategies to obtain k context embeddings from these token embeddings: (1) prepending k special tokens {S_i}_{i=1}^{k} to T_i before feeding T_i into a Transformer encoder (Vaswani et al., 2017; Devlin et al., 2019), and using the outputs at these special tokens as context embeddings (S-strategy); (2) maintaining k context codes (Humeau et al., 2020) that extract global features from the last-layer output of the encoder via an attention mechanism (C-strategy). The default configuration is the S-strategy, as it provides slightly better performance.

Suppose there are N candidates; we use E^0 ∈ R^{N×k×d} to denote the pre-computed context embeddings of these candidates, where d indicates the embedding size.
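As a concrete illustration of the S-strategy, the sketch below pre-computes candidate context embeddings once offline. The [CTX*] token names, the HuggingFace bert-base-uncased checkpoint, and the batching details are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the offline candidate pre-computation (S-strategy).
import torch
from transformers import AutoModel, AutoTokenizer

K = 2  # number of context embeddings per candidate (hyper-parameter k)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

# Reserve k special tokens and make room for them in the embedding table.
ctx_tokens = [f"[CTX{i}]" for i in range(K)]
tokenizer.add_special_tokens({"additional_special_tokens": ctx_tokens})
encoder.resize_token_embeddings(len(tokenizer))

def precompute_candidates(candidates):
    """Encode each candidate once and keep only its k context embeddings."""
    prefix = " ".join(ctx_tokens)
    batch = tokenizer([f"{prefix} {c}" for c in candidates],
                      padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state   # (N, seq_len, d)
    # Positions 1..K hold the outputs at the prepended special tokens
    # (position 0 is [CLS]); they are cached offline as E^0 of shape (N, k, d).
    return hidden[:, 1:1 + K, :]

E0 = precompute_candidates(["first candidate text", "second candidate text"])
print(E0.shape)  # expected: torch.Size([2, 2, 768])
```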
Query Encoding
During the online inference stage, for a query with N candidates, models have to measure the relevance of N query-candidate pairs. A typical cross-encoder repeatedly concatenates the query with each candidate and encodes it N times, which leads to prohibitive computation costs. One of the most effective ways to reduce this computation is to reduce the number of times the query is encoded.

In this section, we first give an overview of the query encoder. Then, we introduce the core component of MixEncoder: the interaction layer. It performs a light-weight candidate-to-query cross-attention to estimate relevance scores in a single pass of query encoding, no matter how many candidates the query has.
Overview of Encoder
Take an encoder that consists of five Transformer layers L_1, L_2, . . . , L_5 as an example. When encoding the incoming query online, we replace the second and fifth Transformer layers L_2, L_5 with two interaction layers, denoted I^1_2 and I^2_5. Now the encoder can be depicted as {L_1, I^1_2, L_3, L_4, I^2_5}, as shown in Figure 2(b). These layers are applied to the incoming query sequentially to produce contextualized representations of the query and the candidates.

Formally, each Transformer layer L_i(·) takes the query token representations q_{i-1} ∈ R^{m×d} from the previous layer and produces a new representation matrix q_i = L_i(q_{i-1}), where m denotes the query length and q_i ∈ R^{m×d}.

Each interaction layer I^j_i(·) takes the query token representations q_{i-1} from the previous layer as input, along with the context embeddings E^{j-1} and a set of state vectors H^{j-1} ∈ R^{N×d} from the previous interaction layer (or the cache):

    q_i, E^j, H^j = I^j_i(q_{i-1}, E^{j-1}, H^{j-1}).    (1)

The outputs E, H of the last interaction layer are fed into a classifier to generate predictions for each query-candidate pair.
Interaction Layer
This section describes the details of how the interaction layer generates candidate and query representations.
Candidate Representation. Given q_{i-1} and E^{j-1}, layer I^j_i performs a self-attention over q_{i-1} and, simultaneously, a candidate-to-query cross-attention over q_{i-1} and E^{j-1}, as shown in Figure 2(b). Formally, the query self-attention is conducted as

    Q_{i-1}, K_{i-1}, V_{i-1} = LN(q_{i-1}),    (2)
    q_i = FFN(Att(Q_{i-1}, K_{i-1}, V_{i-1})),    (3)

where we write LN(·) for a linear transformation, FFN(·) for a feed-forward network and Att(Q, K, V) for a self-attention operation (Vaswani et al., 2017). The cross-attention is formulated as

    Q^{j-1}, K^{j-1}, V^{j-1} = LN(E^{j-1}),    (4)
    E^j = FFN(Att(Q^{j-1}, [K^{j-1}; K_{i-1}], [V^{j-1}; V_{i-1}])).    (5)

By simply concatenating K_{i-1}, V_{i-1} generated from the query with K^{j-1}, V^{j-1} generated from the candidates, the cross-attention operation driven by Q^{j-1} aggregates the semantics of each query-candidate pair and produces new context embeddings E^j ∈ R^{N×k×d}.

As shown in Eqs. (3) and (5), the interaction layer separates query encoding from cross-attention, so the candidate embeddings are transparent to the query. This design allows encoding the query only once regardless of the number of its candidates.
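The following is a minimal single-head sketch of Eqs. (2)-(5). Multi-head attention, residual connections and layer normalization are omitted, and the module and variable names are assumptions made for illustration; the point is only that the query attends to itself while each candidate attends to itself and to the query.

```python
# Single-head sketch of the candidate-side computation in one interaction layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

def attention(q, k, v):
    """Scaled dot-product attention: softmax(q k^T / sqrt(d)) v."""
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    return F.softmax(scores, dim=-1) @ v

class CandidateCrossAttention(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.q_proj = nn.Linear(d, 3 * d)  # LN(.) of Eq. (2): Q, K, V for the query
        self.e_proj = nn.Linear(d, 3 * d)  # LN(.) of Eq. (4): Q, K, V for candidates
        self.ffn_q = nn.Linear(d, d)       # FFN of Eq. (3)
        self.ffn_e = nn.Linear(d, d)       # FFN of Eq. (5)

    def forward(self, q_prev, e_prev):
        # q_prev: (m, d) query token states; e_prev: (N, k, d) cached candidates.
        Qq, Kq, Vq = self.q_proj(q_prev).chunk(3, dim=-1)           # Eq. (2)
        q_next = self.ffn_q(attention(Qq, Kq, Vq))                  # Eq. (3)

        Qe, Ke, Ve = self.e_proj(e_prev).chunk(3, dim=-1)           # Eq. (4)
        # Each candidate attends over its own k embeddings plus the query
        # tokens (Eq. 5); the query never attends to candidates, so it is
        # encoded only once no matter how many candidates are scored.
        n = e_prev.size(0)
        keys = torch.cat([Ke, Kq.unsqueeze(0).expand(n, -1, -1)], dim=1)
        vals = torch.cat([Ve, Vq.unsqueeze(0).expand(n, -1, -1)], dim=1)
        e_next = self.ffn_e(attention(Qe, keys, vals))              # Eq. (5)
        return q_next, e_next

layer = CandidateCrossAttention(d=768)
q = torch.randn(12, 768)         # a 12-token query
E = torch.randn(100, 2, 768)     # 100 cached candidates, k = 2
q_out, E_out = layer(q, E)
print(q_out.shape, E_out.shape)  # (12, 768) (100, 2, 768)
```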
Query Representation. As shown in Eq. (5), the context embedding matrix E contains semantics from both the query and the candidates. It can be used to estimate the relevance score of candidate-query pairs as

    s = LN(Avg(E)),    (6)

where s ∈ R^N. Since E may not be sufficient to represent the semantics of each candidate-query pair, we choose to maintain a separate embedding h to represent the query. Concretely, we conduct an attention operation at each interaction layer and obtain a unique query state for each candidate.

We first employ a pooling operation followed by a linear transformation on E^{j-1} and obtain Q* ∈ R^{N×d}. Then, the query semantics w.r.t. the candidates are extracted as

    H* = FFN(Att(Q*, K_{i-1}, V_{i-1})),    (7)

where K_{i-1}, V_{i-1} are generated by Eq. (2). Next, the gate proposed by Cho et al. (2014) is utilized to fuse H* with the query states H^{j-1}:

    H^j = Gate(H*, H^{j-1}),    (8)

where H^j ∈ R^{N×d}. Each row of H^j stands for the representation of the incoming query with respect to one candidate.
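Continuing the simplification above, a minimal sketch of the per-candidate query-state update of Eqs. (7)-(8) might look as follows; the mean pooling over E^{j-1} and the exact gate parameterization are assumptions (the gate follows the GRU-style update of Cho et al. (2014) cited in the text).

```python
# Minimal single-head sketch of the query-state update (Eqs. 7-8).
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryStateUpdate(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.pool_proj = nn.Linear(d, d)  # pooling + linear transformation -> Q*
        self.ffn = nn.Linear(d, d)        # FFN of Eq. (7)
        self.gate = nn.Linear(2 * d, d)   # update gate of Eq. (8)

    def forward(self, e_prev, k_query, v_query, h_prev):
        # e_prev: (N, k, d); k_query, v_query: (m, d) from Eq. (2); h_prev: (N, d).
        q_star = self.pool_proj(e_prev.mean(dim=1))                 # (N, d)
        scores = q_star @ k_query.t() / (q_star.size(-1) ** 0.5)    # (N, m)
        h_star = self.ffn(F.softmax(scores, dim=-1) @ v_query)      # Eq. (7)
        z = torch.sigmoid(self.gate(torch.cat([h_star, h_prev], dim=-1)))
        return z * h_star + (1.0 - z) * h_prev                      # Eq. (8)

update = QueryStateUpdate(d=768)
H = update(torch.randn(100, 2, 768), torch.randn(12, 768),
           torch.randn(12, 768), torch.zeros(100, 768))
print(H.shape)  # (100, 768): one query state per candidate
```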
Classifier
Let H and E denote the query states and the candidate context embeddings generated by the last interaction layer, respectively. For the i-th candidate, the representation of the query is the i-th row of H, denoted h_i. The representation of this candidate is the mean of the i-th row of the context embeddings E, denoted e_i.

Classification Task: For a classification task such as NLI, we concatenate the embeddings h_i and e_i with the element-wise difference |h_i − e_i| (Reimers and Gurevych, 2019) and feed them into a feed-forward network:

    logits = FFN(h_i, e_i, |h_i − e_i|).    (9)

The network is trained to minimize a cross-entropy loss.
Ranking Task: For ranking tasks such as passage retrieval, we estimate the relevance score of each candidate-query pair as

    s_i = h_i · e_i,    (10)

where · denotes the dot product. The network is optimized by minimizing a cross-entropy loss in which the logits are s_1, · · · , s_N.
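Below is a small sketch of the two output heads, under the assumption of a 3-class NLI setup and mean-pooled candidate embeddings; the dimensions are illustrative.

```python
# Sketches of the classifier of Eq. (9) and the dot-product ranker of Eq. (10).
import torch
import torch.nn as nn

d, num_classes = 768, 3
classifier = nn.Linear(3 * d, num_classes)

def classify(h_i, e_i):
    """Eq. (9): FFN over (h_i, e_i, |h_i - e_i|), trained with cross-entropy."""
    features = torch.cat([h_i, e_i, (h_i - e_i).abs()], dim=-1)
    return classifier(features)

def rank(H, E):
    """Eq. (10): s_i = h_i . e_i for every candidate i."""
    e = E.mean(dim=1)            # (N, d): mean of each candidate's k embeddings
    return (H * e).sum(dim=-1)   # (N,) relevance scores

H, E = torch.randn(100, d), torch.randn(100, 2, d)
print(classify(H[0], E[0].mean(0)).shape, rank(H, E).shape)  # (3,) and (100,)
```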
Time Complexity

Table 1 presents the time complexity of Dual-BERT, Cross-BERT, and the proposed MixEncoder. We can observe that the dual-encoder and MixEncoder support offline pre-computation to reduce the online time complexity. During online inference, the query encoding cost term hq^2 + h^2 q of both Dual-BERT and MixEncoder does not grow with the number of candidates, since they encode the query only once. Moreover, MixEncoder's query-candidate interaction term N_c(k + q + h)hk can be reduced by choosing a small k, which further speeds up inference.
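As a rough sanity check of these expressions, the snippet below plugs illustrative values into the online-cost terms of Table 1 and prints how each grows with N_c; the chosen numbers (q = 16, d = 64, h = 768, k = 2) are assumptions and not taken from the paper.

```python
# Back-of-the-envelope comparison of the online-cost expressions in Table 1.
q, d, h, k = 16, 64, 768, 2   # query length, candidate length, hidden size, k

def online_cost(n_c):
    dual = h * q**2 + h**2 * q                              # query encoded once
    cross = n_c * (h * (q + d)**2 + h**2 * (q + d))         # re-encode every pair
    mix = h * q**2 + h**2 * q + n_c * (k + q + h) * h * k   # Table 1, last row
    return dual, mix, cross

for n_c in (10, 100, 1000):
    dual, mix, cross = online_cost(n_c)
    print(f"N_c={n_c:5d}  dual={dual:.2e}  mix={mix:.2e}  cross={cross:.2e}")
```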
Experiments
Datasets
To fully evaluate the proposed MixEncoder, we conduct an empirical evaluation on four paired-input datasets, covering natural language inference (NLI), information retrieval, and utterance selection for dialogue.

MNLI (Multi-Genre Natural Language Inference) (Williams et al., 2018) is a crowd-sourced classification dataset. It contains sentence pairs annotated with textual entailment information.

MS MARCO Passage Reranking (Nguyen et al., 2016) is a large collection of passages collected from Bing search logs. Given a query, the goal is to rank the provided 1000 passages. We use a subset of the training data. Following previous work, we evaluate the models on the 6980 development queries (Khattab and Zaharia, 2020; Gao et al., 2020).

DSTC7 (Yoshino et al., 2019) is a chat log corpus contained in the DSTC7 challenge (Track 1). It consists of multi-turn conversations in which one partner seeks technical support from the other.

Ubuntu V2 (Lowe et al., 2015) is a popular corpus similar to DSTC7; it was proposed earlier and contains more data than DSTC7.

These four datasets share the same form: every sample contains one query and several candidates. The statistics of these datasets are detailed in Table 2. We use accuracy to evaluate classification performance on MNLI. For the other datasets, MRR and recall are used as evaluation metrics.
Baselines
MixEncoder is compared to the following baselines:

Cross-BERT refers to the original BERT (Devlin et al., 2019). We take the output at the CLS token as the representation of the pair. This embedding is fed into a feed-forward network to generate logits for either classification or matching tasks.

Dual-BERT (Sentence-BERT) is proposed by Reimers and Gurevych (2019). This model uses a siamese architecture and encodes the two texts of a pair separately.

Deformer (Cao et al., 2020) is a decomposed Transformer, which utilizes lower layers to encode the query and candidates separately and then uses upper layers to encode text pairs together. We followed the settings reported in the original paper and split BERT-base into nine lower layers and three upper layers.

Poly-Encoder (Humeau et al., 2020) encodes the query and its candidates separately and performs a light-weight late interaction. Before the interaction layer, the query is compressed into several context vectors. We set the number of these context vectors to 64 and 360, respectively.

ColBERT (Khattab and Zaharia, 2020) is a late interaction model for information retrieval. It adopts the MaxSim operation to obtain relevance scores after encoding the sentence pairs separately. Note that the design of ColBERT prohibits its use on classification tasks.

Table 1: Time complexity of the attention module in MixEncoder, Dual-BERT and Cross-BERT. We use q, d to denote query and candidate length, respectively. h indicates the hidden layer dimension, N_c indicates the number of candidates for each query and k indicates the number of context embeddings for each candidate.

Model        Total (N_c = 1)                             Pre-computation (N_c = 1)   Online
Dual-BERT    h(d^2 + q^2) + h^2(d + q)                   hd^2 + h^2 d                hq^2 + h^2 q
Cross-BERT   h(d + q)^2 + h^2(d + q)                     0                           N_c (h(q + d)^2 + h^2(q + d))
MixEncoder   h(d^2 + q(q + k) + k^2) + h^2(d + q + k)    hd^2 + h^2 d                h(q(q + kN_c) + k^2 N_c) + h^2(q + kN_c) = hq^2 + h^2 q + N_c(k + q + h)hk
Training Details
While training models on MNLI, we follow the conventional practice of using the labels provided in the dataset. While training models on the other three datasets, we use in-batch negatives (Karpukhin et al., 2020), which consider the positive candidates of other queries in a training batch as negative candidates. For Cross-BERT and Deformer, which require exhaustive computation, we set the batch size to 16 due to the limitation of computation resources. For the other models, we set the batch size to 64. All the models use one BERT (base, uncased) with 12 layers and fine-tune it for up to 50 epochs with a learning rate of 1e-5 and linear scheduling. All experiments are conducted on a server with 4 Nvidia Tesla A100 GPUs, each with 40 GB of graphics memory.
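A minimal sketch of the in-batch negative objective described above: within a batch, each query is scored against every positive candidate in the batch and trained to prefer its own. The dot-product scoring and the dimensions are illustrative assumptions.

```python
# In-batch negatives: other queries' positives act as negatives for each query.
import torch
import torch.nn.functional as F

def in_batch_negative_loss(query_vecs, pos_cand_vecs):
    """query_vecs, pos_cand_vecs: (B, d); row i of each belongs to example i."""
    scores = query_vecs @ pos_cand_vecs.t()   # (B, B): all query-candidate pairs
    labels = torch.arange(scores.size(0))     # the diagonal holds the positives
    return F.cross_entropy(scores, labels)

loss = in_batch_negative_loss(torch.randn(64, 768), torch.randn(64, 768))
print(loss.item())
```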
Performance Comparison

Table 4 shows the experimental results of Dual-BERT, Cross-BERT, existing late interaction models and three variants of MixEncoder on the four datasets. We measure the inference time of all the baseline models and present the results in Table 3.
Variants of MixEncoder. To study the effect of the number of interaction layers and of the number of context embeddings per candidate, we present three variants in the tables, denoted MixEncoder-a, -b and -c, respectively. Specifically, MixEncoder-a and -b set k to 1. The former has the single interaction layer I^1_12 and the latter has the layers {I^1_10, I^2_11, I^3_12}. MixEncoder-c has the same layers as MixEncoder-b but with k = 2.

Inference Speed. We conduct speed experiments to measure the online inference speed of all the baselines. Concretely, we sample 100 queries from MS MARCO, each with roughly 1000 candidates. We measure the time for computations on the GPU and exclude the time for text preprocessing and moving data to the GPU.

Dual-BERT and Cross-BERT. The performance of Dual-BERT and Cross-BERT is reported in the first two rows of Table 4. We can observe that MixEncoder consistently outperforms Dual-BERT. The variants with more interaction layers or more context embeddings generally yield larger improvements. For example, on DSTC7, MixEncoder-a and MixEncoder-b achieve improvements of 1.0% (absolute) and 2.4% over Dual-BERT, respectively. Moreover, MixEncoder-a provides performance comparable to Cross-BERT on both Ubuntu and DSTC7. MixEncoder-b can even outperform Cross-BERT on DSTC7 (+0.9), since MixEncoder benefits from a large batch size (Humeau et al., 2020). On MNLI, MixEncoder-a retains 92.6% of the effectiveness of Cross-BERT and MixEncoder-c retains 93.7%. However, the gains of MixEncoder on MS MARCO are slight. We also find that the difference in inference time for processing samples with 1k candidates between Dual-BERT and MixEncoder is minimal, while Cross-BERT is two orders of magnitude slower than these models.

Late Interaction Models. From Tables 3 and 4, we can make the following observations. First, among all the late interaction models, Deformer, which adopts a stack of Transformer layers as the late interaction component, consistently shows the best performance on all the datasets. This demonstrates the effectiveness of cross-attention in Transformer layers. In exchange, Deformer shows limited speedup (1.9x) compared to Cross-BERT. Compared to ColBERT and Poly-Encoder, our MixEncoder outperforms them on all datasets except MS MARCO. Although ColBERT consumes more computation than MixEncoder, it shows worse performance than MixEncoder on DSTC7 and Ubuntu. This demonstrates the effectiveness of the light-weight cross-attention, which achieves a trade-off between efficiency and effectiveness. However, on MS MARCO, our MixEncoder and Poly-Encoder lag behind ColBERT by a large margin. We conjecture that MixEncoder falls short of handling token-level matching; we elaborate on this in Section 5.5.
Effectiveness of Interaction Layer
Representations. We conduct ablation studies to quantify the impact of the two key components (E and H) used in MixEncoder. The results are shown in Table 5. Each component results in a gain in performance compared to Dual-BERT, demonstrating that our simplified cross-attention produces effective representations for both the candidate and the query. An interesting observation is that removing E can lead to a slight improvement on DSTC7. Moreover, we also implement MixEncoder based on Eq. (6), where a linear transformation is applied to E to estimate relevance scores; this leads to a drop in performance.

Varying the Interaction Layers. To verify the impact of the interaction layer, we perform ablation studies by varying the number and the position of the layers. First, we use two interaction layers {I^1_i, I^2_12} and choose i from the set {1, 2, 4, 6, 8, 10, 11}. The results are shown in Figure 3(b). We find that MixEncoder on Ubuntu is insensitive to i, while MixEncoder on DSTC7 can be enhanced with i = 11. Moreover, Figure 3(c) shows the results when MixEncoder has the interaction layers {I^1_i, I^2_{i+1}, · · · , I^{13−i}_12}. Increasing the number of interaction layers does not always improve the ranking quality. On Ubuntu, replacing all the Transformer layers yields performance close to that with only the last layer replaced. On DSTC7, the performance of MixEncoder peaks with the last three layers replaced by our interaction layers.
Candidate Pre-computation
We study the effect of the number of candidate embeddings, denoted k, and of the pre-computation strategies introduced in Section 3.2. Specifically, we choose the value of k from the set {1, 2, 3, 10} with one interaction layer I^1_12. From Figure 3(c), we observe that as k gets larger, the performance of MixEncoder first increases and then declines. Moreover, the two pre-computation strategies have different impacts on model performance: the S-strategy generally outperforms the C-strategy with the same k.
In-batch Negative Training
We vary the batch size and show the results in Figure 3(d). It can be observed that increasing the batch size contributes to good performance. Moreover, we observe that models may fail to converge with a small batch size. Due to the limitation of computation resources, we set the batch size to 64 for our training.
Error Analysis
In this section, we take a sample from MS MARCO to analyze our errors. We observe that MixEncoder falls short of detecting token overlap. Given the query "foods and supplements to lower blood sugar", MixEncoder fails to pay attention to the keyword "supplements", which appears in both the query and the positive candidate. We conjecture that this drawback is due to the pre-computation that compresses each candidate into k context embeddings, which loses the token-level features of the candidates. In contrast, ColBERT caches all the token embeddings of the candidates and estimates relevance scores based on token-level similarity.
Conclusion
In this paper, we propose MixEncoder, which provides a good trade-off between performance and efficiency. MixEncoder involves a light-weight cross-attention mechanism which allows us to encode the query once and process all the candidates in parallel. We evaluate MixEncoder on four datasets. The results demonstrate that MixEncoder can speed up sentence pair modeling by over 113x while achieving comparable performance to the more expensive cross-attention models.
Although MixEncoder is demonstrated to be effective, we recognize that it does not perform well on MS MARCO. This indicates that MixEncoder falls short of detecting token overlap, since it may lose token-level semantics of candidates during pre-computation. Moreover, MixEncoder has not been evaluated on a large-scale end-to-end retrieval task, which requires a model to retrieve the top-k candidates from millions of candidates (Khattab and Zaharia, 2020).
Figure 2: Overview of the proposed MixEncoder.
Figure 3: Parameter analysis on the number and the position of interaction layers, the length and the strategy of context embeddings, and the batch size of in-batch negative training.
Table 2: Statistics of experimental datasets.

                                MNLI      MS MARCO   DSTC7     Ubuntu V2
Train
  # of queries                  392,702   498,970    200,910   500,000
  Avg length of queries         27        9          153       139
  Avg length of candidates      14        76         20        31
Test
  # of queries                  9,796     6,898      1,000     50,000
  # of candidates per query     1         1000       100       10
  Avg length of queries         26        9          137       139
  Avg length of candidates      14        74         20        31
Table 3: Time to evaluate 100 queries with 1k candidates. The space used to cache the pre-computed embeddings for 1k candidates is shown.

Model             Time (ms)   Space (GB)
Dual-BERT         7.2         0.3
PolyEncoder-64    7.3         0.3
PolyEncoder-360   7.5         0.3
ColBERT           27.0        8.6
Deformer          488.7       52.7
Cross-BERT        949.4       -
MixEncoder-a      8.4         0.3
MixEncoder-b      10.6        0.3
MixEncoder-c      11.2        0.6
Results
Table 4: Performance of Dual-BERT, Cross-BERT and three variants of MixEncoder on four datasets.

                   MNLI       Ubuntu           DSTC7            MS MARCO           Speedup
Model              Accuracy   R1@10    MRR     R1@100   MRR     R1@1000  MRR(dev)  Times
Cross-BERT         83.7       83.1     89.4    66.0     75.2    23.3     36.0      1.0x
Dual-BERT          75.2       81.6     88.5    65.8     73.7    20.3     32.2      132x
PolyEncoder-64     76.8       82.3     88.9    67.5     75.2    20.3     32.3      130x
PolyEncoder-360    77.3       81.8     88.6    65.7     73.4    20.5     32.4      127x
ColBERT            ×          82.8     89.3    67.2     74.8    22.8     35.4      35.2x
Deformer           82.0       83.2     89.5    66.3     75.3    23.0     35.7      1.9x
MixEncoder-a       77.5       83.2     89.5    67.3     74.7    19.8     31.6      113x
MixEncoder-b       77.8       83.2     89.5    68.7     76.1    20.7     32.5      89.6x
MixEncoder-c       78.4       83.3     89.5    66.8     74.9    19.3     31.0      84.8x
Table 5: Ablation analysis for MixEncoder-a and MixEncoder-b (MRR).

           Ubuntu          DSTC7
Variants   -a      -b      -a      -b
Original   89.5    89.5    74.7    76.1
w/o H      88.9    89.1    74.0    73.9
w/o E      89.2    89.3    74.8    75.2
Eq. 6      89.1    89.2    72.3    74.4
https://github.com/UKPLab/sentencetransformers/tree/master
A Appendix

This is an appendix.
Qingqing Cao, Harsh Trivedi, Aruna Balasubramanian, and Niranjan Balasubramanian. 2020. DeFormer: Decomposing pre-trained transformers for faster question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4487-4497, Online. Association for Computational Linguistics.
Jiecao Chen, Liu Yang, Karthik Raman, Michael Bendersky, Jung-Jung Yeh, Yun Zhou, Marc Najork, Danyang Cai, and Ehsan Emadzadeh. 2020. DiPair: Fast and accurate distillation for trillion-scale text matching and pair modeling. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2925-2937, Online. Association for Computational Linguistics.
Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103-111, Doha, Qatar. Association for Computational Linguistics.
Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan. European Language Resources Association (ELRA).
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Luyu Gao, Zhuyun Dai, and Jamie Callan. 2020. Modularized transformer-based ranking framework. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4180-4190, Online. Association for Computational Linguistics.
Jiafeng Guo, Yinqiong Cai, Yixing Fan, Fei Sun, Ruqing Zhang, and Xueqi Cheng. 2022. Semantic models for the first-stage retrieval: A comprehensive review. ACM Transactions on Information Systems (TOIS), 40(4):1-42.
Zhe Hu, Zuohui Fu, Yu Yin, and Gerard de Melo. 2021. Context-aware interaction network for question matching. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3846-3853, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry P. Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In 22nd ACM International Conference on Information and Knowledge Management, CIKM'13, San Francisco, CA, USA, pages 2333-2338. ACM.
Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2020. Poly-encoders: Architectures and pre-training strategies for fast and accurate multi-sentence scoring. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia. OpenReview.net.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics.
Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2020, Virtual Event, China, pages 39-48. ACM.
Jian Li, Jieming Zhu, Qiwei Bi, Guohao Cai, Lifeng Shang, Zhenhua Dong, Xin Jiang, and Qun Liu. 2022. MINER: Multi-interest matching network for news recommendation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 343-352, Dublin, Ireland. Association for Computational Linguistics.
Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the SIGDIAL 2015 Conference, The 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Prague, Czech Republic, pages 285-294. The Association for Computer Linguistics.
Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, dense, and attentional representations for text retrieval. Transactions of the Association for Computational Linguistics, 9:329-345.
Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, and Ophir Frieder. 2020. Efficient document re-ranking for transformers by precomputing term representations. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2020, Virtual Event, China, pages 49-58. ACM.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Proceedings of the Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches 2016, co-located with NIPS 2016, Barcelona, Spain, volume 1773 of CEUR Workshop Proceedings. CEUR-WS.org.
Ping Nie, Yuyu Zhang, Xiubo Geng, Arun Ramamurthy, Le Song, and Daxin Jiang. 2020. DC-BERT: Decoupling question and document for efficient contextual encoding. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2020, Virtual Event, China, pages 1829-1832. ACM.
Rodrigo Frassetto Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. CoRR, abs/1901.04085.
Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5835-5847, Online. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.
Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, QiaoQiao She, Hua Wu, Haifeng Wang, and Ji-Rong Wen. 2021. RocketQAv2: A joint training method for dense passage retrieval and passage re-ranking. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2825-2835, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, Long Beach, CA, USA, pages 5998-6008.
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.
Koichiro Yoshino, Chiori Hori, Julien Perez, Luis Fernando D'Haro, Lazaros Polymenakos, R. Chulaka Gunasekara, Walter S. Lasecki, Jonathan K. Kummerfeld, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Xiang Gao, Huda AlAmri, Tim K. Marks, Devi Parikh, and Dhruv Batra. 2019. Dialog system technology challenge 7. CoRR, abs/1901.03461.
Do it once: An embarrassingly simple joint matching approach to response selection. Linhao Zhang, Dehong Ma, Sujian Li, Houfeng Wang, 10.18653/v1/2021.findings-acl.430Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Association for Computational LinguisticsOnlineLinhao Zhang, Dehong Ma, Sujian Li, and Houfeng Wang. 2021. Do it once: An embarrassingly simple joint matching approach to response selection. In Findings of the Association for Computational Lin- guistics: ACL-IJCNLP 2021, pages 4872-4877, On- line. Association for Computational Linguistics.
SPARTA: Efficient open-domain question answering via sparse transformer matching retrieval. Tiancheng Zhao, Xiaopeng Lu, Kyusong Lee, 10.18653/v1/2021.naacl-main.47Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesTiancheng Zhao, Xiaopeng Lu, and Kyusong Lee. 2021. SPARTA: Efficient open-domain question an- swering via sparse transformer matching retrieval. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 565-575, Online. Association for Computa- tional Linguistics.
| [
"https://github.com/UKPLab/sentencetransformers/tree/master"
] |
[
"Language with Vision: a Study on Grounded Word and Sentence Embeddings",
"Language with Vision: a Study on Grounded Word and Sentence Embeddings",
"Language with Vision: a Study on Grounded Word and Sentence Embeddings",
"Language with Vision: a Study on Grounded Word and Sentence Embeddings"
] | [
"Hassan Shahmohammadi \nUniversity of Tübingen\n\n",
"Maria Heitmeier \nUniversity of Tübingen\n\n",
"Elnaz Shafaei-Bajestan \nUniversity of Tübingen\n\n",
"Hendrik P A Lensch \nUniversity of Tübingen\n\n",
"Harald Baayen \nUniversity of Tübingen\n\n",
"Hassan Shahmohammadi \nUniversity of Tübingen\n\n",
"Maria Heitmeier \nUniversity of Tübingen\n\n",
"Elnaz Shafaei-Bajestan \nUniversity of Tübingen\n\n",
"Hendrik P A Lensch \nUniversity of Tübingen\n\n",
"Harald Baayen \nUniversity of Tübingen\n\n"
] | [
"University of Tübingen\n",
"University of Tübingen\n",
"University of Tübingen\n",
"University of Tübingen\n",
"University of Tübingen\n",
"University of Tübingen\n",
"University of Tübingen\n",
"University of Tübingen\n",
"University of Tübingen\n",
"University of Tübingen\n"
] | [] | Language grounding to vision is an active field of research aiming to enrich text-based representations of word meanings by leveraging perceptual knowledge from vision. Despite many attempts at language grounding, it is still unclear how to effectively inject visual knowledge into the word embeddings of a language in such a way that a proper balance of textual and visual knowledge is maintained. Some common concerns are the following. Is visual grounding beneficial for abstract words or is its contribution only limited to concrete words? What is the optimal way of bridging the gap between text and vision? How much improvements do we gain by visually grounding textual embeddings? The present study addresses these questions by proposing a simple yet very effective grounding approach for pre-trained word embeddings. Our model aligns textual embeddings with vision while largely preserving the distributional statistics that characterize word use in text corpora. By applying a learned alignment, we are able to generate visually grounded embeddings for unseen words, including abstract words. A series of evaluations on word similarity benchmarks shows that visual grounding is beneficial not only for concrete words, but also for abstract words. We also show that our method for visual grounding offers advantages for contextualized embeddings as for example generated by BERT (Devlin et al, 2018), but only when these are trained on corpora of relatively modest size. Code and grounded embeddings for English are available at https: //github.com/Hazel1994/Visually Grounded Word Embeddings 2. Abstract Words Language with Vision: a Study on Grounded Word and Sentence EmbeddingsFig. 1Our model constructs visually grounded embeddings (right) from textual embeddings (left) by applying a learned alignment (M ) trained on a subset of 10,000 words in image-caption pairs. It then generates zero-shot grounded embeddings at the inference phase for a total of 2,000,000 words, including not only concrete words but also abstract words. For each query word (in black), the grounded embeddings (right) retrieve more similar words compared to the purely textual embeddings (left) and alleviate the bias toward dissimilar words with high co-occurrence frequencies such as (many, people). Out of the top 10 nearest neighbors for each query word, only the differing neighbors between the textual embeddings and the grounded embeddings are shown in the right-hand panel. | 10.48550/arxiv.2206.08823 | [
"https://export.arxiv.org/pdf/2206.08823v2.pdf"
] | 249,847,904 | 2206.08823 | c2150d7e1077765e0249b1b1da3867b615b38eec |
Language with Vision: a Study on Grounded Word and Sentence Embeddings
14 Jul 2022
Hassan Shahmohammadi
University of Tübingen
Maria Heitmeier
University of Tübingen
Elnaz Shafaei-Bajestan
University of Tübingen
Hendrik P A Lensch
University of Tübingen
Harald Baayen
University of Tübingen
Language with Vision: a Study on Grounded Word and Sentence Embeddings
14 Jul 2022. Springer Nature 2021 LaTeX template. Keywords: Visual Grounding, Multi-modal Word Embeddings, Grounded Cognition, Grounding Abstract Words
Language grounding to vision is an active field of research aiming to enrich text-based representations of word meanings by leveraging perceptual knowledge from vision. Despite many attempts at language grounding, it is still unclear how to effectively inject visual knowledge into the word embeddings of a language in such a way that a proper balance of textual and visual knowledge is maintained. Some common concerns are the following. Is visual grounding beneficial for abstract words or is its contribution only limited to concrete words? What is the optimal way of bridging the gap between text and vision? How much improvement do we gain by visually grounding textual embeddings? The present study addresses these questions by proposing a simple yet very effective grounding approach for pre-trained word embeddings. Our model aligns textual embeddings with vision while largely preserving the distributional statistics that characterize word use in text corpora. By applying a learned alignment, we are able to generate visually grounded embeddings for unseen words, including abstract words. A series of evaluations on word similarity benchmarks shows that visual grounding is beneficial not only for concrete words, but also for abstract words. We also show that our method for visual grounding offers advantages for contextualized embeddings as for example generated by BERT (Devlin et al, 2018), but only when these are trained on corpora of relatively modest size. Code and grounded embeddings for English are available at https://github.com/Hazel1994/Visually Grounded Word Embeddings 2.

Fig. 1 Our model constructs visually grounded embeddings (right) from textual embeddings (left) by applying a learned alignment (M) trained on a subset of 10,000 words in image-caption pairs. It then generates zero-shot grounded embeddings at the inference phase for a total of 2,000,000 words, including not only concrete words but also abstract words. For each query word (in black), the grounded embeddings (right) retrieve more similar words compared to the purely textual embeddings (left) and alleviate the bias toward dissimilar words with high co-occurrence frequencies such as (many, people). Out of the top 10 nearest neighbors for each query word, only the differing neighbors between the textual embeddings and the grounded embeddings are shown in the right-hand panel.
Introduction
Where do symbolic representations of language get their meaning from? It has been argued both from a theoretical and an empirical perspective that knowledge is grounded in perceptual experience (Barsalou, 2008;Lakoff, 1987;Langacker, 1999;Zwaan and Madden, 2005). Evidence for this embodied view of knowledge comes from a range of scientific domains such as neuroimaging (e.g. Simmons et al, 2005;Martin, 2007) and behavioural studies (e.g. Goldstone, 1995;Barsalou, 2001, 2004), showing that knowledge is grounded in sensory, but also interoceptive perception and motor action (overview in Barsalou, 2008). However, this view is not uncontested. For example, Louwerse and Connell (2011) argue that linguistic information suffices for more shallow processing of meaning and that perceptual, embodied information is only accessed when deeper knowledge of a word is required. This debate has been stimulated further by the success of meaning representations which are based on linguistic information alone. They build on the notion of Harris (1954) that similar words occur in similar contexts and represent each word as numerical vectors, with similarities between these vectors reflecting similarities in words' meanings. By now, many different methods have been devised to generate such vectors, beginning with Hyperspace Analogue of Language (HAL; Lund and Burgess, 1996) and Latent Semantic Analysis (LSA; Landauer and Dumais, 1997), and later, mainly in the realm of Natural Language Processing (NLP) and machine learning, Word2Vec (Mikolov et al, 2013), Fasttext (Bojanowski et al, 2017) or GloVe (Pennington et al, 2014). Today, word embeddings are employed successfully in many different areas and tasks within NLP, such as POS-tagging, named-entity recognition, and sentiment analysis (Wang et al, 2019).
As an easily obtained representation of semantics, word embeddings are also used in many areas of cognitive science, such as AI research, psychology or psycholinguistics, with encouraging results (see Günther et al, 2019). From a cognitive perspective, word embeddings have been evaluated in two ways. A relatively direct method is to compare them to metrics obtained from brain imaging such as fMRI or EEG. Bulat et al (2017); Hollenstein et al (2019) showed that a variety of word embeddings (e.g. GloVe, Word2Vec, Fasttext) correlate relatively well with such metrics. A second, more indirect, approach uses behavioural data such as reaction times or ratings as evaluation criteria. Mandera et al (2017) showed that word embeddings can be used to predict semantic priming as well as word associations, similarity/relatedness ratings and even perform well in a multiple-choice task. Further evidence in favour of the cognitive plausibility of word embeddings has been provided by Westbury (2014); Westbury and Hollis (2019) who predicted familiarity and humour ratings respectively, Abdou et al (2021) who showed that even color relations are accurately represented by purely textual embeddings, as well as Louwerse and Zwaan (2009) ;Avery et al (2021) who demonstrated that geographical locations of cities are reflected in purely textual embeddings. However, the cognitive plausibility of mechanisms generating word embeddings such as Word2Vec has not gone unchallenged (Mannering and Jones, 2021).
While the success of textual embeddings has nevertheless led some researchers to believe that meaning can be fully, or at least to a large extent, be derived from language alone (Landauer, 1999), the wide range of empirical evidence in favour of a grounded view of knowledge representation and cognition has sparked the search for representations that are informed not only by text, but also by vision and other modalities.
Therefore, a number of previous studies have tried to improve word embeddings by using available data similar to text corpora. Some studies have tried to extract meaning representations exclusively from visual information (usually images). The resulting models have been found to be very good models of human perceptual behaviour (e.g. Zhang et al, 2018), but success at predicting other behavioural data was more mixed, with some reporting positive (Lüddecke et al, 2019;Bulat et al, 2017) and others negative results (Peterson et al, 2017;De Deyne et al, 2021). The more promising approach has been to include vision information with textual embeddings. This is especially promising because textual and visual representations seem to carry different kinds of information (Petilli et al, 2021).
Multimodal embeddings have been successful in a range of areas. In the realm of NLP they have been shown to correlate better than purely textual embeddings with human judgments on tasks such as text similarity/relatedness, concept categorization (Bruni et al, 2014; Shahmohammadi et al, 2021), and many downstream classification tasks (Bordes et al, 2019). Recently, they have also been subject to first evaluations in cognitive science. Bulat et al (2017) found that they are better at predicting brain activity than purely textual embeddings. Moreover, they are useful in modelling the learning of novel words' meanings in both children and adults (Lazaridou et al, 2016, 2017) and generally perform well at a range of similarity benchmarks (De Deyne et al, 2021).
Several approaches to obtaining multimodal embeddings are available. Some approaches apply feature-level fusion, where they combine image features with textual word embeddings (after obtaining both separately) using methods such as simple concatenation, SVD, or GRU gating mechanisms (Bruni et al, 2014; Kiela and Bottou, 2014; Kiros et al, 2018). Others, on the other hand, learn multi-modal word representations in a joint feature space defined by a specific criterion (known as loss function) between modalities, for example by using auto-encoders (Silberer and Lapata, 2014; Hasegawa et al, 2017) or LSTM (Hochreiter and Schmidhuber, 1997) networks (Kiela et al, 2018; Chrupała et al, 2015). Recently, new approaches based on modality alignment have emerged. Here, vision and language are treated separately (as opposed to having both in a shared space) but the textual embeddings are aligned with image features (Shahmohammadi et al, 2021; Bordes et al, 2019). Our grounding approach falls into this latter category (see Figure 1). Although similar to Shahmohammadi et al (2021), the present approach is not only simpler but also achieves better results.
While a plethora of studies have proposed various grounding mechanisms, evaluated on both NLP and cognitive tasks, what exactly the role of visual information in multimodal meaning representations is and what it should be from a cognitive perspective is still unclear. Our contribution to this debate is two-fold: First, we propose a new technique for grounding textual word embeddings in vision in a very simple manner. It is able to generalise to new words without a visual representation, which allows it to improve performance not only on concrete but also abstract words. We show that our method improves performance on a range of tasks, both compared to ungrounded embeddings and also compared to other grounding methods. Our pretrained sets of grounded embeddings are made available to the community. 1 Secondly, with our grounded embeddings as point of departure, we explore various questions which arise from previous work on grounding and generating distributed meaning representations in general, and are crucial when aiming to model cognitively plausible meaning representations:
1. On the one hand, many studies have shown that combining visual information and textual information improves the quality of word embeddings (e.g. Bruni et al, 2014; Shahmohammadi et al, 2021; Lazaridou et al, 2016). On the other hand, purely textual embeddings are very successful even on tasks related to vision and spatial relations (Louwerse and Zwaan, 2009; Abdou et al, 2021), and purely visual embeddings do not perform well at predicting human similarity judgments (e.g. De Deyne et al, 2021). A fine balance therefore has to be struck between too much and too little visual information in grounding. This raises the following question: how much vision information should be allowed to fuse into textual embeddings?

2. Traditionally, embeddings are grounded on a single word basis (e.g. Günther et al, 2020; Kiela and Bottou, 2014; Bruni et al, 2014). However, visual scenes are complex, and are usually best described not by single words, but rather by entire sentences. How much does grounding performance depend on whether the entire context of an image description is taken into account?

3. Newly proposed large-scale contextualized language models rely on enormous amounts of data (e.g., BERT: Devlin et al, 2018). While this leads to good performance, it is cognitively implausible, as humans encounter only a much smaller number of words over their lifetimes (Brysbaert et al, 2016). Our third question therefore relates to whether visual grounding is equally helpful when large amounts, or only small amounts, of training data are available: how much does the amount of training data influence the improvement of visual grounding on downstream tasks such as sentiment analysis? We will demonstrate that on corpora sizes closer to human-scale training data, visual grounding improves the quality of embeddings.
To this end, our paper is structured as follows. Sections 2 and 3 introduce our method, which is evaluated in Section 4. In Section 5, we explore the question of how much vision should be allowed to fuse into textual embeddings. Section 6 discusses whether grounding should take place at the level of words or sentences, and Section 7 analyses how much grounding improves task performance depending on the amount of available training data.
Visually Grounded Word Embeddings
In this section, we explain our visual grounding model and how it computes the visually grounded version of all the textual word vectors. For (S_j, I_j) ∈ D, let S_j = [w_1, w_2, ..., w_n] be a textual caption with n words describing its corresponding image with the image vector I_j (extracted from a pre-trained CNN model) in the dataset D. Let t_i ∈ R^d be a textual embedding of the word w_i, which has been obtained by a pre-trained word embedding model T_e: w_i → t_i (e.g., Fasttext). The goal is to learn the mapping M to visually ground any textual word vector t_i in its corresponding image vector I_j and obtain the grounded embedding g_i ∈ R^c of the word w_i. The mapping M ideally should: a) preserve the abstract knowledge from co-occurrence statistics captured by textual embeddings, and b) align the textual embeddings with their real-world relations available in images. This way, the grounded embeddings will benefit both concrete and abstract words (Shahmohammadi et al, 2021).

Fig. 2 Our zero-shot grounding model encodes each caption word by word, using an LSTM given the task to predict the corresponding image vector. A mapping M is set up that takes textual vectors and maps them into the grounded space. This mapping is trained on a limited number of words (those that occur in the captions) but is then applied to all the words, after the training is completed, to generate "zero-shot" grounded embeddings. The snowflake icon indicates the frozen weights during training.

While it may seem intuitive to learn both modalities in a shared feature space, we argue that such approaches, unfortunately, are more likely to cause the grounded embeddings to lose the abstract knowledge from textual co-occurrences and therefore suffer from a bias towards concrete words, as reported by Park and Myaeng (2017).
In order for abstract knowledge to be preserved, individual words should still be aware of the context (other words in the sentence) during the grounding process. The grounding process should also respect the textual vector space, as any random change to textual embeddings will distort the semantic information obtained by textual statistics (Shahmohammadi et al, 2021). Figure 2 lays out the architecture of our proposed grounding model. The grounded version of any word w_i is obtained by mapping its textual embedding t_i into the visually grounded space using the linear mapping M as g_i = t_i · M. In the grounded space, word vectors are aligned with the images by using a one-layer LSTM network (Hochreiter and Schmidhuber, 1997) that encodes the whole sentence S_j as a single vector h_n:

h_n = LSTM(G, c_0, h_0 | θ),    (1)

where G denotes the input (all the grounded word vectors, i.e., the output of M) and θ the learning parameters. The LSTM network includes a cell state c_t and a hidden state h_t, where t denotes the current time-step. The network takes one word at each time-step (see Figure 2) and each time, for each successive word, it updates its memory by removing and adding information to the cell state. It then generates an output h_t based on the current input g_t and c_t.

Both h_t and c_t are passed to the next time-step. We extract the output of the last time-step h_n as a vector representing the whole sentence. The model is trained to match h_n to the image vector I_j for each particular training sample (S_j, I_j) ∈ D. We optimize the parameters of the LSTM and the mapping M (denoted as Θ) based on the following mean-squared-error (MSE) loss:
Θ = argmin_Θ (1/N) Σ_{t=1}^{N} (y_t − ŷ_t)²,    (2)

where y and ŷ denote the ground truth image vector (I_j) and the predicted image vector (h_n), respectively. By applying the LSTM network, the model takes into account the context in which each word occurs. Therefore, the whole sentence is mapped to the image vector. Since the model tries to predict an image vector, it will change the textual vector space such that the image vector is estimated as accurately as possible. Nonetheless, we restrict the influence of the images on the word vectors by keeping the mapping M linear. Naturally, the grounded word vectors (output of M) will still respect the textual vector space, but they will be aligned to the image representations.
After training the model on (caption, image) pairs, the mapping M can be used to visually ground any word vectors including out-of-vocabulary words. In this way, a visually grounded version of the textual embeddings is created in a zero-shot manner despite being exposed to only a limited number of words.
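To make this pipeline more tangible, the following is a minimal PyTorch sketch of the alignment M, the LSTM caption encoder, the MSE objective, and the zero-shot application of M to the full vocabulary. The dimensions follow the implementation details given in the next section, but the class and variable names, the dummy tensors, and the choice of PyTorch are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class ZeroShotGrounding(nn.Module):
    """Linear alignment M followed by an LSTM whose last hidden state predicts the image vector."""
    def __init__(self, emb_dim=300, grounded_dim=1024, img_dim=2048):
        super().__init__()
        self.M = nn.Linear(emb_dim, grounded_dim)           # the only layer reused at inference
        self.lstm = nn.LSTM(grounded_dim, img_dim, batch_first=True)

    def forward(self, caption_vectors):
        # caption_vectors: (batch, seq_len, emb_dim), frozen GloVe/Fasttext vectors of one caption
        grounded = self.M(caption_vectors)                  # (batch, seq_len, grounded_dim)
        _, (h_n, _) = self.lstm(grounded)                   # h_n: (1, batch, img_dim)
        return h_n.squeeze(0)                               # predicted image vector

model = ZeroShotGrounding()
loss_fn = nn.MSELoss()
optimizer = torch.optim.NAdam(model.parameters(), lr=1e-3)

# one illustrative training step on a dummy batch (real targets come from Inception-V3 features)
captions = torch.randn(256, 12, 300)
images = torch.randn(256, 2048)
optimizer.zero_grad()
loss_fn(model(captions), images).backward()
optimizer.step()

# after training, M alone grounds every pre-trained word vector in zero-shot fashion
with torch.no_grad():
    textual_matrix = torch.randn(10_000, 300)               # placeholder for the full vocabulary
    grounded_matrix = model.M(textual_matrix)               # (10_000, 1024)
```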
Implementation Details
We used the Microsoft COCO 2017 dataset (Lin et al, 2014) in our experiments. Each sample of this dataset includes a single image along with 5 different human-generated captions. The whole dataset was divided into 118k train and 5k validation samples. We set the batch size to 256, with each batch containing 256 image vectors (of dimension 2048) along with one of their corresponding captions. Image vectors were extracted from the penultimate layer of pre-trained Inception-V3 (Szegedy et al, 2016), based on ImageNet (Deng et al, 2009). We set the dimension of the grounded embeddings (output of M) to 1024, following Shahmohammadi et al (2021). A one-layer LSTM was applied with 2048 units. We removed the punctuation marks from the captions and converted all words to lower case. Only the top 10k most frequent words in the captions were used and the rest were ignored. We trained the model for 20 epochs with early stopping (5 epochs tolerance), using the NAdam optimizer (Dozat, 2016) with a learning rate of 0.001.
Both the pre-trained textual embedding T_e and the Inception-V3 model are frozen during training. Two popular pre-trained textual word embeddings, GloVe (crawl-300d-2.2M-cased) and Fasttext (crawl-300d-2M-SubW), were used to initialize the embedding T_e.
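The 2048-dimensional image vectors can be precomputed offline. The paper does not publish its extraction script, so the snippet below is only one plausible way to obtain penultimate-layer Inception-V3 features with torchvision; the preprocessing constants, weight identifier, and example file path are standard-but-assumed choices for illustration.

```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

# Inception-V3 pre-trained on ImageNet; replacing the classifier head with an identity
# makes the forward pass return the 2048-d penultimate-layer activations.
cnn = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
cnn.fc = nn.Identity()
cnn.eval()

preprocess = transforms.Compose([
    transforms.Resize(342),
    transforms.CenterCrop(299),                     # Inception-V3 expects 299x299 inputs
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def image_vector(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return cnn(img).squeeze(0)                      # shape: (2048,)

# e.g. vec = image_vector("coco/train2017/000000000009.jpg")  # hypothetical file path
```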
Evaluation
General Evaluation
Despite many existing evaluation benchmarks for word embeddings, the question of what is the proper evaluation method is still open (Wang et al, 2019). For multi-modal embeddings, however, the common evaluation methods include lexical semantic similarity/relatedness benchmarks (Park and Myaeng, 2017; Kiros et al, 2018; Kiela et al, 2018; Collell Talleda et al, 2017). Here, the task is to estimate the similarity/relatedness score of a given pair of words, with the Spearman correlation as evaluation metric. It is worth noting that some datasets do not distinguish between similarity and relatedness. For example, the pair (clothes, closet) comes with the score of 1.96 (out of 10) in SimLex999, but exactly the same pair receives a score of 8.00 in WordSim353, which does not distinguish between similarity and relatedness. We evaluate our visually grounded embeddings and compare the results with textual embeddings and related previous works. The following datasets have been used in our evaluation: MEN (Bruni et al, 2014), SimLex999 (Hill et al, 2015), Rare-Words (Luong et al, 2013), MTurk771 (Halawi et al, 2012), WordSim353 (Finkelstein et al, 2001), and SimVerb3500 (Gerz et al, 2016).

Table 1 shows the evaluation results on lexical semantic benchmarks. Our zero-shot grounded embeddings are shown as ZSG-G and ZSG-F, indicating the grounded versions of GloVe and Fasttext respectively. As shown in the first section of the table, our ZSG-G outperforms the textual GloVe on all of the benchmarks significantly. In the case of Fasttext, on the other hand, the improvement is less significant, probably because Fasttext takes into account sub-word information. Hence, it captures word similarity/relatedness better compared to GloVe. In the lower part of the table, we compare the performance of our best model (ZSG-G) with related visually grounded embedding models. For a fair comparison, we limit our list to those who adopted pre-trained word embeddings. Shahmohammadi et al (2021) (shown as VGE-G in the table) proposed a similar grounding approach to ours, where they employ multi-task training on top of the textual embeddings after mapping them into the grounded space. They use 3 different single-layer GRUs (Cho et al, 2014) and 3 different tasks (generating the caption in two different directions and learning to discriminate between matching and non-matching image-caption pairs). While inspired by their method, our approach is simpler, requires less computational power, and performs slightly better on the same set of benchmarks. Kiela et al (2018) also learned visually grounded embeddings on top of pre-trained GloVe by using the MSCOCO data set. Even though they propose a number of tasks for training (Cap2Img: predicting the image vector from its caption; Cap2Cap: generating an alternative caption of the same image; Cap2Both: training by Cap2Cap and Cap2Img simultaneously), our model clearly outperforms them. Park and Myaeng (2017) dealt with this problem by a polymodal approach where they propose six different types of embeddings (linear and syntactic contexts, cognition, sentiment, emotion, and perception) for each word. Despite the fact that they used two pre-trained embeddings (Word2Vec, GloVe) and other resources, our simple model is superior on MEN and WSim353, albeit worse on SimLex999. Their performance on SimLex999 can be attributed to the multi-modality training, as employing only their visually grounded embeddings (P&M VG) performs much worse.
This clearly shows that their visual embeddings do not benefit abstract words (cf. Park and Myaeng, 2017). Our approach, while being quite simple and straightforward, aggregates visual information effectively without distorting the textual embeddings. In summary, the proposed embeddings show improved performance on all the mentioned benchmarks.
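For reference, the scoring protocol used throughout this section, Spearman's ρ between model cosine similarities and human ratings, can be sketched as follows; the toy embedding dictionary and word pairs are placeholders rather than actual benchmark data.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate_benchmark(pairs, embeddings):
    """pairs: iterable of (word1, word2, human_score); embeddings: dict word -> np.ndarray."""
    model_scores, human_scores = [], []
    for w1, w2, gold in pairs:
        if w1 in embeddings and w2 in embeddings:      # skip out-of-vocabulary pairs
            model_scores.append(cosine(embeddings[w1], embeddings[w2]))
            human_scores.append(gold)
    rho, _ = spearmanr(model_scores, human_scores)
    return 100 * rho                                   # reported as Spearman's rho x 100

# toy usage with random vectors standing in for grounded embeddings
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=1024) for w in ["clothes", "closet", "smart", "intelligent"]}
pairs = [("clothes", "closet", 1.96), ("smart", "intelligent", 9.2), ("clothes", "smart", 0.5)]
print(evaluate_benchmark(pairs, emb))
```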
Fine-Grained Evaluation on Concrete and Abstract Words
In linguistics, concrete words 2 refer to physically real and perceptible entities such as tree, ball, or Chris, whereas abstract words have references that are not readily perceptible to the senses, and are more complex and variable in meaning, including mental states (e.g., happiness), events (e.g., encounter), conditions (e.g., totalitarianism), relations (e.g., brotherhood) and so forth (VandenBos, 2015; Borghi and Binkofski, 2014). Concreteness and abstractness are not binary properties of words (Wiemer-Hastings et al, 2001). Words become increasingly abstract as they are more separated from physical entities and more linked to mental states (Barsalou, 2003). Word concreteness indicates the degree to which a word denotes a perceptible entity and is measured on a numerical scale by subject ratings (Brysbaert et al, 2014). For example, the word pancake is ranked high on the scale as it is associated with many sensory properties such as smell, taste, shape, color, etc. Extensive evidence from behavioral experiments suggests that there is an advantage in cognitive processing of concrete over abstract words, often referred to as the "concreteness effect". It has been shown that concrete words, compared to abstract words, are processed faster in isolation (Schwanenflugel and Shoben, 1983) and non-supportive contexts (Schwanenflugel and Stowe, 1989), are remembered better in paired associative learning (Paivio, 1965) and free recall tasks (Schwanenflugel et al, 1992), and are learned faster (Mestres-Missé et al, 2014). Evidence has been put forward for this distinction in the brain. Case reports of patients with brain damage demonstrate differential impairments with regard to abstract and concrete concepts (Breedin et al, 1994; Tyler et al, 1995; Warrington, 1975). Neuroimaging studies provide evidence for overlapping but distinct brain areas engaged in the processing of abstract and concrete concepts (see Montefinese, 2019, for a review).
To investigate the influence of grounding on abstract and concrete words, we leverage the SimLex999 dataset. It divides its words into different categories including adjectives, nouns, verbs, concreteness quartiles (from 1 to 4 increasing the degree of concreteness), and 'hard' sections. The hard section contains 333 pairs whose relatedness is hard to distinguish from similarity. Table 2 shows our fine-grained evaluation on SimLex999. Our best model (ZSG-G) not only outperforms others on the whole dataset but also generalizes across different word types. For instance, it not only boosts the performance for highly concrete (Conc-q4) words by 19.2 percentage points but also for the highly abstract words (Conc-q1) by 11.3 percentage points in comparison with the textual vectors of GloVe. Whereas PictureBook (Kiros et al, 2018), for example, highly benefits the more concrete words but adversely affects the more abstract category even when combined with GloVe embeddings. In comparison with VGE-G by Shahmohammadi et al (2021), our model again achieves better results while being much simpler and less computationally expensive.
Concreteness Separation: Our model clearly improves the quality of the embeddings for both concrete and abstract words. Intuitively, one might think that this is the case because the grounding process creates a better separation between concrete and abstract words. We carried out the following experiments to see if that idea holds true. We trained and evaluated two regression models using 10-fold cross validation on the concreteness rating dataset compiled by Brysbaert et al (2014). The dataset contains 37k words and 3k two-word phrases rated by over 4,000 subjects using the Amazon Mechanical Turk (MTurk) crowdsourcing platform. We denote this dataset as MTurk40k. We applied a simple linear regression and a multi-layer-perceptron (MLP(1024, 512, 100)) equipped with batch-normalization (Ioffe and Szegedy, 2015) and dropout (Srivastava et al, 2014). Reported in Table 3, the difference between GloVe and our grounded embeddings (ZSG-G) is very subtle. This shows that visual grounding does not necessarily cause stronger discrimination between concrete and abstract words.
Model                     GloVe 10-fold score    ZSG-G 10-fold score
Linear regression         84.90                  84.70
Multi-layer-perceptron    88.86                  88.24

Table 3 Mean Spearman's correlation coefficient ×100 on MTurk40k using 10-fold CV. Visually grounded embeddings (ZSG-G) do not seem to separate concrete and abstract words better in comparison to textual embeddings (GloVe).
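The probe behind Table 3 is a standard cross-validated regression from word vectors to concreteness ratings; a minimal sketch with scikit-learn follows, where the feature matrix and ratings are random placeholders for GloVe/ZSG-G vectors and the MTurk40k data.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def concreteness_cv(X, y, n_splits=10, seed=0):
    """X: (n_words, dim) word embeddings; y: (n_words,) human concreteness ratings."""
    scores = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        reg = LinearRegression().fit(X[train_idx], y[train_idx])
        rho, _ = spearmanr(reg.predict(X[test_idx]), y[test_idx])
        scores.append(100 * rho)
    return float(np.mean(scores))           # mean Spearman's rho x 100 over the folds

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 300))             # placeholder embeddings
y = rng.uniform(1, 5, size=500)             # placeholder concreteness ratings
print(concreteness_cv(X, y))
```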
Nearest Neighbors: For further exploration, we juxtapose a sample of differing nearest neighbors of our best embeddings (ZSG-G) with its purely textual version (GloVe). Figure 1 shows the results for two random samples of highly abstract and highly concrete words in SimLex999. While GloVe retrieves related words (shown on the left), our grounding shifts the focus toward similarity and retrieves highly similar words for both concrete and abstract queries (shown on the right). We can observe that GloVe suffers from a bias toward dissimilar words that frequently co-occur, such as (many, people) and (sorta, weird). Our embeddings, on the other hand, alleviate this bias by aligning the words to their real-world relations. Applying this alignment to all the words seems to shift the abstract words to a better place. Moreover, different typos of the same word such as 'peope' and 'poeple' (for people) occur with different frequency in different contexts. Therefore, they are gradually pulled apart. Our model, however, puts them back into the same vicinity of space by applying the learnt alignment.

Behavioural Evaluation: The performance of our embeddings on abstract and concrete words in a behavioural experiment has also been investigated in a previous study (Shahmohammadi et al, 2022). That study took as its point of departure the experiment and text-to-vision model reported in Günther et al (2020). Günther et al (2020) modelled visual grounding as a simple linear mapping from textual embeddings to visual embeddings, similar to our Word-Level model (see Section 6). They trained their model on concrete words with associated images in ImageNet (Deng et al, 2009). They tested model predictions for both concrete and abstract words in a behavioural experiment, where participants were presented with a target word, and two images: one predicted by their model (i.e. the image whose representation was closest to the predicted image representation) and a random control image. The participants were asked to choose which of the two presented images better described the target word. They found that the model's predicted image was chosen above chance both for abstract and concrete words. In Shahmohammadi et al (2022), our embeddings were used to predict participants' behaviour in this task, and it was found that the embeddings predicted human behaviour well above chance level, for both abstract and concrete words. This was taken as evidence that our visually grounded vectors may also be helpful for understanding human cognition at the interface of language and vision.
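The nearest-neighbour comparisons described above reduce to a cosine ranking over the grounded vocabulary; a minimal sketch, in which the tiny vocabulary and random vectors are placeholders:

```python
import numpy as np

def nearest_neighbors(query, vocab, vectors, k=10):
    """vocab: list of words; vectors: (|V|, dim) row-normalised embedding matrix."""
    sims = vectors @ vectors[vocab.index(query)]        # cosine similarity for unit-norm rows
    ranked = [vocab[i] for i in np.argsort(-sims) if vocab[i] != query]
    return ranked[:k]

vocab = ["people", "persons", "crowd", "many", "weird", "strange", "sorta"]
vectors = np.random.default_rng(0).normal(size=(len(vocab), 1024))
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)
print(nearest_neighbors("people", vocab, vectors, k=3))
```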
Alignment vs Fusion
In this section, we investigate different scenarios in which visual information could improve the textual word vectors. In other words, we are interested to see whether increasing the influence of images on word vectors results in better grounded word vectors. For this aim, we train our model (ZSG-G) with different activation functions for the mapping M. Using a non-linear activation function such as ReLU and Leaky-ReLU (Xu et al, 2015) and adding more non-linear layers will allow the model to drastically deform the textual vector space in a complex way, resulting in different grounded word vectors. Table 4 shows the results with different numbers of layers and non-linear activation functions. We measure similarity and relatedness by evaluating on SimLex999 and MTurk771, as they are compiled for similarity and relatedness, respectively. Leveraging the different categories in SimLex999, we also evaluate on highly abstract and highly concrete words. Furthermore, for each case, we evaluate the obtained word vectors on all the available datasets mentioned in Table 1. As shown in Table 4, we observe a consistent pattern of losing abstractness and gaining concreteness when non-linear transformations are used. This is to be expected since word vectors are morphing into image vectors and hence gain concrete properties. Employing two consecutive Leaky-ReLU layers is a prominent example of this case. Results on similarity and relatedness show that visual grounding shifts the focus toward similarity (same as Figure 1). However, both similarity and relatedness are improved by using a linear transformation, which helps benefit from vision while keeping the textual information preserved. Overall, the best results on all the datasets are achieved by the linear mapping. This shows that while textual embeddings highly benefit from visual information, allowing vision to dominate language is not the best option. Language seems to benefit from vision the most when it is aligned/informed with vision as opposed to being fused.
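The comparison in Table 4 only swaps out the mapping M while the rest of the model stays fixed; a schematic of how such variants could be constructed is given below. The layer sizes and the Leaky-ReLU slope are illustrative assumptions, not the exact configurations behind the table.

```python
import torch.nn as nn

def make_mapping(kind="linear", in_dim=300, out_dim=1024):
    """Mapping M from the textual space into the grounded space."""
    if kind == "linear":                      # the setting used for ZSG-G / ZSG-F
        return nn.Linear(in_dim, out_dim)
    if kind == "leaky_relu":                  # one non-linear layer
        return nn.Sequential(nn.Linear(in_dim, out_dim), nn.LeakyReLU(0.1))
    if kind == "2x_leaky_relu":               # two consecutive non-linear layers
        return nn.Sequential(
            nn.Linear(in_dim, out_dim), nn.LeakyReLU(0.1),
            nn.Linear(out_dim, out_dim), nn.LeakyReLU(0.1),
        )
    raise ValueError(kind)

# The LSTM caption encoder and the MSE loss stay unchanged; only M is replaced,
# e.g. M = make_mapping("2x_leaky_relu"), and the model is re-trained before evaluation.
```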
Bridging the Gap Between Language and Vision
While our model is relatively simple compared to many others (Shahmohammadi et al, 2021; Kiros et al, 2018; Kiela et al, 2018), there are other approaches that connect language with vision with even simpler methods (Collell Talleda et al, 2017; Günther et al, 2020; Hasegawa et al, 2017). This raises the question of how to properly fill the gap between language and vision. We, therefore, propose different ways to fill this gap and evaluate their efficacy. We constructed the following scenarios. In each scenario, as before, after training we use the trained mapping M to map all the textual embeddings into the grounded space to obtain grounded embeddings.
Word-Level (WL): For each training (caption, image vector) pair (S_j, I_j) ∈ D, we remove the stop words in caption S_j and train a linear mapping M from each word to its corresponding image vector I_j. For instance, the caption 'there is a dog on the floor' would be converted into 'dog floor'. Then, the textual embeddings of both dog and floor are mapped to their corresponding image one by one using the mapping M. Similar to Günther et al (2020), we employed PCA (Pearson, 1901) to match the dimensions of the image vectors (2048) to the output of the mapping M (1024).
Bag-of-Words (BoW): For each training (caption, image vector) pair (S_j, I_j) ∈ D, after mapping all the words in S_j into the grounded space using a linear mapping, here denoted again as M, we average them to obtain the BoW sentence representation. The BoW vector is then mapped into the image vector I_j using a hidden layer with Tanh activation function. This model is more advanced (in comparison to 'Word-Level') given that it uses all the words in the captions and applies a non-linear transformation.
GRU: This is identical to our proposed model (see Section 2), however, a single-layer GRU (Cho et al, 2014) is used instead of the LSTM. A GRU is less complex compared to an LSTM and contains only a hidden-state as opposed to the LSTM that is equipped with both a cell-state and a hidden-state.
LSTM: This refers to the model proposed in Section 2.

Transformer-Encoder (TE): Attention-based sequence encoders introduced in Vaswani et al (2017) are currently used in the state-of-the-art contextualized language models (Lan et al, 2019; Devlin et al, 2018). In short, these encoders obtain contextualized embeddings based on learnable associations of words. For instance, the word 'clip' plays different roles in 'I clip my nails' and 'I saw a video clip'. The contextualized representation of 'clip' is therefore computed based on its associations with its context words. For our experiments, we pass the textual embeddings through the mapping M as before. Then we train a different number of encoders on top of M; the output of the encoders is then projected to the image vector through a linear layer. We constructed the transformer encoders with 1024 hidden size and 16 attention heads, and used NAdam with a learning rate of 0.0001 for training. Please refer to Vaswani et al (2017) for more details on transformers.
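To make the simpler scenarios concrete, the sketch below renders the Word-Level and BoW sentence encoders in PyTorch; the shapes, hidden-layer size, and class names are assumptions made for illustration. The LSTM and TE variants replace these modules with the encoder from Section 2 or with stacked attention layers, respectively.

```python
import torch.nn as nn

class WordLevelGrounder(nn.Module):
    """Map each (non-stop-word) vector directly onto its PCA-reduced image vector."""
    def __init__(self, emb_dim=300, img_dim_pca=1024):
        super().__init__()
        self.M = nn.Linear(emb_dim, img_dim_pca)

    def forward(self, word_vectors):               # (batch, emb_dim), one word at a time
        return self.M(word_vectors)

class BoWGrounder(nn.Module):
    """Average the mapped word vectors, then a Tanh hidden layer predicts the image vector."""
    def __init__(self, emb_dim=300, grounded_dim=1024, hidden_dim=1024, img_dim=2048):
        super().__init__()
        self.M = nn.Linear(emb_dim, grounded_dim)
        self.head = nn.Sequential(
            nn.Linear(grounded_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, img_dim),
        )

    def forward(self, caption_vectors):            # (batch, seq_len, emb_dim)
        bow = self.M(caption_vectors).mean(dim=1)  # bag-of-words average in the grounded space
        return self.head(bow)
```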
The results of each scenario are reported in Table 5. The Word-Level mapping does not preserve enough of the textual information and produces embeddings that are distorted compared to the text-only embeddings. As a consequence, these visually grounded embeddings underperform on all tasks. We note here that a single image is very rich in information and cannot simply be described by a single word. Moreover, there might not be a straightforward linear relationship between language and vision. The BoW model, although enhancing the results on some datasets, has a performance very similar to (text-only) GloVe. Interestingly, the BoW model achieves substantial improvement on the SimLex999 dataset, which addresses the similarity between words rather than their relatedness. On the other hand, its performance is worse for the MTurk771 dataset, which focuses on relatedness. The reason for these ups and downs in the performance of the BoW model is likely that these representations do not take the order of words into account and hence lose the temporal statistics of how related words co-occur in their context (see Jones and Mewhort, 2007, for embeddings compositely representing word meaning and word order). Employing recurrent neural networks (GRU and LSTM) yields much better results. The LSTM is superior over the GRU as it captures long dependencies between words much better and more effectively encodes the whole sentence. Training with a single transformer encoder fails to produce better quality embeddings, perhaps unsurprisingly as these encoders are usually stacked on top of each other to achieve the desired outcome (Vaswani et al, 2017). We therefore also tested models with two and three layers of TE. While using a two-layer TE achieves a better score, we saw no improvement using more layers. We also employed multiple layers of LSTM and found that a single layer LSTM achieves the best results. Even though adding more layers in general creates a more powerful model, we argue that as the network gets deeper, less visual knowledge is propagated back to the mapping M. Put another way, the visual knowledge is distributed across different layers, so it is harder to distill down the information into a single layer.

Table 5 Results of using different textual encoders on intrinsic evaluation. WL and TE refer to Word-Level and Transformer-Encoder respectively. We observe a consistent pattern of improvement from the simplest model (WL) to LSTM. Transformer-encoders, however, fail to effectively connect language with vision in our experiments.
Fig. 3 We construct a visually grounded version of BERT using image-caption pairs. In the training phase, the frozen pre-trained BERT encodes the caption, and an alignment M followed by an LSTM layer on top of BERT is trained to predict the corresponding image vector. In the fine-tuning phase, the learned alignment M is attached on top of BERT followed by a classifier. This alignment will force the BERT representations to follow the learned visual alignment during fine-tuning.
Contextualized Visual Grounding
While we successfully showed the benefit of visual grounding for word embeddings on a wide range of tasks, whether or not visual grounding helps deep attention-based contextualized language models (transformers) on current sentence-level language tasks is still under debate (Yun et al, 2021; Iki and Aizawa, 2021; Tan and Bansal, 2020). While some approaches report slight improvements (Sileo, 2021), it is mostly believed that visually grounded transformer models such as VL-BERT (Su et al, 2019) not only bring no improvements for language tasks but might even distort the linguistic knowledge obtained from textual corpora for solving natural language understanding tasks (Tan and Bansal, 2020; Yun et al, 2021). The main backbone of all transformers is stacking multiple attention layers (Vaswani et al, 2017), briefly explained in Section 6. Transformer-based language models (e.g. BERT; Devlin et al, 2018) are trained on a masked language modeling task: some words of the context are masked out and the model is trained to predict the masked tokens with correct choices. After this step (often called pre-training), the model is then fine-tuned on downstream tasks such as paraphrase detection (Dolan and Brockett, 2005) and sentiment classification. Given the large amount of training data, more textual context, and powerful transformers, one might argue that visual grounding does not bring new information for solving the current NLP tasks (Tan and Bansal, 2020). We are interested to see if our simple grounding approach could be beneficial for transformer-based language models. For this aim, we applied our grounding approach to BERT (Devlin et al, 2018). The BERT encoder processes the input sentence through stacked attention layers (see Figure 3) and outputs a fixed-dimensional vector per token. We can, therefore, treat it as a word-embedding model: given the sentence S_j = [w_1, w_2, ..., w_n] with n words, it outputs T_j = [t_1, t_2, ..., t_n], where t_i denotes the contextualized encoding of the word w_i. The classifier is a multi-layer-perceptron network generating the final output following the BERT encoder. As shown in Figure 3, similar to our proposed model, we train a linear mapping M followed by an LSTM encoder to predict an image vector given its caption. After the training phase (see the lower box), for each classification task, the pre-trained model has to be fine-tuned. For this step, an MLP is added on top of the mapping M for fine-tuning on downstream tasks (see the upper box). In the fine-tuning phase, the '[cls]' tokens encode the given input through multiple attention layers and the rest of the tokens are discarded (Devlin et al, 2018). In a nutshell, our approach adds the learned alignment M between the pre-trained BERT encoder and its classifier. This alignment is applied to the BERT encoding to align its final representation to vision without deteriorating its textual information.
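A compact rendering of the grounding (pre-training) stage in Figure 3, using the Hugging Face transformers library, is sketched below; the layer names, pooling choices, and example sentence are assumptions made for illustration, not the released implementation. After grounding, the trained alignment M would be kept between the BERT encoder and the task classifier during fine-tuning, either frozen or trainable, as described in the evaluation below.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class GroundedBERT(nn.Module):
    """Frozen BERT encoder, a trainable alignment M, and an LSTM that predicts the image vector."""
    def __init__(self, bert_name="bert-base-cased", grounded_dim=1024, img_dim=2048):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        for p in self.bert.parameters():          # BERT stays frozen during grounding
            p.requires_grad = False
        self.M = nn.Linear(self.bert.config.hidden_size, grounded_dim)   # 768 -> 1024
        self.lstm = nn.LSTM(grounded_dim, grounded_dim, batch_first=True)
        self.to_image = nn.Linear(grounded_dim, img_dim)

    def forward(self, input_ids, attention_mask):
        tokens = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        aligned = self.M(tokens)                  # token-wise alignment into the grounded space
        _, (h_n, _) = self.lstm(aligned)
        return self.to_image(h_n.squeeze(0))      # predicted 2048-d image vector

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = GroundedBERT()
batch = tokenizer(["a dog lying on the floor"], return_tensors="pt", padding=True)
pred = model(batch["input_ids"], batch["attention_mask"])   # train against Inception-V3 vectors with MSE
```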
Evaluation: We fine-tuned and evaluated our pre-trained grounded BERT on the General Language Understanding Evaluation (GLUE) benchmark 4 (Wang et al, 2018) implemented in the Huggingface 5 library (Wolf et al, 2019). The GLUE benchmark consists of nine natural language understanding tasks: single-sentence tasks, SST-2 and CoLA (Warstadt et al, 2019); paraphrasing and similarity tasks, MRPC (Dolan and Brockett, 2005), QQP 6 , and STS-B (Cer et al, 2017); natural language inference tasks, RTE (Wang et al, 2018), QNLI (Rajpurkar et al, 2016), MNLI (Williams et al, 2017), and WNLI (Levesque et al, 2012). Please refer to Wang et al (2018) for full descriptions of the tasks.
Implementation Details: We used the bert-base-cased version of BERT (Devlin et al, 2018) in our experiments. For training, we used the Microsoft COCO 2017 dataset (Lin et al, 2014). The alignment M maps a BERT token t_i ∈ R^768 to g_i ∈ R^1024. Each LSTM layer contains 1024 units. A single-layer neural network with a linear activation function is applied on top of the LSTM to predict the image vector I_j ∈ R^2048. We trained the model on image-caption pairs for 10 epochs using the AdamW optimizer (Loshchilov and Hutter, 2017) with the learning rate set to 5e-5 and a batch size of 64. For fine-tuning on the GLUE benchmark, we followed the huggingface guidelines 7 and fine-tuned the model on each downstream task for 5 epochs with a batch size of 32 and a learning rate of 2e-5.

Results: Table 6 reports the validation scores across the datasets. The WNLI dataset was excluded from the list following Devlin et al (2018) due to inconsistent results. We carried out our grounding experiments with different numbers of LSTM layers. In Table 6, n-LFM-GBERT indicates the grounded BERT with n layers of LSTMs and frozen mapping M while fine-tuning on downstream tasks. The idea behind freezing the mapping (alignment) M while fine-tuning the BERT encoder and the classifier on a particular task is to guide (force) the output representations of BERT to follow the visual alignment. This might then guide the model to a better feature space for solving the task. Considering the mean score, the grounded model with 2-layer LSTMs (2-LFM-GBERT) outperforms the textual BERT by almost 1%. Moreover, we also fine-tuned the alignment M of the best model (2-LFM-GBERT) for each particular task along with the BERT encoder and the classifier, denoted as 2-LTM-GBERT; this model further improves the results. While the improvements are marginal compared to the grounded word embeddings, the table reveals interesting insights. For instance, for those datasets with small training size such as CoLA and MRPC, visual grounding obviously helps. Nonetheless, the results are almost identical for big datasets such as QQP and MNLI. This shows that visual grounding improves the generalization of transformers when training data is limited. However, it seems that a huge amount of textual training data and careful fine-tuning of the models compensate for our simple visual grounding approach on the current NLU tasks. Compared to human language acquisition, these textual language models are very inefficient given the amount of training data they need to be exposed to in order to generate such results. The BERT model, for instance, while it has been pretrained on more than 3 billion tokens, still benefits from our relatively simple visual grounding approach when training data is limited. This opens up the research question of how efficient language models could become using visual grounding. Given the emergence of new NLP tasks and the growing cost of large computational resources for processing massive corpora (Strubell et al, 2019), equipping language models with other modalities such as vision might be a promising direction for efficient language learning.
A Cognitively More Plausible Setup
The performance of static word embedding models varies depending on the diversity and the size of the training corpus (Wang et al, 2019; Elekes et al, 2018; Johns and Jones, 2022). That raises the question of how robust our grounding approach is with respect to these measurements, especially in a more cognitively plausible setup where we have a limited amount of training data available. For this aim, we trained the GloVe model from scratch on two small and different training corpora and measured the improvements of our grounding approach on each corpus using the word similarity benchmarks (see Section 4). We obtained textual and grounded embeddings by training on the TASA and Text8 corpora separately. TASA (Zeno et al, 1995) has served as training corpus for, e.g., Latent Semantic Analysis (Landauer, 1999). Text8 is a small corpus sampled from Wikipedia to allow quick testing of models. 8 Table 7 reports the comparison between textual embeddings and grounded embeddings for both corpora. Our grounded approach (TASA-G and Text8-G) consistently improves on top of textual embeddings (TASA-T and Text8-T) despite the small size and the diversity of the training corpora. The robustness of our grounding method for word-based embeddings across a wide range of tasks provides a firm basis for exploring their usefulness for experimental studies of human cognition.

Table 7 Comparison of our grounded embeddings (*-G) to textual embeddings (*-T) on limited training data. The GloVe algorithm was trained on the TASA and Text8 corpora separately from scratch. Significant improvements are achieved by visual grounding despite limited training data.
Discussion and Conclusion
In this study, we designed a visual grounding framework that effectively produces grounded embeddings for all types of words from different embeddings. Our model, apart from its simplicity, shows excellent generalization performance across a wide range of NLP tasks covering unseen abstract and concrete words. We have made both the grounded embeddings and our framework publicly available. Moreover, we investigated the best strategy of bridging language (here crudely represented as word/sentence embeddings) with vision. Our experiments support the following conclusions.
• Textual word embeddings benefit from vision the most when they are aligned with vision as opposed to being merged. Our alignment strategy informs the textual embeddings about the real world (using images) without deteriorating the statistical knowledge obtained from textual corpora. We showed by example that allowing too much visual information will overwhelm the textual embeddings. Injecting too much visual knowledge into the embeddings benefits concrete words while diminishing the performance on modeling abstract words.

• Textual context plays an important role in the process of grounding isolated word embeddings. We showed that linking textual word embeddings with vision without textual context will drastically distort the semantic space. We believe one reason is that word vectors still need to be aware of the textual context they occur in when they are being coupled with their corresponding visual information in images. Furthermore, an image is a rich source of information and cannot be simply described by a single word. Our grounding framework aligns the word vectors to their corresponding images while keeping the words informed about their textual context.

• More textual context compensates for visual grounding on current downstream NLP tasks. We investigated whether our grounding approach could improve the performance of current deep contextualized language models (e.g. BERT) on downstream NLP tasks such as sentiment classification.
Our experiments show that visual grounding yields considerable improvements when training data is limited. However, when huge amounts of textual data are available, the visually grounded model and the textual model perform almost identically.
In the future, we would like to further explore the potential benefits of visual grounding in current contextualized language models. In particular, we are interested in seeing how efficient (human-like) language model training could become using visual grounding. We as humans are never exposed to the amount of textual data digested by current language models, yet we still master our first language at a very early age. From this perspective, enriching current models for lexical semantics with vision is a first step toward developing cognitively more plausible representations of word meaning.
Table 1 Comparison of our grounded embeddings (ZSG-*) to textual embeddings and other visually grounded embedding models. Our embeddings clearly outperform the others on most of the benchmarks. The metric is Spearman's ρ × 100.

Model          RW    MEN   WSim-353  MTurk-771  SimVerb-3500  SimLex-999  Mean
GloVe          45.5  80.5  73.8      71.5       28.3          40.8        56.7
ZSG-G (ours)   53.2  85.1  78.8      73.2       38.5          52.6        63.6
Fasttext       56.1  81.5  72.2      75.1       37.8          47.1        61.6
ZSG-F (ours)   57.0  84.4  72.3      74.5       39.6          49.6        62.9
VGE-G          52.6  85.1  78.9      73.4       37.4          51.8        63.2
ZSG-G (ours)   53.2  85.1  78.8      73.2       38.5          52.6        63.6

Further baselines are reported on subsets of the benchmarks only: Cap2Both (48.7, 81.9, 71.2, 46.7), Cap2Img (52.3, 84.5, 75.3, 51.5), Park & Myaeng (83.8, 77.5, 58.0), P&M VG. (15.7), and Collell et al. (81.3, 28.6, 41.0).

Table 2 SimLex-999 (Spearman's ρ × 100) results. Conc-q1 and Conc-q4 indicate the most abstract and the most concrete words, respectively. Our model (ZSG-G) generalizes to different word types and outperforms all the others on many of the categories.

Table 4 Effect of different activation functions and number of layers for the mapping M. Non-linear transformations lose the abstractness knowledge and gain concreteness. 'All' refers to the mean score across all the datasets in Table 1.

Table 6 Validation scores on the GLUE benchmark using textual BERT and visually grounded BERT (*GBERT). Visual grounding improves the generalization of the model when training data is limited (e.g., MRPC and CoLA). However, large training data seems to compensate for visual grounding (see the scores of QQP and MNLI). Accuracy/F1 scores are reported for QQP and MRPC, Pearson/Spearman correlations are reported for STS-B, and accuracy on the matched/mismatched sets is reported for MNLI. For the other tasks, accuracy is reported. Numbers in bold indicate obvious improvements over textual BERT.
https://github.com/Hazel1994/Visually Grounded Word Embeddings 2
We assume individual words, as they are realized in English writing conventions, are the verbal expression of lexical concepts in language, and thus the terms "word" and "concept" are used interchangeably in this section.
https://en.wikipedia.org/wiki/English_Wikipedia
4 https://gluebenchmark.com/
5 https://huggingface.co/
https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs
7 https://github.com/huggingface/transformers/tree/master/examples/pytorch/textclassification
https://cs.fit.edu/~mmahoney/compression/textdata.html
Acknowledgements
Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color. M Abdou, A Kulmizev, D Hershcovich, 10.18653/v1/2021.conll-1.9Proceedings of the 25th Conference on Computational Natural Language Learning. the 25th Conference on Computational Natural Language LearningStroudsburg, PA, USAAssociation for Computational LinguisticsAbdou M, Kulmizev A, Hershcovich D, et al (2021) Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color. In: Proceedings of the 25th Conference on Computational Natural Lan- guage Learning. Association for Computational Linguistics, Stroudsburg, PA, USA, pp 109-132, https://doi.org/10.18653/v1/2021.conll-1.9
Reconstructing maps from text. J E Avery, R L Goldstone, M N Jones, Cognitive Systems Research. 70Avery JE, Goldstone RL, Jones MN (2021) Reconstructing maps from text. Cognitive Systems Research 70:101-108
Abstraction in perceptual symbol systems. L W Barsalou, 10.1098/rstb.2003.1319Philosophical Transactions of the Royal Society B: Biological Sciences. 358Barsalou LW (2003) Abstraction in perceptual symbol systems. Philosophical Transactions of the Royal Society B: Biological Sciences 358(1435):1177- 1187. https://doi.org/10.1098/rstb.2003.1319
Grounded Cognition. L W Barsalou, 10.1146/annurev.psych.59.103006.093639Annual Review of Psychology. 591Barsalou LW (2008) Grounded Cognition. Annual Review of Psychology 59(1). https://doi.org/10.1146/annurev.psych.59.103006.093639
Enriching word vectors with subword information. P Bojanowski, E Grave, A Joulin, Transactions of the association for computational linguistics. 5Bojanowski P, Grave E, Joulin A, et al (2017) Enriching word vectors with subword information. Transactions of the association for computational linguistics 5:135-146
Incorporating visual semantics into sentence representations within a grounded space. P Bordes, E Zablocki, L Soulier, 10.18653/v1/D19-1064Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsBordes P, Zablocki E, Soulier L, et al (2019) Incorporating visual semantics into sentence representations within a grounded space. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China, pp 696-707, https://doi.org/10.18653/v1/D19-1064, URL https:// www.aclweb.org/anthology/D19-1064
The Problem of Definition. A M Borghi, F Binkofski, 10.1007/978-1-4614-9539-0_1SpringerBriefs in Psychology. SpringerBorghi AM, Binkofski F (2014) The Problem of Definition, Springer, New York, NY, pp 1-17. SpringerBriefs in Psychology, https://doi.org/10.1007/ 978-1-4614-9539-0 1, URL https://doi.org/10.1007/978-1-4614-9539-0 1
Reversal of the concreteness effect in a patient with semantic dementia. S D Breedin, E M Saffran, H B Coslett, 10.1080/02643299408251987Cognitive Neuropsychology. 116Breedin SD, Saffran EM, Coslett HB (1994) Reversal of the concreteness effect in a patient with semantic dementia. Cognitive Neuropsychology 11(6):617- 660. https://doi.org/10.1080/02643299408251987
Multimodal distributional semantics. E Bruni, N K Tran, M Baroni, Journal of Artificial Intelligence Research. 49Bruni E, Tran NK, Baroni M (2014) Multimodal distributional semantics. Journal of Artificial Intelligence Research 49:1-47
Concreteness ratings for 40 thousand generally known English word lemmas. M Brysbaert, A B Warriner, V Kuperman, 10.3758/s13428-013-0403-5Behavior Research Methods. 463Brysbaert M, Warriner AB, Kuperman V (2014) Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods 46(3):904-911. https://doi.org/10.3758/s13428-013-0403-5
How many words do we know? practical estimates of vocabulary size dependent on word definition, the degree of language input and the participant's age. M Brysbaert, M Stevens, P Mandera, Frontiers in psychology. 71116Brysbaert M, Stevens M, Mandera P, et al (2016) How many words do we know? practical estimates of vocabulary size dependent on word defini- tion, the degree of language input and the participant's age. Frontiers in psychology 7:1116
Speaking, seeing, understanding: Correlating semantic models with conceptual representation in the brain. L Bulat, S Clark, E Shutova, 10.18653/v1/D17-1113Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingCopenhagen, DenmarkAssociation for Computational LinguisticsBulat L, Clark S, Shutova E (2017) Speaking, seeing, understanding: Cor- relating semantic models with conceptual representation in the brain. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Copen- hagen, Denmark, pp 1081-1091, https://doi.org/10.18653/v1/D17-1113, URL https://aclanthology.org/D17-1113
D Cer, M Diab, E Agirre, arXiv:170800055Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprintCer D, Diab M, Agirre E, et al (2017) Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:170800055
Microsoft coco captions: Data collection and evaluation server. X Chen, H Fang, T Y Lin, arXiv:150400325arXiv preprintChen X, Fang H, Lin TY, et al (2015) Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:150400325
Learning phrase representations using rnn encoder-decoder for statistical machine translation. K Cho, B Van Merrienboer, C Gulcehre, Conference on Empirical Methods in Natural Language Processing. Cho K, van Merrienboer B, Gulcehre C, et al (2014) Learning phrase repre- sentations using rnn encoder-decoder for statistical machine translation. In: Conference on Empirical Methods in Natural Language Processing (EMNLP 2014)
Learning language through pictures. G Chrupa La, Kádárá, A Alishahi, 10.3115/v1/P15-2019Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingBeijing, China2Short Papers). Association for Computational LinguisticsChrupa la G, KádárÁ, Alishahi A (2015) Learning language through pictures. In: Proceedings of the 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computa- tional Linguistics, Beijing, China, pp 112-118, https://doi.org/10.3115/v1/ P15-2019, URL https://www.aclweb.org/anthology/P15-2019
Imagined visual representations as multimodal embeddings. Collell Talleda, G Zhang, T Moens, M F , Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), AAAI. the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), AAAICollell Talleda G, Zhang T, Moens MF (2017) Imagined visual representa- tions as multimodal embeddings. In: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), AAAI, pp 4378-4384
Visual and Affective Multimodal Models of Word Meaning in Language and Mind. S De Deyne, D J Navarro, G Collell, 10.1111/cogs.12922Cognitive Science. 451De Deyne S, Navarro DJ, Collell G, et al (2021) Visual and Affective Multi- modal Models of Word Meaning in Language and Mind. Cognitive Science 45(1). https://doi.org/10.1111/cogs.12922
Imagenet: A large-scale hierarchical image database. J Deng, W Dong, R Socher, 2009 IEEE conference on computer vision and pattern recognition. IeeeDeng J, Dong W, Socher R, et al (2009) Imagenet: A large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition, Ieee, pp 248-255
J Devlin, M W Chang, K Lee, arXiv:181004805Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprintDevlin J, Chang MW, Lee K, et al (2018) Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:181004805
Automatically constructing a corpus of sentential paraphrases. W B Dolan, C Brockett, Proceedings of the Third International Workshop on Paraphrasing (IWP2005). the Third International Workshop on Paraphrasing (IWP2005)Dolan WB, Brockett C (2005) Automatically constructing a corpus of senten- tial paraphrases. In: Proceedings of the Third International Workshop on Paraphrasing (IWP2005)
Incorporating nesterov momentum into adam. T Dozat, International Conference on Learning Representations. Dozat T (2016) Incorporating nesterov momentum into adam. International Conference on Learning Representations
Resources to examine the quality of word embedding models trained on n-gram data. A Elekes, A Englhardt, M Schäler, Proceedings of the 22nd Conference on Computational Natural Language Learning. the 22nd Conference on Computational Natural Language LearningElekes A, Englhardt A, Schäler M, et al (2018) Resources to examine the quality of word embedding models trained on n-gram data. In: Proceedings of the 22nd Conference on Computational Natural Language Learning, pp 423-432
Placing search in context: The concept revisited. L Finkelstein, E Gabrilovich, Y Matias, Proceedings of the 10th international conference on World Wide Web. the 10th international conference on World Wide WebFinkelstein L, Gabrilovich E, Matias Y, et al (2001) Placing search in context: The concept revisited. In: Proceedings of the 10th international conference on World Wide Web, pp 406-414
SimVerb-3500: A large-scale evaluation set of verb similarity. D Gerz, I Vulić, F Hill, 10.18653/v1/D16-1235Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational LinguisticsAustin, TexasGerz D, Vulić I, Hill F, et al (2016) SimVerb-3500: A large-scale evalu- ation set of verb similarity. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Com- putational Linguistics, Austin, Texas, pp 2173-2182, https://doi.org/10. 18653/v1/D16-1235, URL https://www.aclweb.org/anthology/D16-1235
Effects of Categorization on Color Perception. R L Goldstone, 10.1111/j.1467-9280.1995.tb00514.xPsychological Science. 65Goldstone RL (1995) Effects of Categorization on Color Perception. Psy- chological Science 6(5). https://doi.org/10.1111/j.1467-9280.1995.tb00514. x
Vector-Space Models of Semantic Representation From a Cognitive Perspective: A Discussion of Common Misconceptions. F Günther, L Rinaldi, M Marelli, https:/journals.sagepub.com/doi/full/10.1177/1745691619861372?casa_token=1ExD6W7c-d8AAAAA%3AMsVbOpTHuyJAeDkB-Ezrl1okbzKD653dG9rfDxVSDWD8_sx5C8UEux73TV29m3Ep4qaix2OnvckPerspectives on Psychological Science. 146Günther F, Rinaldi L, Marelli M (2019) Vector-Space Models of Semantic Representation From a Cognitive Perspective: A Discussion of Common Mis- conceptions. Perspectives on Psychological Science 14(6):1006-1033. https: //doi.org/10.1177/1745691619861372, URL https://journals.sagepub.com/ doi/full/10.1177/1745691619861372?casa token=1ExD6W7c-d8AAAAA% 3AMsVbOpTHuyJAeDkB-Ezrl1okbzKD653dG9rfDxVSDWD8 sx5C8UEux73TV29m3Ep4qaix2Onvck
Images of the unseen: extrapolating visual representations for abstract and concrete words in a data-driven computational model. F Günther, M A Petilli, A Vergallito, 10.1007/s00426-020-01429-7Psychological Research. Günther F, Petilli MA, Vergallito A, et al (2020) Images of the unseen: extrapolating visual representations for abstract and concrete words in a data-driven computational model. Psychological Research https://doi.org/ 10.1007/s00426-020-01429-7
Large-scale learning of word relatedness with constraints. G Halawi, G Dror, E Gabrilovich, Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. the 18th ACM SIGKDD international conference on Knowledge discovery and data miningHalawi G, Dror G, Gabrilovich E, et al (2012) Large-scale learning of word relatedness with constraints. In: Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pp 1406- 1414
. Z S Harris, 10.1080/00437956.1954.11659520Distributional Structure. WORD. 102-3Harris ZS (1954) Distributional Structure. WORD 10(2-3). https://doi.org/ 10.1080/00437956.1954.11659520
Incorporating visual features into word embeddings: A bimodal autoencoder-based approach. M Hasegawa, T Kobayashi, Y Hayashi, IWCS 2017 -12th International Conference on Computational Semantics -Short papers. Hasegawa M, Kobayashi T, Hayashi Y (2017) Incorporating visual features into word embeddings: A bimodal autoencoder-based approach. In: IWCS 2017 -12th International Conference on Computational Semantics -Short papers, URL https://www.aclweb.org/anthology/W17-6912
Simlex-999: Evaluating semantic models with (genuine) similarity estimation. F Hill, R Reichart, A Korhonen, Computational Linguistics. 414Hill F, Reichart R, Korhonen A (2015) Simlex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics 41(4):665- 695
Long short-term memory. S Hochreiter, J Schmidhuber, Neural computation. 98Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural compu- tation 9(8):1735-1780
CogniVal: A Framework for Cognitive Word Embedding Evaluation. N Hollenstein, A De La Torre, N Langer, Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL). the 23rd Conference on Computational Natural Language Learning (CoNLL) Hollenstein N, de la Torre A, Langer N, et al (2019) CogniVal: A Framework for Cognitive Word Embedding Evaluation. In: Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL). Association for Computational Linguistics, Stroudsburg, PA, USA, https://doi.org/10.18653/v1/K19-1050
Effect of visual extensions on natural language understanding in vision-and-language models. T Iki, A Aizawa, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingIki T, Aizawa A (2021) Effect of visual extensions on natural language understanding in vision-and-language models. In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp 2189-2196
Batch normalization: Accelerating deep network training by reducing internal covariate shift. S Ioffe, C Szegedy, PMLRInternational conference on machine learning. Ioffe S, Szegedy C (2015) Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: International conference on machine learning, PMLR, pp 448-456
Content matters: Measures of contextual diversity must consider semantic content. B T Johns, M N Jones, Journal of Memory and Language. 123:104313 Johns BT, Jones MN (2022) Content matters: Measures of contextual diversity must consider semantic content. Journal of Memory and Language 123:104313
Representing word meaning and order information in a composite holographic lexicon. M N Jones, D J Mewhort, Psychological review. 11411Jones MN, Mewhort DJ (2007) Representing word meaning and order infor- mation in a composite holographic lexicon. Psychological review 114(1):1
Learning image embeddings using convolutional neural networks for improved multi-modal semantics. D Kiela, L Bottou, 10.3115/v1/D14-1005Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)Doha, QatarAssociation for Computational LinguisticsKiela D, Bottou L (2014) Learning image embeddings using convolutional neural networks for improved multi-modal semantics. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing (EMNLP). Association for Computational Linguistics, Doha, Qatar, pp 36-45, https://doi.org/10.3115/v1/D14-1005, URL https://www.aclweb. org/anthology/D14-1005
Learning visually grounded sentence representations. D Kiela, A Conneau, A Jabri, 10.18653/v1/N18-1038Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, Louisiana1Long Papers). Association for Computational LinguisticsKiela D, Conneau A, Jabri A, et al (2018) Learning visually grounded sen- tence representations. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Association for Computa- tional Linguistics, New Orleans, Louisiana, pp 408-418, https://doi.org/10. 18653/v1/N18-1038, URL https://www.aclweb.org/anthology/N18-1038
Illustrative language understanding: Large-scale visual grounding with image search. J Kiros, W Chan, G Hinton, Kiros J, Chan W, Hinton G (2018) Illustrative language understanding: Large-scale visual grounding with image search. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Melbourne, Australia, pp 922-933, https://doi.org/10.18653/v1/P18-1085, URL https://www.aclweb.org/anthology/P18-1085
Women, Fire, and Dangerous Things. G Lakoff, 10.7208/chicago/9780226471013.001.0001University of Chicago PressLakoff G (1987) Women, Fire, and Dangerous Things. University of Chicago Press, https://doi.org/10.7208/chicago/9780226471013.001.0001
Albert: A lite bert for selfsupervised learning of language representations. Z Lan, M Chen, S Goodman, International Conference on Learning Representations. Lan Z, Chen M, Goodman S, et al (2019) Albert: A lite bert for self- supervised learning of language representations. In: International Conference on Learning Representations
Latent Semantic Analysis (LSA), a disembodied learning machine, acquires human word meaning vicariously from language alone. T K Landauer, 10.1017/S0140525X99382145Behavioral and Brain Sciences. 224Landauer TK (1999) Latent Semantic Analysis (LSA), a disembodied learning machine, acquires human word meaning vicariously from lan- guage alone. Behavioral and Brain Sciences 22(4). https://doi.org/10.1017/ S0140525X99382145
A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. T K Landauer, S T Dumais, 10.1037/0033-295X.104.2.211Psychological Review. 1042Landauer TK, Dumais ST (1997) A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representa- tion of knowledge. Psychological Review 104(2). https://doi.org/10.1037/ 0033-295X.104.2.211
A view from cognitive linguistics. R W Langacker, 10.1017/S0140525X99392141Behavioral and Brain Sciences. 224Langacker RW (1999) A view from cognitive linguistics. Behavioral and Brain Sciences 22(4). https://doi.org/10.1017/S0140525X99392141
Multimodal Semantic Learning from Child-Directed Input. A Lazaridou, G Chrupa La, R Fernández, 10.18653/v1/N16-1043Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesStroudsburg, PA, USAAssociation for Computational LinguisticsLazaridou A, Chrupa la G, Fernández R, et al (2016) Multimodal Semantic Learning from Child-Directed Input. In: Proceedings of the 2016 Confer- ence of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Stroudsburg, PA, USA, https://doi.org/10.18653/v1/N16-1043
Multimodal Word Meaning Induction From Minimal Exposure to Natural Text. A Lazaridou, M Marelli, M Baroni, 10.1111/cogs.12481Cognitive Science. 41Lazaridou A, Marelli M, Baroni M (2017) Multimodal Word Meaning Induc- tion From Minimal Exposure to Natural Text. Cognitive Science 41. https: //doi.org/10.1111/cogs.12481
The winograd schema challenge. H Levesque, E Davis, L Morgenstern, Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning. Levesque H, Davis E, Morgenstern L (2012) The winograd schema challenge. In: Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning
Microsoft coco: Common objects in context. T Y Lin, M Maire, S Belongie, European conference on computer vision. SpringerLin TY, Maire M, Belongie S, et al (2014) Microsoft coco: Common objects in context. In: European conference on computer vision, Springer, pp 740-755
. I Loshchilov, F Hutter, arXiv:171105101Decoupled weight decay regularization. arXiv preprintLoshchilov I, Hutter F (2017) Decoupled weight decay regularization. arXiv preprint arXiv:171105101
A Taste of Words: Linguistic Context and Perceptual Simulation Predict the Modality of Words. M Louwerse, L Connell, 10.1111/j.1551-6709.2010.01157.xCognitive Science. 352Louwerse M, Connell L (2011) A Taste of Words: Linguistic Context and Perceptual Simulation Predict the Modality of Words. Cognitive Science 35(2):381-398. https://doi.org/10.1111/j.1551-6709.2010.01157.x
Language Encodes Geographical Information. M M Louwerse, R A Zwaan, 10.1111/j.1551-6709.2008.01003.xCognitive Science. 331Louwerse MM, Zwaan RA (2009) Language Encodes Geographical Informa- tion. Cognitive Science 33(1):51-73. https://doi.org/10.1111/j.1551-6709. 2008.01003.x
Distributional semantics of objects in visual scenes in comparison to text. T Lüddecke, A Agostini, M Fauth, 10.1016/j.artint.2018.12.009Artificial Intelligence. 274Lüddecke T, Agostini A, Fauth M, et al (2019) Distributional semantics of objects in visual scenes in comparison to text. Artificial Intelligence 274. https://doi.org/10.1016/j.artint.2018.12.009
Producing high-dimensional semantic spaces from lexical co-occurrence. K Lund, C Burgess, 10.3758/BF03204766Behavior Research Methods, Instruments, & Computers. 282Lund K, Burgess C (1996) Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments, & Computers 28(2). https://doi.org/10.3758/BF03204766
Better word representations with recursive neural networks for morphology. T Luong, R Socher, C Manning, Proceedings of the Seventeenth Conference on Computational Natural Language Learning. the Seventeenth Conference on Computational Natural Language LearningSofia, BulgariaAssociation for Computational LinguisticsLuong T, Socher R, Manning C (2013) Better word representations with recur- sive neural networks for morphology. In: Proceedings of the Seventeenth Conference on Computational Natural Language Learning. Association for Computational Linguistics, Sofia, Bulgaria, pp 104-113, URL https://www. aclweb.org/anthology/W13-3512
Explaining human performance in psycholinguistic tasks with models of semantic similarity based on prediction and counting: A review and empirical validation. P Mandera, E Keuleers, M Brysbaert, 10.1016/j.jml.2016.04.001Journal of Memory and Language. 92Mandera P, Keuleers E, Brysbaert M (2017) Explaining human performance in psycholinguistic tasks with models of semantic similarity based on prediction and counting: A review and empirical validation. Journal of Memory and Language 92. https://doi.org/10.1016/j.jml.2016.04.001
Catastrophic interference in predictive neural network models of distributional semantics. W M Mannering, M N Jones, Computational Brain & Behavior. 41Mannering WM, Jones MN (2021) Catastrophic interference in predictive neu- ral network models of distributional semantics. Computational Brain & Behavior 4(1):18-33
The Representation of Object Concepts in the Brain. A Martin, 10.1146/annurev.psych.57.102904.190143Annual Review of Psychology. 581Martin A (2007) The Representation of Object Concepts in the Brain. Annual Review of Psychology 58(1):25-45. https://doi.org/10.1146/annurev.psych. 57.102904.190143
Mapping concrete and abstract meanings to new words using verbal contexts. Second Language Research. A Mestres-Missé, T F Münte, A Rodriguez-Fornells, 10.1177/026765831351266830Mestres-Missé A, Münte TF, Rodriguez-Fornells A (2014) Mapping concrete and abstract meanings to new words using verbal contexts. Second Language Research 30(2):191-223. https://doi.org/10.1177/0267658313512668
Efficient Estimation of Word Representations in Vector Space. T Mikolov, K Chen, G Corrado, International Conference on Learning Representations. Mikolov T, Chen K, Corrado G, et al (2013) Efficient Estimation of Word Representations in Vector Space. International Conference on Learning Representations
Semantic representation of abstract and concrete words: A minireview of neural evidence. M Montefinese, https:/journals.physiology.org/doi/full/10.1152/jn.00065.2019Journal of Neurophysiology. 1215Montefinese M (2019) Semantic representation of abstract and concrete words: A minireview of neural evidence. Journal of Neurophysiology 121(5):1585-1587. https://doi.org/10.1152/jn.00065.2019, URL https:// journals.physiology.org/doi/full/10.1152/jn.00065.2019
Abstractness, imagery, and meaningfulness in paired-associate learning. A Paivio, 10.1016/S0022-5371(65)80064-01016/S0022-5371(65)80064-0Journal of Verbal Learning and Verbal Behavior. 41Paivio A (1965) Abstractness, imagery, and meaningfulness in paired-associate learning. Journal of Verbal Learning and Verbal Behavior 4(1):32-38. https: //doi.org/10.1016/S0022-5371(65)80064-0
A computational study on word meanings and their distributed representations via polymodal embedding. J Park, Myaeng Sh, Proceedings of the Eighth International Joint Conference on Natural Language Processing. the Eighth International Joint Conference on Natural Language ProcessingTaipei, Taiwan1Long Papers). Asian Federation of Natural Language ProcessingPark J, Myaeng Sh (2017) A computational study on word meanings and their distributed representations via polymodal embedding. In: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Asian Federation of Natural Language Process- ing, Taipei, Taiwan, pp 214-223, URL https://www.aclweb.org/anthology/ I17-1022
Liii. on lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin philosophical magazine and journal of science. K Pearson, 2Pearson K (1901) Liii. on lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin philosophical magazine and journal of science 2(11):559-572
Glove: Global Vectors for Word Representation. J Pennington, R Socher, C Manning, 10.3115/v1/D14-1162Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational LinguisticsStroudsburg, PA, USAPennington J, Socher R, Manning C (2014) Glove: Global Vectors for Word Representation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Com- putational Linguistics, Stroudsburg, PA, USA, https://doi.org/10.3115/v1/ D14-1162
Adapting Deep Network Features to Capture Psychological Representations: An Abridged Report. J C Peterson, J T Abbott, T L Griffiths, 10.24963/ijcai.2017/697Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization. the Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence OrganizationCaliforniaPeterson JC, Abbott JT, Griffiths TL (2017) Adapting Deep Network Fea- tures to Capture Psychological Representations: An Abridged Report. In: Proceedings of the Twenty-Sixth International Joint Conference on Arti- ficial Intelligence. International Joint Conferences on Artificial Intelligence Organization, California, https://doi.org/10.24963/ijcai.2017/697
Data-driven computational models reveal perceptual simulation in word processing. M A Petilli, F Günther, A Vergallito, 10.1016/j.jml.2020.104194Journal of Memory and Language. 117Petilli MA, Günther F, Vergallito A, et al (2021) Data-driven computational models reveal perceptual simulation in word processing. Journal of Memory and Language 117. https://doi.org/10.1016/j.jml.2020.104194
Squad: 100,000+ questions for machine comprehension of text. P Rajpurkar, J Zhang, K Lopyrev, arXiv:160605250arXiv preprintRajpurkar P, Zhang J, Lopyrev K, et al (2016) Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:160605250
Differential context effects in the comprehension of abstract and concrete verbal materials. P J Schwanenflugel, E J Shoben, 10.1037/0278-7393.9.1.82Journal of Experimental Psychology: Learning, memory, and cognition. 91Schwanenflugel PJ, Shoben EJ (1983) Differential context effects in the comprehension of abstract and concrete verbal materials. Journal of Exper- imental Psychology: Learning, memory, and cognition 9(1):82-102. https: //doi.org/10.1037/0278-7393.9.1.82
Context Availability and the Processing of Abstract and Concrete Words in Sentences. P J Schwanenflugel, R W Stowe, 10.2307/748013Reading Research Quarterly. 241114Schwanenflugel PJ, Stowe RW (1989) Context Availability and the Processing of Abstract and Concrete Words in Sentences. Reading Research Quarterly 24(1):114. https://doi.org/10.2307/748013
Context availability and the recall of abstract and concrete words. P J Schwanenflugel, C Akin, W M Luh, 10.3758/BF03208259Memory & Cognition. 201Schwanenflugel PJ, Akin C, Luh WM (1992) Context availability and the recall of abstract and concrete words. Memory & Cognition 20(1):96-104. https://doi.org/10.3758/BF03208259
Learning zero-shot multifaceted visually grounded word embeddings via multi-task training. H Shahmohammadi, Hpa Lensch, R H Baayen, 10.18653/v1/2021.conll-1.12Proceedings of the 25th Conference on Computational Natural Language Learning. Association for Computational Linguistics, Online. the 25th Conference on Computational Natural Language Learning. Association for Computational Linguistics, OnlineShahmohammadi H, Lensch HPA, Baayen RH (2021) Learning zero-shot multifaceted visually grounded word embeddings via multi-task train- ing. In: Proceedings of the 25th Conference on Computational Natural Language Learning. Association for Computational Linguistics, Online, pp 158-170, https://doi.org/10.18653/v1/2021.conll-1.12, URL https:// aclanthology.org/2021.conll-1.12
H Shahmohammadi, M Heitmeier, E Shafaei-Bajestan, arXiv:220615381Visual grounding of abstract and concrete words: A response to günther et al. (2020). arXiv preprintShahmohammadi H, Heitmeier M, Shafaei-Bajestan E, et al (2022) Visual grounding of abstract and concrete words: A response to günther et al. (2020). arXiv preprint arXiv:220615381 URL https://arxiv.org/abs/2206. 15381
Learning grounded meaning representations with autoencoders. C Silberer, M Lapata, 10.3115/v1/P14-1068Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. the 52nd Annual Meeting of the Association for Computational LinguisticsBaltimore, Maryland1Long Papers). Association for Computational LinguisticsSilberer C, Lapata M (2014) Learning grounded meaning representations with autoencoders. In: Proceedings of the 52nd Annual Meeting of the Associ- ation for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Baltimore, Maryland, pp 721-732, https: //doi.org/10.3115/v1/P14-1068, URL https://www.aclweb.org/anthology/ P14-1068
arXiv:210313942Sileo D (2021) Visual grounding strategies for text-only natural language processing. arXiv preprintSileo D (2021) Visual grounding strategies for text-only natural language processing. arXiv preprint arXiv:210313942
Pictures of Appetizing Foods Activate Gustatory Cortices for Taste and Reward. W K Simmons, A Martin, L W Barsalou, 10.1093/cercor/bhi038Cerebral Cortex. 1510Simmons WK, Martin A, Barsalou LW (2005) Pictures of Appetizing Foods Activate Gustatory Cortices for Taste and Reward. Cerebral Cortex 15(10):1602-1608. https://doi.org/10.1093/cercor/bhi038
Recursive deep models for semantic compositionality over a sentiment treebank. R Socher, A Perelygin, J Wu, Proceedings of the 2013 conference on empirical methods in natural language processing. the 2013 conference on empirical methods in natural language processingSocher R, Perelygin A, Wu J, et al (2013) Recursive deep models for semantic compositionality over a sentiment treebank. In: Proceedings of the 2013 conference on empirical methods in natural language processing, pp 1631- 1642
Representing Properties Locally. K O Solomon, L W Barsalou, 10.1006/cogp.2001.0754Cognitive Psychology. 432Solomon KO, Barsalou LW (2001) Representing Properties Locally. Cognitive Psychology 43(2):129-169. https://doi.org/10.1006/cogp.2001.0754
Perceptual simulation in property verification. K O Solomon, L W Barsalou, 10.3758/BF03196856Memory & Cognition. 322Solomon KO, Barsalou LW (2004) Perceptual simulation in property ver- ification. Memory & Cognition 32(2):244-259. https://doi.org/10.3758/ BF03196856
Dropout: a simple way to prevent neural networks from overfitting. N Srivastava, G Hinton, A Krizhevsky, The journal of machine learning research. 151Srivastava N, Hinton G, Krizhevsky A, et al (2014) Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research 15(1):1929-1958
Energy and policy considerations for deep learning in NLP. E Strubell, A Ganesh, A Mccallum, 10.18653/v1/P19-1355Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational LinguisticsFlorence, ItalyStrubell E, Ganesh A, McCallum A (2019) Energy and policy considerations for deep learning in NLP. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computa- tional Linguistics, Florence, Italy, pp 3645-3650, https://doi.org/10.18653/ v1/P19-1355, URL https://aclanthology.org/P19-1355
W Su, X Zhu, Y Cao, arXiv:190808530Vl-bert: Pre-training of generic visuallinguistic representations. arXiv preprintSu W, Zhu X, Cao Y, et al (2019) Vl-bert: Pre-training of generic visual- linguistic representations. arXiv preprint arXiv:190808530
Rethinking the inception architecture for computer vision. C Szegedy, V Vanhoucke, S Ioffe, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionSzegedy C, Vanhoucke V, Ioffe S, et al (2016) Rethinking the inception archi- tecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2818-2826
Vokenization: Improving language understanding with contextualized, visual-grounded supervision. H Tan, M Bansal, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)OnlineAssociation for Computational LinguisticsTan H, Bansal M (2020) Vokenization: Improving language understand- ing with contextualized, visual-grounded supervision. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP). Association for Computational Linguistics, Online, pp 2066-2080, URL https://aclanthology.org/2020.emnlp-main.162
Abstract word deficits in aphasia: Evidence from semantic priming. L K Tyler, H E Moss, F Jennings, Neuropsychology. 93354Tyler LK, Moss HE, Jennings F (1995) Abstract word deficits in aphasia: Evidence from semantic priming. Neuropsychology 9(3):354
American Psychological Association. G R Vandenbos, Washington, DC, URLAPA Dictionary of PsychologyVandenBos GR (ed) (2015) APA Dictionary of Psychology, 2nd edn. Amer- ican Psychological Association, Washington, DC, URL http://www.jstor. org/stable/j.ctv1chrw2d
Attention is all you need. A Vaswani, N Shazeer, N Parmar, Advances in neural information processing systems. Vaswani A, Shazeer N, Parmar N, et al (2017) Attention is all you need. In: Advances in neural information processing systems, pp 5998-6008
A Wang, A Singh, J Michael, arXiv:180407461Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprintWang A, Singh A, Michael J, et al (2018) Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:180407461
Evaluating word embedding models: Methods and experimental results. B Wang, A Wang, F Chen, APSIPA transactions on signal and information processing. 8Wang B, Wang A, Chen F, et al (2019) Evaluating word embedding mod- els: Methods and experimental results. APSIPA transactions on signal and information processing 8
The selective impairment of semantic memory. E K Warrington, 10.1080/14640747508400525The Quarterly journal of experimental psychology. 274Warrington EK (1975) The selective impairment of semantic memory. The Quarterly journal of experimental psychology 27(4):635-657. https://doi. org/10.1080/14640747508400525
Neural network acceptability judgments. A Warstadt, A Singh, S R Bowman, Transactions of the Association for Computational Linguistics. 7Warstadt A, Singh A, Bowman SR (2019) Neural network acceptability judgments. Transactions of the Association for Computational Linguistics 7:625-641
You Can't Drink a Word: Lexical and Individual Emotionality Affect Subjective Familiarity Judgments. C Westbury, 10.1007/s10936-013-9266-2Journal of Psycholinguistic Research. 435Westbury C (2014) You Can't Drink a Word: Lexical and Individual Emotion- ality Affect Subjective Familiarity Judgments. Journal of Psycholinguistic Research 43(5). https://doi.org/10.1007/s10936-013-9266-2
Wriggly, squiffy, lummox, and boobs: What makes some words funny. C Westbury, G Hollis, 10.1037/xge0000467Journal of Experimental Psychology: General. 1481Westbury C, Hollis G (2019) Wriggly, squiffy, lummox, and boobs: What makes some words funny? Journal of Experimental Psychology: General 148(1). https://doi.org/10.1037/xge0000467
Imagery, Context Availabilty, Contextual Constraint and Abstractness. K Wiemer-Hastings, J Krug, X Xu, Proceedings of the Annual Meeting of the Cognitive Science Society. the Annual Meeting of the Cognitive Science SocietyMahwah, NJ23Wiemer-Hastings K, Krug J, Xu X (2001) Imagery, Context Availabilty, Con- textual Constraint and Abstractness. In: Proceedings of the Annual Meeting of the Cognitive Science Society, vol 23. Lawrence Erlbaum, Mahwah, NJ, pp 1134-1139
A broad-coverage challenge corpus for sentence understanding through inference. A Williams, N Nangia, S R Bowman, arXiv:170405426arXiv preprintWilliams A, Nangia N, Bowman SR (2017) A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:170405426
Huggingface's transformers: State-ofthe-art natural language processing. T Wolf, L Debut, V Sanh, arXiv:191003771arXiv preprintWolf T, Debut L, Sanh V, et al (2019) Huggingface's transformers: State-of- the-art natural language processing. arXiv preprint arXiv:191003771
B Xu, N Wang, T Chen, arXiv:150500853Empirical evaluation of rectified activations in convolutional network. arXiv preprintXu B, Wang N, Chen T, et al (2015) Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:150500853
Does vision-and-language pretraining improve lexical grounding?. T Yun, C Sun, E Pavlick, 10.18653/v1/2021.findings-emnlp.370Findings of the Association for Computational Linguistics: EMNLP 2021. Association for Computational Linguistics. Punta Cana, Dominican RepublicYun T, Sun C, Pavlick E (2021) Does vision-and-language pretraining improve lexical grounding? In: Findings of the Association for Computa- tional Linguistics: EMNLP 2021. Association for Computational Linguis- tics, Punta Cana, Dominican Republic, pp 4357-4366, https://doi.org/ 10.18653/v1/2021.findings-emnlp.370, URL https://aclanthology.org/2021. findings-emnlp.370
The educator's word frequency guide. S Zeno, S H Ivens, R T Millard, Touchstone Applied Science Associates. Zeno S, Ivens SH, Millard RT, et al (1995) The educator's word frequency guide. Touchstone Applied Science Associates
RELATIONSHIP OF THE LANGUAGE DISTANCE TO ENGLISH ABILITY OF A COUNTRY *
Cao Xinxin (caoxinxin@mail.nwpu.edu.cn), School of Foreign Studies, Northwestern Polytechnical University, Xi'an
Lei Xiaolan, School of Computer Science, Northwestern Polytechnical University, Xi'an
Murtadha Ahmed (a.murtadha@mail.nwpu.edu.cn), Northwestern Polytechnical University, Xi'an
Language difference is one of the factors that hinder the acquisition of second language skills. In this article, we introduce a novel solution that leverages the strength of deep neural networks to measure the semantic dissimilarity between languages based on their word distributions in the embedding space of a multilingual pre-trained language model (e.g., BERT). Then, we empirically examine the effectiveness of the proposed semantic language distance (SLD) in explaining the consistent variation in the English ability of countries, which is proxied by their performance in the Internet-Based Test of English as a Foreign Language (TOEFL iBT). The experimental results show that the language distance has a negative influence on a country's average English ability. Interestingly, the effect is more significant on the speaking and writing subskills, which pertain to the productive aspects of language learning. Besides, we provide specific recommendations for future research directions.
Keywords: English ability · the language distance · pre-trained language models (PLM) · BERT
Introduction
As a special type of human capital, language plays an indispensable role in economic activities. At the aggregate level, the divergence between languages exerts a strong influence on international bilateral trade (Ku & Zussman 2010; Lohmann 2011; Su 2020), migration flows (Adsera & Pytlikova 2015), and cross-border market integration (Choi & Bordia 2020; Fenske & Kala 2021). For individuals, the decision to invest in a new language is likely influenced by the expected benefits and the costs associated with the target language (Chiswick & Miller 1995). On the one hand, multilingual ability facilitates the socioeconomic integration of immigrants in the destination country and improves their economic status to a large extent (Evans et al. 2020; Zorlu & Hartog 2018). On the other hand, linguistic diversity leads to considerable costs.
In the past decades, language ability has been measured through self-reported language proficiency or performance on language proficiency tests (Van der Slik, 2010). For instance, many researchers have adopted various methods, e.g., multidimensional scaling and factor analysis, to explain the consistent differences in English among language groups through their TOEFL performance (Elder 1996; Hale, Rock & Jirele 1982; Oltman, Stricker & Barrows 1988; Swinton & Powers 1980). It was Snow (1998) and Kim and Lee (2010) who first empirically evaluated the effects of linguistic and non-linguistic factors on performance in the paper-based and computer-based TOEFL tests (TOEFL PBT and TOEFL CBT, respectively) at the cross-country level. However, the language distance measures available at the time were rather limited, and these studies applied purely qualitative methods to capture the distance between languages. This paper presents a highly quantitative BERT-based semantic language distance (SLD) approach to explain the differentiation in the English ability of countries through their average TOEFL iBT score. To the best of our knowledge, we are the first to study the influence of the language distance on a country's English ability through the TOEFL iBT score.
The language distance has been widely discussed in the field of Second Language Acquisition (SLA) (Ellis 1989; Gass, Behney & Plonsky 2020; Dekydtspotter, Sprouse & Thyre 2000; Diaubalick & Guijarro-Fuentes 2019); however, due to the complexity of language, identifying the effects of the language distance is rather difficult. Therefore, few empirical strategies, if any, are available to facilitate the present analysis of region-based language proficiency. Besides, the existing methods cannot capture the semantic similarity between languages, which raises the need for an effective method to measure the semantic distance between languages.
With the rapid development of Deep Neural Networks (DNN), approaches based on Pre-trained Language Models (PLM) (e.g., Word2Vec, GloVe, and BERT) have achieved state-of-the-art performance on various Natural Language Processing (NLP) tasks, including text generation, machine translation, text classification, and sentiment analysis (Qiu et al. 2020; Campos et al. 2019; Peters, Ruder & Smith 2019). The ultimate goal of a PLM is to map words that occur in the same context to semantically close points in the latent space. To better illustrate how words are semantically distributed, we visualize ten words of the closest and the furthest languages (i.e., German and Vietnamese) based on the BERT model in Figure 1. As can be clearly seen, the German words are indeed much closer to English than the Vietnamese ones (e.g., 10: kill). In other words, words that are semantically related are located very close to each other in the embedding space. Since these models are pre-trained in an unsupervised manner (i.e., no labeled data is needed), the embeddings of a pair of languages are readily available. Motivated by this intuition, this study leverages the semantic distance between words of two given languages to measure their language distance. Specifically, it represents each word with a multi-dimensional vector and then uses cosine similarity to compute the distance between two given words from different languages (see the sketch below).
Figure 1. An illustrative example of BERT-based SLD for words in English, German, and Vietnamese, colored blue, green, and orange, respectively. Each word is labeled with a number, which denotes its respective translation to English.
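The word-level idea just described can be made concrete with a short sketch. The pairing of translation-equivalent words and the averaging into a single distance are illustrative assumptions for this example, not necessarily the exact aggregation used later in the paper.

```python
# Hedged sketch of the word-level semantic language distance (SLD) idea: given
# embedding vectors for translation-equivalent word pairs in two languages,
# average their cosine dissimilarity over all pairs.
import numpy as np

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def word_level_sld(aligned_pairs):
    """aligned_pairs: list of (vector_in_lang1, vector_in_lang2) for aligned words."""
    dissimilarities = [1.0 - cosine_similarity(u, v) for u, v in aligned_pairs]
    return sum(dissimilarities) / len(dissimilarities)
```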
The main contributions of this paper are summarized as follows:
• A neural network-based approach is introduced to measure semantic dissimilarity between a pair of languages based on their word distributions in the latent space of the pre-trained language model, e.g., BERT;
• We validate the proposed method on the real data of TOEFL iBT total and subsection scores, since the language distance is an important factor impeding the acquisition of new language skills.
The remainder of this paper is organized as follows. Section 2 presents the traditional language distance methods. Section 3 describes the proposed BERT-based SLD. Section 4 provides the data and the validation process of the BERT-based SLD. Finally, we provide specific future research directions and conclusions in Section 6.
Traditional Language Distance Methods
Task Definition
Generally, the language distance denotes the degree of similarity between languages. It is meaningful to distinguish the linguistic distance from the language distance, since there is no clear definition of the two terms in the literature. The linguistic distance is an umbrella term, which includes the language distance and the perceived linguistic distance (Leusink 2017). The language distance is the objective measure, which can be obtained by comparing features of different languages such as vocabulary, syntax, semantics, phonetic form, or grammatical structures. The subjective side is labeled the perceived linguistic distance, indicating the distance that learners perceive to exist between languages, which may not actually exist (Kellerman 1983, 1995). Therefore, the major concern of this study is the language distance. It should be noted that we only attempt to examine and expand the current quantitative measures of the language distance, rather than use the concept of the language distance to operationalize cross-linguistic differences, which would largely oversimplify the current understanding of the multi-dimensional nature of cross-linguistic differences.
Traditional Language Distance Methods
During the past several decades, typological linguists and experts in historical and comparative linguistics have started to construct quantitative measures to facilitate the analysis of L1 influence on later language learning (Bouckaert,
Tree Method
Apart from the WALS index, there exists another source of language information, the Ethnologue 1 project, which aims at evaluating the family relations among all existing languages (Campbell 2008). The tree approach was a purely qualitative method developed in phylogenetics, a subfield of historical and comparative linguistics, to retrace the evolution of the world's languages. However, it has become an alternative measure of the language distance since the concept of a language proximity index was introduced by Adsera and Pytlikova (2015). For language proximity, 1 denotes the same language and 0 stands for languages without any family relations. Between the two extremes, a value, e.g., 0.1, 0.25, 0.45, or 0.7, is assigned according to the number of shared branches. The language distance is then defined as LD = 1 − proximity. Despite the expert opinions behind it, the tree-based language distance shows only marginal variation. What is more, the Ethnologue project is no longer freely available.
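A minimal sketch of this tree-based measure is given below. Mapping the number of shared family-tree branches to the example proximity values quoted above is an illustrative simplification; the exact Ethnologue-based assignment may differ.

```python
# Minimal sketch of the tree-based language distance. The lookup table uses the
# example proximity values quoted in the text (0.1, 0.25, 0.45, 0.7); the actual
# assignment in Adsera and Pytlikova (2015) may differ.
PROXIMITY_BY_SHARED_BRANCHES = {0: 0.0, 1: 0.1, 2: 0.25, 3: 0.45, 4: 0.7}

def tree_language_distance(shared_branches, same_language=False):
    # more shared branches -> higher proximity -> smaller distance
    proximity = 1.0 if same_language else PROXIMITY_BY_SHARED_BRANCHES.get(shared_branches, 0.7)
    return 1.0 - proximity  # LD = 1 - proximity
```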
ASJP Method
The Automated Similarity Judgment Program (ASJP 2 ) method achieves a considerable improvement in performance compared to the aforementioned alternatives. It was developed based on the findings of the German Max Planck Institute for Evolutionary Anthropology. ASJP was originally designed for generating language trees, and it automatically computes the distance between languages based on their phonetic representations. Specifically, it begins with a list of 40 words, e.g., parts of the human body, selected from the Swadesh corpus (Swadesh 1952), and transcribes them into a phonetic script called ASJP code. For instance, the personal pronoun I is transcribed as Ei in English and as wataSi in Japanese. It then uses the Levenshtein distance, i.e., the minimum edit distance between two strings, to calculate both the local (each word pair) and global (non-related items) distances between two sets of phonetic strings. The resulting normalized and divided Levenshtein distance (LDND) is interpreted as the degree of phonetic dissimilarity between two languages (see details in Isphording & Otten 2014, and the sketch below). Although the ASJP method has been shown internationally to be a powerful method for computing language distances, it captures only the phonetic differences between languages.
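The following sketch illustrates the Levenshtein-based computation just described. The word lists here are placeholders, and the global correction (the "divided" part of LDND) is only indicated schematically; the actual ASJP implementation may differ in its details.

```python
# Hedged sketch of the ASJP idea: length-normalized Levenshtein distance between
# ASJP-coded word forms, averaged over the 40-item list and divided by the average
# distance over non-aligned pairs to correct for chance resemblance.
def levenshtein(a, b):
    # classic dynamic-programming edit distance between two ASJP-coded strings
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,                              # deletion
                           dp[i][j - 1] + 1,                              # insertion
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))     # substitution
    return dp[len(a)][len(b)]

def ldn(a, b):
    # length-normalized Levenshtein distance for one word pair
    return levenshtein(a, b) / max(len(a), len(b))

def ldnd(wordlist1, wordlist2):
    # local: average over the aligned items; global: average over non-aligned pairs
    local = sum(ldn(a, b) for a, b in zip(wordlist1, wordlist2)) / len(wordlist1)
    cross = [ldn(a, b) for i, a in enumerate(wordlist1)
                       for j, b in enumerate(wordlist2) if i != j]
    return local / (sum(cross) / len(cross))

# e.g., ldnd(english_40_items, japanese_40_items) with ASJP-coded forms such as "Ei" and "wataSi"
```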
Overall, the aforementioned measures of language distance are very helpful for comparing relatively peripheral language pairs with a restricted number of observations. However, they also have some disadvantages in their calculation: across the two measures, the language distance is derived from comparisons of small amounts of data. Besides, these methods unfortunately cannot capture the semantic distance between languages, which is of great value in linguistic studies (Robins 2014; Palmer 1981).
BERT SLD Method
As BERT is a crucial component of our approach, this section highlights its basic concepts.
Bidirectional Encoder Representations from Transformers (BERT)
Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al. 2018) is a state-of-the-art language representation model. BERT is built upon the Transformer architecture and was released in the tensor2tensor library (Vincent et al. 2008; Devlin et al. 2018). Two basic steps are involved: pre-training and fine-tuning. BERT was pre-trained on large unlabeled corpora from Wikipedia and books. To train a deep bidirectional representation, two tasks were performed: the Masked Language Model (MLM) task, which randomly masks a percentage of the input tokens and then requires the model to predict them, and the Next Sentence Prediction (NSP) task, which encourages BERT to learn the relationship between sentences. In the fine-tuning step, BERT has been used widely in various downstream NLP tasks (e.g., sentiment analysis, topic classification) (Karimi, Rossi & Prati 2020; Devlin et al. 2018). Specifically, such methods initialize the weights of the network with the pre-trained weights of BERT and then rely on a labeled corpus to fine-tune these weights in order to improve performance on the target task. Despite the significant improvement brought by these methods, their performance heavily relies on labeled data, which may not be readily available in real-world scenarios.
In our work, we simply exploit BERT at the inference step to benefit from the success of contextual representations (i.e., word distributions) for computing the semantic distance between two texts of different languages. In other words, we rely on the pre-trained weights of BERT without a further fine-tuning step, for two reasons: (1) our approach simply computes the semantic distance between two texts based on their corresponding representations generated by BERT;
(2) the fine-tuning step requires an additional manually annotated bilingual corpus and a defined downstream task (e.g., natural language inference), which is labor-intensive and may not be readily available in real-world scenarios. In our experiment, we use mBERT 3 (the multilingual version of BERT) as a weight generator. Specifically, the output of the language model is a matrix E ∈ R^{n×d}, where n is the number of words in the vocabulary and d is the dimension of the embedding, e.g., 768. We then retrieve the corresponding vectors for the bilingual dictionaries built from the Bible corpus to compute the semantic distance.
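A minimal sketch of this inference-only use of mBERT is shown below. It assumes the public "bert-base-multilingual-cased" checkpoint from the Hugging Face transformers library (the exact mBERT release used in our experiments may differ) and averages the subword vectors of a word to obtain a single 768-dimensional representation.

```python
# Minimal sketch, assuming the public multilingual BERT checkpoint; pre-trained
# weights are used at inference time only, with no fine-tuning.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

def word_vector(word: str) -> torch.Tensor:
    """Return a d-dimensional (d = 768) vector for a word by averaging the
    contextual vectors of its subword pieces."""
    inputs = tokenizer(word, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]   # (seq_len, 768)
    return hidden[1:-1].mean(dim=0)                     # drop [CLS] and [SEP]

print(word_vector("water").shape)   # torch.Size([768])
```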
Bible Corpus
In our implementation, we use a multilingual parallel corpus created from 100 translations of the Bible 4. Several characteristics of the Bible corpus are worth mentioning. First, with the global missionary expansion, the Bible has more than 100 translated versions, exceeding any other single work in literature (Christodouloupoulos & Steedman 2015). This highly parallel corpus offers excellent and valuable information for low-resource language pairs. The next outstanding feature of this multilingual corpus is its text size. The average size of a fiction novel is about 100k words, while the Old Testament and the New Testament together contain around 800k English words (Christodouloupoulos & Steedman 2015), providing an adequate amount of text for the model. Besides, the structure of the Bible is also unique: it is divided into books, chapters and verses, which allows every language to be aligned automatically at the verse level without ambiguity. Another characteristic worth noting is that the translation principle behind the Bible is basically "sense-for-sense", since the aim of most missionary linguists was to convey the message from God as accurately as possible. Thus, with this content-sensitive translation approach, the semantic meaning of the Bible is well preserved in almost every language (Christodouloupoulos & Steedman 2015).
BERT SLD computation
We leverage the pre-trained language model BERT, which has achieved state-of-the-art performance in various NLP tasks (Gamallo et al. 2017). The key idea behind it is to map words that occur in the same context to close points in the latent space, in other words, to convert the input words into semantic vectors. Using the bilingual dictionaries provided in the GitHub repository https://anonymous.4open.science/r/bible-corpus-1FF2, we now describe how to use the resulting vectors to measure the semantic language distance.
Cosine Similarity
Given a bilingual dictionary, the output of the language model is a matrix E ∈ R^{n×d}, where n is the number of words in the vocabulary and d is the dimension of the embedding, e.g., 768. Thus, each row is a d-dimensional vector that represents the corresponding word's distribution. Given two words of different languages, we compute the cosine similarity between their respective vectors as follows:

sim(w_i, w_j) = cos(e_i, e_j)    (1)

where e_i and e_j denote the vectors of the source word w_i and the target word w_j, respectively, and cos(·) is the cosine similarity function. Then, we average the scores over the bilingual dictionary D_{l_k} of a given language pair l_k to compute the semantic similarity between the languages:

SLS(l_k) = (1 / |D_{l_k}|) Σ_{(w_i, w_j) ∈ D_{l_k}} sim(w_i, w_j)    (2)

in which SLS(·) can be interpreted as how close the words of a given language l_k are to their parallel English counterparts in the latent space. As the ultimate goal is to compute the semantic distance, we convert the semantic similarity into the semantic language distance as follows:

SLD(l_k) = 1 − SLS(l_k)    (3)
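The following minimal sketch implements Equations (1)-(3). Since it is meant to run stand-alone, the mBERT word vectors are replaced by random placeholders, and the bilingual dictionary is a tiny hypothetical sample.

```python
# Minimal sketch of Equations (1)-(3); random vectors stand in for mBERT vectors
# so the snippet runs on its own.
import numpy as np

rng = np.random.default_rng(0)
def fake_vector() -> np.ndarray:          # stand-in for an mBERT word vector
    return rng.normal(size=768)

# Hypothetical dictionary entries: (source word vector, parallel English word vector)
dictionary = [(fake_vector(), fake_vector()) for _ in range(100)]

def cosine(e_i: np.ndarray, e_j: np.ndarray) -> float:      # Eq. (1)
    return float(e_i @ e_j / (np.linalg.norm(e_i) * np.linalg.norm(e_j)))

def semantic_language_similarity(pairs) -> float:           # Eq. (2)
    return sum(cosine(e_i, e_j) for e_i, e_j in pairs) / len(pairs)

sls = semantic_language_similarity(dictionary)
sld = 1.0 - sls                                              # Eq. (3)
print(f"SLS = {sls:.3f}, SLD = {sld:.3f}")
```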
The language distance values derived from the pre-trained BERT model and from the traditional methods are presented in Table 1.
Validation Evidence
In this section, we aim to examine the predictive validity of the proposed language distance method for the average English ability of a country by investigating the relationship between BERT-based language distance and TOEFL iBT performance. First, we describe the data used in our implementation. Second, we examine the relationship between BERT-based SLD and TOEFL iBT total and subsection scores through two steps of analysis: (1) correlations of the BERT-based language distance, as well as the ASJP and Tree language distance measures, with TOEFL iBT total and subsection scores; and (2) a Multivariate Analysis of Variance (MANOVA) model, an extension of ANOVA, to further evaluate the effects of BERT-based language distance on TOEFL iBT total and subsection scores. Both the correlational analysis and the MANOVA analysis are performed with IBM SPSS Version 23 (IBM Corp 2018).
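Although all analyses reported below were run in SPSS, the correlational step can be illustrated with the following short sketch; the CSV file and column names are hypothetical.

```python
# Illustrative only: Pearson correlations between BERT-based SLD and TOEFL iBT scores.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("toefl_2019_by_country.csv")   # hypothetical country-level file
for section in ["reading", "listening", "speaking", "writing", "total"]:
    r, p = pearsonr(df["bert_sld"], df[section])
    print(f"{section:>9}: r = {r:+.3f}, p = {p:.4f}")
```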
Data
The Test of English as a Foreign Language (TOEFL) is designed to measure the English proficiency of people whose mother tongue is not English. According to the Educational Testing Service (ETS 2021), TOEFL has been accepted for admission purposes by over 11,000 institutions, including universities, colleges and even governments, in more than 150 countries, which attests to its efficacy in measuring English language proficiency. The TOEFL test has undergone two major changes to better assess language learning. The original paper-based TOEFL (TOEFL PBT) was introduced in 1962 and consisted of three parts: reading, listening and speaking tasks. With the release of the computer-based TOEFL (TOEFL CBT) in 1998, writing became one component of the test. The next big improvement was the introduction of the internet-based TOEFL (TOEFL iBT) in September 2005. The TOEFL iBT integrates all four language skills, namely reading, listening, speaking and writing, which are designed to measure, respectively, the ability to understand academic written materials, the ability to understand spoken English in an academic setting, the ability to speak in English, and the ability to write in English.
The total score of the TOEFL iBT is 120 points, and each of the four subsections accounts for 30 points. A sample TOEFL iBT test paper can be found on the TOEFL website, where readers can take a closer look at the structure, administration and detailed requirements of the test. For scoring, the reading and listening parts are scored automatically, while the speaking and writing sections are scored by a combination of automated scoring and highly experienced, well-trained human raters, to minimize possible rater bias and provide a more comprehensive and accurate evaluation of one's ability.
Each year, the TOEFL board releases the Test and Score Data Summary for the TOEFL iBT, in which total and subsection score means from January to December, with all examinees classified by country or region, are available (see Table 2 for an example from the 2019 dataset). The fair and objective scoring process makes these data reliable for comparing the performance of examinees from different geographic regions or countries. To guarantee reliability and validity, the TOEFL website does not report means for small groups of fewer than 30 participants. Although the ETS board does not intend to encourage the practice of ranking countries, the surprisingly consistent and recurring variation pattern in scores does shed some light on the overall English proficiency profiles of specific regions or countries. As can be seen in Table 3, across the three years, European participants consistently achieve higher scores than participants from other continents, especially Africa. Likewise, in Table 2, both the total and subsection scores of German examinees exceed those of Saudi Arabia by a large margin, which is in accordance with their real English proficiency profiles.

In this study, given the impact of the COVID-19 pandemic on the 2020 TOEFL iBT test (Ockey 2021), we use the TOEFL iBT Test and Score Data Summary from 2017 to 2019 as our data. Around 170 countries take the TOEFL iBT test each year; however, we can investigate the possible effects of language distance only when the target country has one official language, since otherwise we would need to assign two values to a single sample, which is infeasible. Therefore, excluding bilingual or multilingual nations, and further considering the coverage of the BERT-based SLD, our sample is composed of 91 countries in which 34 different languages (including English) are spoken in total (more details can be found in the Supporting Online Information). To best showcase the effect of language distance, the sample covers languages from 9 language families, including Indo-European, Afro-Asiatic, Sino-Tibetan, Uralic, Austronesian, Japonic, Koreanic, Tai-Kadai and Turkic. Recent studies have consistently shown that TOEFL total scores do not constitute a meaningful indication of language proficiency on their own; instead, the subscores are more meaningful (e.g., Ginther & Yan 2018; Cho & Bridgeman 2012). If one only examines the total scores, these fine-grained differences may be masked. Therefore, we include both the TOEFL total and subscale scores to enable a more accurate operationalization of English language proficiency.

Table 4 presents the basic descriptive statistics for the TOEFL iBT total and subsection score means of the 91 countries by year. Both the average TOEFL iBT total and subsection scores show little variation across the three years, with the mean clustering around 80 points. However, it is interesting to note that the same score profile pattern recurs in all three years: speaking > listening > writing > reading, which represents the typical TOEFL score profile, or English language skills profile, of most countries in our sample. This specific pattern also holds for the average score of all countries that take the TOEFL iBT test in each year. A multitude of reasons can account for the oral sections outscoring the written sections.
First, from the perspective of TOEFL administration, the reading section is more time-limited than the other parts: examinees are required to read three to four academic passages of more than 700 words and respond to 30-40 multiple-choice questions in around 60 minutes (TOEFL 2022). Based on the ETS standard for mapping TOEFL iBT test scores onto the Common European Framework of Reference (CEFR) (TOEFL 2022) (see Table 5), the cut score of the speaking section at the C1, B2, B1 and A2 levels remains the highest among the four subsections, which partly suggests that the speaking section might be easier, or that examinees tend to achieve higher scores in the speaking part. Second, from the aspect of participant characteristics, students from different regions demonstrate different score profiles associated with those places (Ginther & Yan 2018). The oral part never appears difficult for frequent users of English, e.g., Americans, while in countries like China examinees are more competent in the written parts of the exam, i.e., reading and writing, as shown in the Test and Score Data Summary each year. Countries with a score profile of higher oral competence and lower written competence outnumber those with the converse score profile, thus leading to the intriguing pattern discussed above.
Results and Discussion
Descriptive Statistics
Correlational Analysis
We provide Pearson correlations between language distance and TOEFL iBT total and subsection scores in Table 6. The relationship between language distance and English proficiency is operationalized by the correlation coefficients between language distance values and TOEFL iBT test scores. As many scholars have repeatedly pointed out, there are no solid guidelines for interpreting correlation coefficients, especially in studies related to language tests, because the dependent variable (e.g., language proficiency test scores) is often determined by a number of factors besides the studied independent variable (e.g., language distance) (e.g., Ginther & Yan 2018; Kim & Lee 2010; Van der Slik 2010). For any single independent variable, it is unrealistic to observe a correlation coefficient that indicates a very large effect, e.g., 0.75. In such cases, the interpretation should be based on the expected magnitude of the effect: even a small correlation coefficient, e.g., 0.3, can be perceived as strong and substantively meaningful (e.g., Cho & Bridgeman 2012; Ginther & Yan 2018; Rosenthal & Rubin 1982; Sackett, Borneman & Connelly 2008). In our study, as expected, consistent negative correlations between language distance and TOEFL iBT total and section scores occur in 2017, 2018 and 2019, and the correlational patterns are similar across the three years.
In the 2019 cohort, an interesting correlation pattern between the language distance methods and TOEFL iBT scores emerges. As presented in Table 6, the correlations between the BERT-based SLD, ASJP and Tree methods and the TOEFL iBT total score cluster around −0.2 (r_total = −0.207*, −0.220*, −0.237*, respectively). However, when broken down into subsection scores, the correlation pattern shows much variability. Strongly negative correlations can be observed in both the speaking and writing subsections, with similar magnitudes across the BERT-based SLD, ASJP and Tree methods (r_speaking = −0.390***, −0.381***, −0.385***; r_writing = −0.316**, −0.391***, −0.408***). For the reading and listening sections, this negative correlation decreases greatly, especially in the reading section, where the correlation coefficients are close to zero (r_reading = −0.015, −0.011, −0.032). Similar patterns appear in the 2018 and 2017 cohorts, with minor fluctuations in effect size: the overall correlation coefficients in 2018 are slightly lower than in 2019, while those of the 2017 cohort are a bit higher.
Overall, across the three years, there is a moderately strong negative correlation between language distance and the TOEFL iBT total score. Furthermore, consistent and strong negative correlations are observed for the TOEFL speaking and writing subsections. However, the correlation coefficients drop substantially for the listening subscale, where the effect is not significant, and for the reading subscale the correlation essentially disappears. TOEFL iBT total and subsection scores thus demonstrate great variation in correlation magnitude, which suggests that section scores might be a more meaningful and accurate measure of language skill than the total test score, as recently shown by Ginther and Yan (2018). The analysis of Pearson correlation coefficients seems to indicate that language distance exerts more impact on the productive skills (speaking and writing) than on the receptive skills (reading and listening). However, such an interpretation should be supported by careful and thorough investigation.
MANOVA Analysis
To determine whether there is a statistically significant difference in TOEFL iBT performance between groups with varying degrees of closeness to English, we categorize the samples into groups by language distance. For example, BERT language distance values range from 0 to 0.38; countries with values of 0-0.19 are clustered into Group A and those with 0.20-0.38 into Group B, so that the languages in Group A are considered closer to English while those in Group B are more distant from English. A similar operation is conducted for the ASJP and Tree cohorts, with a slight difference in implementation. ASJP values range from 0 to 1.04, but there are no values between 0 and 0.62, which would cause an imbalance in sample size between the two groups; therefore, we instead place the cut within the 0.62-1.04 range, so that values of 0-0.83 are categorized into Group A and values of 0.84-1.04 into Group B. The operation for the Tree cohort is the same as for the ASJP method. The mean TOEFL iBT scores of the two groups are then compared through a one-way MANOVA, a technique used to determine the influence of independent categorical variables on multiple dependent variables, e.g., the total score and the four subsection scores in the current study. It differs slightly from a one-way ANOVA model in that it allows for the simultaneous evaluation of more than one predicted variable. Quantile-Quantile (Q-Q) plots show that the predicted variables are normally distributed across the three years, and equality of variance is confirmed through the homogeneity of variance test. A post hoc test is not performed because there are fewer than three groups in this study. In the subsequent sections, we present and discuss only the 2017 cohort, since there is little variation across the three years. The MANOVA results for BERT, ASJP and Tree in the 2017 cohort are presented in Table 7, Table 8 and Table 9, respectively.
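For readers who prefer an open-source illustration of this grouping and MANOVA step (the reported analyses themselves were run in SPSS), a sketch is given below; the data file and column names are hypothetical, and the 0.20 cut mirrors the BERT grouping described above.

```python
# Illustrative only: one-way MANOVA after splitting countries into Group A / B
# by BERT-based SLD, using statsmodels instead of SPSS.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("toefl_2017_by_country.csv")                 # hypothetical file
df["group"] = (df["bert_sld"] >= 0.20).map({True: "B", False: "A"})

fit = MANOVA.from_formula(
    "reading + listening + speaking + writing + total ~ group", data=df
)
print(fit.mv_test())
```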
In the BERT case (see Table 7), the MANOVA results indicate a significant difference between the means of Groups A and B across the total score and all four subsections (p_reading = 0.003, p_listening = 0.004, p_speaking = 0.001, p_writing = 0.009, p_total = 0.001). For the ASJP and Tree methods, on the contrary, as shown in Table 8 and Table 9, no statistically meaningful difference can be observed for the TOEFL total score means. For the reading and listening sections, the mean scores of Groups A and B are quite similar, and the differences are not statistically significant. More interestingly, for the speaking and writing sections, the mean score differences show a large variation; in the writing section in particular, the effect between Group A and Group B appears even slightly stronger than in the BERT case (p_asjp = 0.001, p_tree = 0.001). In summary, when the two groups are divided by the ASJP or Tree measure, meaningful variation can be observed only in the speaking and writing sections; however, there is a consistent and significant difference between the two groups clustered by the BERT language distance in terms of both the TOEFL total and the four subscale means, which partly suggests that the BERT language distance is more suitable for explaining TOEFL iBT scores. Although the four TOEFL subsections, i.e., reading, listening, speaking and writing, are designed to measure four separate subskills of language learning, they are integrated to some extent, in that one cannot accomplish any single task in any of the four sections without a full comprehension of the semantic meaning of the context (TOEFL 2022). Pearson correlation coefficients suggest strong positive correlations between the BERT-based language distance and ASJP, and between BERT and the Tree method (r = 0.878**, r = 0.866**, respectively), yet their focuses are quite distinct. What the BERT model attempts to capture is exactly the semantic differences between languages. Therefore, it is reasonable and justifiable that the BERT-based language distance demonstrates more robustness in explaining the TOEFL score variation.
The MANOVA results seem to support the hypothesis that language distance is a strong predictor of TOEFL iBT productive skills (reflected in speaking and writing). With respect to receptive skills (reflected in reading and listening), the predictive power varies across the different language distance computation methods. For most learners, receptive or passive skills are the first step in learning a second language, and usually function as a springboard to the acquisition of productive or active skills (Rico 2014). In the initial input stage of language learning, language distance plays a minor role; yet in the subsequent output stage, i.e., speaking and writing, the role of language distance becomes more visible and significant. As illustrated above, across three entirely independent measurements of language distance, i.e., BERT, ASJP and Tree, a significant negative correlation remains between language distance and TOEFL iBT speaking and writing scores, which suggests that language distance is likely to hinder the learning of productive competence. The negative impact of language distance on learners' productive skills was also confirmed in a study by Van der Slik (2010), who examined the influence of the mother tongue on immigrants' language proficiency in Dutch: immigrants whose mother tongues do not belong to the Indo-European languages proved to be less proficient in Dutch speaking and writing. However, we need to be careful and cautious when interpreting such results. Language proficiency testing is complicated because it is determined by multiple layers of factors. For instance, in terms of country-level English proficiency, the score profile would be one of the most powerful predictors: many countries show strengths and weaknesses in different subskills, and depending on the subskill (e.g., speaking vs. writing), there can be complex combinations of subskill profiles (e.g., the subscore profiles of students from the Middle East vs. China). When adding up the effects of these profiles, specific patterns, e.g., speaking > listening > writing > reading, are likely to emerge. Additionally, factors such as average schooling time (Kim & Lee 2010; Snow 1998), education quality, average investment in English teaching and learning (Hakuta 1976), the experience of being an ex-colony of Britain or the U.S. (Migge & Léglise 2008; Spencer 2017), a country's reliance on business with English-speaking countries, linguistic diversity in the native country, cultural differences (Muthukrishna et al. 2020), the issue of lingua franca, the power of the native language spoken in the country, etc., could all be meaningful predictive indicators of a country's English proficiency. In addition, TOEFL administration features, the task difficulty of each subtask, and preparation efficacy might also partly contribute to the aforementioned phenomenon. Therefore, in future studies, more empirical evidence and stronger theoretical justification are expected to support the interpretation of the relationship between language distance and predicted English proficiency.
Conclusion and Future Work
This study presents a relatively new, BERT-based neural approach to quantitatively measure the semantic distance between languages, and further validates the method on real TOEFL iBT score data, since language distance is an important factor hindering individuals in specific geographical regions from acquiring new language skills. The experimental results show that language distance, as captured by the current quantitative measures, negatively influences the TOEFL iBT speaking and writing subskills, which pertain to the productive aspect of language learning. Additionally, the introduced BERT-based language distance method outperforms the existing methods in predicting the TOEFL iBT reading and listening subsection scores as well as the total score.
A potential drawback of this study is that the BERT-based language distance computation raises some technical challenges for SLA researchers. It is noteworthy that BERT was trained on large corpora and thus reflects the word distributions of its training data in its latent space. Languages that are represented by large parallel corpora may obtain a better mapping of semantic features, which can bias the language distance computation (Davison, Feldman & Rush 2019; Rama, Beinborn & Eger 2020; Peters, Ruder & Smith 2019). For future work, a regularization term is recommended to discourage the model from learning weights based purely on the amount of data; the technical details, however, need further investigation.
Table 1. Language Distance between 33 Languages and English

Language     Language Family   BERT   ASJP   Tree
Albanian     Indo-European     0.16   0.95   0.90
Arabic       Afro-Asiatic      0.23   0.99   1.00
Armenian     Indo-European     0.36   0.97   0.90
Bulgarian    Indo-European     0.16   0.87   0.90
Burmese      Sino-Tibetan      0.33   1.00   —
Chinese      Sino-Tibetan      0.28   1.00   1.00
Croatian     Indo-European     0.20   0.87   0.90
Czech        Indo-European     0.18   0.91   0.90
Danish       Indo-European     0.18   0.67   0.90
French       Indo-European     0.16   0.89   0.90
German       Indo-European     0.16   0.69   0.55
Greek        Indo-European     0.22   0.97   0.90
Hebrew       Afro-Asiatic      0.23   0.98   1.00
Hungarian    Uralic            0.24   0.95   1.00
Indonesian   Austronesian      0.37   0.99   1.00
Japanese     Japonic           0.38   1.01   1.00
Korean       Koreanic          0.27   1.00   1.00
Lithuanian   Indo-European     0.19   0.95   0.90
Nepali       Indo-European     0.32   0.99   0.90
Norwegian    Indo-European     0.19   0.64   0.75
Persian      Indo-European     0.21   0.92   0.90
Portuguese   Indo-European     0.15   0.90   0.90
Romanian     Indo-European     0.17   0.87   0.90
Russian      Indo-European     0.17   0.95   0.90
Serbian      Indo-European     0.20   0.00   —
Slovak       Indo-European     0.19   0.92   0.90
Slovenian    Indo-European     0.17   0.91   0.90
Spanish      Indo-European     0.17   0.91   0.90
Swedish      Indo-European     0.18   0.62   0.55
Thai         Tai-Kadai         0.21   0.99   1.00
Turkish      Turkic            0.34   0.98   1.00
Ukrainian    Indo-European     0.36   0.94   0.90

Note. Due to the unavailability of several language pairs, there are some missing values, indicated by "—" in the table.
Table 2. An example of the TOEFL iBT dataset in 2019: TOEFL iBT total and section score means with all examinees classified by native country

Native Country   Reading   Listening   Speaking   Writing   Total
Germany          24        26          25         24        98
Hungary          23        24          23         22        92
Ukraine          20        22          22         21        86
Turkey           20        21          20         20        80
Saudi Arabia     16        20          21         18        74

Note. Mean scores for small samples of fewer than 30 test-takers are not reported to ensure reliability. In addition, due to rounding, section scores may not add up to the total scores.
Table 3. TOEFL iBT total and section score means of continents by year

Year   Region           Reading   Listening   Speaking   Writing   Total
2019   AFRICA           17        19          20         19        74
2019   AMERICAS         20        22          22         21        84
2019   ASIA             20        21          21         21        82
2019   EUROPE           22        24          23         22        91
2019   MIDDLE EAST      19        21          22         20        81
2019   PACIFIC REGION   22        24          23         22        91
2018   AFRICA           16        18          20         19        74
2018   AMERICAS         20        22          22         21        84
2018   ASIA             19        21          21         21        82
2018   EUROPE           21        23          23         22        90
2018   MIDDLE EAST      18        21          22         20        80
2018   PACIFIC REGION   21        22          22         22        86
2017   AFRICA           17        18          20         19        74
2017   AMERICAS         20        22          22         21        85
2017   ASIA             19        20          21         21        81
2017   EUROPE           22        23          23         22        90
2017   MIDDLE EAST      18        21          22         20        80
2017   PACIFIC REGION   20        21          22         21        83
Table 4. Descriptive statistics for TOEFL iBT total and subsection score means of studied samples by year

             2019 (N=91)        2018 (N=91)        2017 (N=91)
             M       SD         M       SD         M       SD
Total        84.70   6.842      84.10   7.124      84.13   7.317
Reading      20.10   2.166      19.70   2.178      20.00   2.196
Listening    21.81   2.016      21.46   2.182      21.41   2.231
Speaking     21.98   1.626      21.85   1.763      21.79   1.877
Writing      20.87   1.600      20.89   1.609      21.02   1.725

Table 5. Mapping TOEFL iBT scores into CEFR levels

CEFR level   Total   Reading   Listening   Speaking   Writing
C2           114     29        28          28         29
C1           95      24        22          25         24
B2           72      18        17          20         17
B1           42      4         9           16         13
A2           N/A     N/A       N/A         10         7

Note. N/A: not applicable or not available.
Table 6. Pearson correlations between language distance and TOEFL iBT scores (columns: Reading, Listening, Speaking, Writing, Total; the correlation coefficients are reported in the text above)

Note. *p<0.05, **p<0.01, ***p<0.001, two-tailed.
Table 7. MANOVA results: the effects of BERT language distance on 2017 TOEFL iBT scores

          Reading   Listening   Speaking   Writing   Total
Mean_A    20.56     21.96       22.31      21.41     86.19
Mean_B    19.19     20.59       21.03      20.46     81.14
F         9.283     8.993       11.538     7.074     11.703
Sig       0.003     0.004       0.001      0.009     0.001

Note. Significance level: 0.05

Table 8. MANOVA results: the effects of ASJP language distance on 2017 TOEFL iBT scores

          Reading   Listening   Speaking   Writing   Total
Mean_A    20.05     21.78       23.10      22.35     87.00
Mean_B    20.00     21.32       21.55      20.76     83.57
F         0.011     0.557       7.924      11.382    2.751
Sig       0.916     0.457       0.006      0.001     0.101

Note. Significance level: 0.05

Table 9. MANOVA results: the effects of Tree language distance on 2017 TOEFL iBT scores

          Reading   Listening   Speaking   Writing   Total
Mean_A    20.07     21.80       23.00      22.33     87.00
Mean_B    19.99     21.31       21.53      20.76     83.52
F         0.016     0.605       8.197      11.488    2.859
Sig       0.899     0.439       0.005      0.001     0.094

Note. Significance level: 0.05
Notes
1 Ethnologue: https://www.ethnologue.com/
2 ASJP: https://asjp.clld.org/
3 mBERT: https://anonymous.4open.science/r/bert-DA06
4 Bible corpus: https://anonymous.4open.science/r/bible-corpus-1FF2

Acknowledgments
We would like to thank the AI community for providing valuable open-source information about the BERT model. We would also like to thank the Educational Testing Service (ETS), which offered us authoritative and valuable information about the TOEFL iBT test.

References
Adsera, A., & Pytlikova, M. 2015. The role of language in shaping international migration. The Economic Journal, 125(586), F49-F81. https://doi.org/10.1111/ecoj.12231
Bouckaert, R., Lemey, P., Dunn, M., & Greenhill, S. J. 2012. Mapping the origins and expansion of the Indo-European language family. Science, 337(6097), 957-960. https://doi.org/10.1126/science.1219669
Campbell, L. 2008. Ethnologue: Languages of the world. JSTOR.
Campos, J. R. P., Gamallo, P., & Alegria, I. 2019. Measuring diachronic language distance using perplexity: Application to English, Portuguese, and Spanish. Natural Language Engineering, 26, 433-454. https://doi.org/10.1017/S1351324919000378
Cavalli-Sforza, L. L., Menozzi, P., & Piazza, A. 1994. The history and geography of human genes. Princeton, NJ: Princeton University Press.
Chiswick, B. R., & Miller, P. W. 1995. The endogeneity between language and earnings: International analyses. Journal of Labor Economics, 13(2), 246-288. https://doi.org/10.1086/298374
Cho, Y., & Bridgeman, B. 2012. Relationship of TOEFL iBT scores to academic performance: Some evidence from American universities. Language Testing, 29(3), 421-442. https://doi.org/10.1177/0265532211430368
Choi, Y., & Bordia, S. 2020. The effect of linguistic distance on cross-border merger and acquisition (M&A) deal duration. 2020(1), 15305. https://doi.org/10.5465/AMBPP.2020.15305abstract
Christodouloupoulos, C., & Steedman, M. 2015. A massively parallel corpus: The Bible in 100 languages. Language Resources and Evaluation, 49(2), 375-395. https://doi.org/10.1007/s10579-014-9287-y
Davison, J., Feldman, J., & Rush, A. M. 2019. Commonsense knowledge mining from pretrained models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (pp. 1173-1178). https://doi.org/10.18653/v1/D19-1109
Dekydtspotter, L., Sprouse, R. A., & Thyre, R. 2000. The interpretation of quantification at a distance in English-French interlanguage: Domain specificity and second-language acquisition. Language Acquisition, 8(4), 265-320.
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Diaubalick, T., & Guijarro-Fuentes, P. 2019. The strength of L1 effects on tense and aspect: How German learners of L2 Spanish deal with acquisitional problems. Language Acquisition, 26(3), 282-301.
Educational Testing Service. 2021. ETS Org. https://www.ets.org/s/toefl/pdf/toefl_tsds_data_2020.pdf
Elder, C. 1996. The effect of language background on "foreign" language test performance: The case of Chinese, Italian, and Modern Greek. Language Learning, 46(2), 233-282. https://doi.org/10.1111/j.1467-1770.1996.tb01236.x
Ellis, R. 1989. Understanding second language acquisition (Vol. 31). Oxford: Oxford University Press. doi:10.1016/0346-251x(88)90038-3
Evans, M., Schneider, C., Arnot, M., Fisher, L., Forbes, K., Liu, Y., & Welply, O. 2020. Language development and social integration of students with English as an additional language. https://doi.org/10.1017/9781108656047
Fenske, J., & Kala, N. 2021. Linguistic distance and market integration in India. The Journal of Economic History, 81(1), 1-39.
Gamallo, P., Pichel, J. R., & Alegria, I. 2017. From language identification to language distance. Physica A: Statistical Mechanics and its Applications, 484, 152-162. https://doi.org/10.1016/j.physa.2017.05.011
Gass, S. M., Behney, J., & Plonsky, L. 2020. Second language acquisition: An introductory course. Routledge. https://doi.org/10.1016/j.system.2013.12.019
Ginther, A., & Yan, X. 2018. Interpreting the relationships between TOEFL iBT scores and GPA: Language proficiency, policy, and profiles. Language Testing, 35(2), 271-295. https://doi.org/10.1177/0265532217704010
Gue, L. R., & Holdaway, E. A. 1973. English proficiency tests as predictors of success in graduate studies in education. Language Learning, 23(1), 89-103. https://doi.org/10.1111/j.1467-1770.1973.tb00099.x
Hakuta, K. 1976. A case study of a Japanese child learning English as a second language. Language Learning, 26(2), 321-351. https://doi.org/10.1111/j.1467-1770.1976.tb00280.x
Hale, G. A., Rock, D. A., & Jirele, T. 1982. Confirmatory factor analysis of the Test of English as a Foreign Language. ETS Research Report Series, 1982(2), i-51. https://doi.org/10.1002/j.2333-8504.1982.tb01327.x
Hart-Gonzalez, L., & Lindemann, S. 1993. Expected achievement in speaking proficiency. School of Language Studies, Foreign Services Institute, Department of State, mimeo.
IBM Corp. 2018. IBM SPSS Statistics for Windows (Version 23.0) [computer software]. Armonk, NY: IBM Corp.
Isphording, I. E., & Otten, S. 2014. Linguistic barriers in the destination language acquisition of immigrants. Journal of Economic Behavior & Organization, 105, 30-50. https://doi.org/10.1016/j.jebo.2014.03.027
Karimi, A., Rossi, L., & Prati, A. 2020. Improving BERT performance for aspect-based sentiment analysis. arXiv preprint arXiv:2010.11731.
Kellerman, E. 1983. Now you see it, now you don't. In S. M. Gass & L. Selinker (Eds.), Language Transfer in Language Learning: Issues in Second Language Research, 112-134.
Kellerman, E. 1995. Crosslinguistic influence: Transfer to nowhere? Annual Review of Applied Linguistics, 15, 125-150. https://doi.org/10.1017/S0267190500002658
Kim, M.-H., & Lee, H.-H. 2010. Linguistic and nonlinguistic factors determining proficiency of English as a foreign language: A cross-country analysis. Applied Economics, 42(18), 2347-2364. https://doi.org/10.1080/00036840701857960
Ku, H., & Zussman, A. 2010. Lingua franca: The role of English in international trade. Journal of Economic Behavior & Organization, 75(2), 250-260. https://doi.org/10.1016/j.jebo.2010.03.013
Leusink, J. W. 2017. The influence of linguistic distance on foreign language attrition.
Light, R. L., Xu, M., & Mossop, J. 1987. English proficiency and academic performance of international students. TESOL Quarterly, 21(2), 251-261. https://doi.org/10.2307/3586734
Lohmann, J. 2011. Do language barriers affect trade? Economics Letters, 110(2), 159-162. https://doi.org/10.1016/j.econlet.2010.10.023
McMahon, A., & McMahon, R. 2005. Language classification by numbers. Oxford: Oxford University Press.
Migge, B., & Léglise, I. 2008. Language and colonialism: Applied linguistics in the context of creole communities.
Muthukrishna, M., Bell, A. V., Henrich, J., Curtin, C. M., Gedranovich, A., McInerney, J., & Thue, B. 2020. Beyond Western, educated, industrial, rich, and democratic (WEIRD) psychology: Measuring and mapping scales of cultural and psychological distance. Psychological Science, 31(6), 678-701. https://doi.org/10.1177/0956797620916782
Ockey, G. J. 2021. An overview of COVID-19's impact on English language university admissions and placement tests. Language Assessment Quarterly, 18(1), 1-5.
Oltman, P. K., Stricker, L. J., & Barrows, T. S. 1988. Native language, English proficiency, and the structure of the Test of English as a Foreign Language. ETS Research Report Series, 1988(1), i-36. https://doi.org/10.1002/j.2330-8516.1988.tb00282.x
Palmer, F. R. 1981. Semantics. Cambridge University Press.
Peters, M. E., Ruder, S., & Smith, N. A. 2019. To tune or not to tune? Adapting pretrained representations to diverse tasks. arXiv preprint arXiv:1903.05987.
Qiu, X., Sun, T., Xu, Y., Shao, Y., Dai, N., & Huang, X. 2020. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10), 1872-1897. https://doi.org/10.1007/s11431-020-1647-3
Rama, T., Beinborn, L., & Eger, S. 2020. Probing multilingual BERT for genetic and typological signals. arXiv preprint arXiv:2011.02070.
Rico, L. J. A. 2014. Identifying factors causing difficulties to productive skills among foreign languages learners. Opening Writing Doors Journal, 11(1), 65-86.
Robins, R. H. 2014. General linguistics. Routledge.
Rosenthal, R., & Rubin, D. B. 1982. A simple, general purpose display of magnitude of experimental effect. Journal of Educational Psychology, 74(2), 166-169.
Sackett, P. R., Borneman, M. J., & Connelly, B. S. 2008. High stakes testing in higher education and employment: Appraising the evidence for validity and fairness. American Psychologist, 63(4), 215.
Snow, M. S. 1998. Economic, statistical, and linguistic factors affecting success on the Test of English as a Foreign Language (TOEFL). Information Economics and Policy, 10(2), 159-172. https://doi.org/10.1016/S0167-6245(97)00018-8
Spencer, J. 2017. Colonial language policies and their legacies. In Linguistics in Sub-Saharan Africa (pp. 537-547). De Gruyter Mouton. https://doi.org/10.1515/9783111562520-019
Su, J. 2020. Language distance, verbal communication and foreign trade growth: Theoretical hypothesis and Chinese evidence. Jianghan Tribune, 09, 49-54.
Swadesh, M. 1952. Lexico-statistic dating of prehistoric ethnic contacts: With special reference to North American Indians and Eskimos. Proceedings of the American Philosophical Society, 96(4), 452-463.
Swinton, S. S., & Powers, D. E. 1980. Factor analysis of the Test of English as a Foreign Language for several language groups. ETS Research Report Series, 1980(2), i-79. https://doi.org/10.1002/j.2333-8504.1980.tb01229.x
Test of English as a Foreign Language. 2022. ETS Org. https://www.ets.org/toefl/score-users/about/structure/
Test of English as a Foreign Language. 2022. ETS Org. https://www.ets.org/toefl/score-users/scores-admissions/compare
Test of English as a Foreign Language. 2022. ETS Org. https://www.ets.org/toefl/test-takers/ibt/about/content/
Van der Slik, F. W. 2010. Acquisition of Dutch as a second language: The explanative power of cognate and genetic linguistic distance measures for 11 West European first languages. Studies in Second Language Acquisition, 32(3), 401-432.
Vincent, P., Larochelle, H., Bengio, Y., & Manzagol, P. A. 2008, July. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning (pp. 1096-1103). https://doi.org/10.1145/1390156.1390294
Zorlu, A., & Hartog, J. 2018. The impact of language on socioeconomic integration of immigrants. Institute of Labor Economics. http://dx.doi.org/10.2139/ssrn.3170274
"Is Whole Word Masking Always Better for Chinese BERT?": Probing on Chinese Grammatical Error Correction

Yong Dai (Tencent AI Lab), Linyang Li (Fudan University), Cong Zhou (Tencent AI Lab), Zhangyin Feng (Tencent AI Lab), Enbo Zhao (Tencent AI Lab), Xipeng Qiu (Fudan University), Piji Li (Tencent AI Lab), Duyu Tang (Tencent AI Lab)

Association for Computational Linguistics: ACL 2022, May 22-27, 2022. doi: 10.18653/v1/2022.findings-acl.1; arXiv:2203.00286; https://www.aclanthology.org/2022.findings-acl.1.pdf

Abstract
Whole word masking (WWM), which masks all subwords corresponding to a word at once, makes a better English BERT model (Sennrich et al., 2016). For the Chinese language, however, there is no subword because each token is an atomic character. The meaning of a word in Chinese is different in that a word is a compositional unit consisting of multiple characters. Such difference motivates us to investigate whether WWM leads to better context understanding ability for Chinese BERT. To achieve this, we introduce two probing tasks related to grammatical error correction and ask pretrained models to revise or insert tokens in a masked language modeling manner. We construct a dataset including labels for 19,075 tokens in 10,448 sentences. We train three Chinese BERT models with standard character-level masking (CLM), WWM, and a combination of CLM and WWM, respectively. Our major findings are as follows: First, when one character needs to be inserted or replaced, the model trained with CLM performs the best. Second, when more than one character needs to be handled, WWM is the key to better performance. Finally, when being fine-tuned on sentence-level downstream tasks, models trained with different masking strategies perform comparably.
Introduction
BERT (Devlin et al., 2018) is a Transformer-based pretrained model whose success started with the English language and gradually spread to many other languages. The original BERT model is trained with character-level masking (CLM).1 A certain percentage (e.g., 15%) of tokens in the input sequence is masked, and the model learns to predict the masked tokens.

* Work done during internship at Tencent AI Lab. * indicates equal contributions. † Corresponding author.
1 Next sentence prediction is the other pretraining task adopted in the original BERT paper. However, it is removed in some following works like RoBERTa. We do not consider the next sentence prediction in this work.
It is helpful to note that a word in the input sequence of BERT can be broken into multiple wordpiece tokens (Wu et al., 2016).2 For example, the input sentence "She is undeniably brilliant" is converted to the wordpiece sequence "She is un ##deni ##ably brilliant", where "##" is a special prefix added to indicate that the token should be attached to the previous one. In this case the word "undeniably" is broken into three wordpieces {"un", "##deni", "##ably"}. In standard masked language modeling, CLM may mask any one of them. In this case, if the token "##ably" is masked, it is easier for the model to complete the prediction task because "un" and "##deni" are informative prompts. To address this, whole word masking (WWM) masks all three subtokens (i.e., {"un", "##deni", "##ably"}) of a word at once. For Chinese, however, each token is an atomic character that cannot be broken into smaller pieces. Many Chinese words are compounds consisting of multiple characters (Wood and Connelly, 2009).3 For example, "手机" (cellphone) is a word consisting of two characters "手" (hand) and "机" (machine). Here, learning with WWM would lose the association among the characters that form a word.
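This tokenization behavior can be checked directly with the public Hugging Face tokenizers, as in the short sketch below; the exact subword splits depend on the vocabulary of the checkpoint, so they may not match the example above piece for piece.

```python
# Quick check of English wordpiece splitting vs. atomic Chinese characters.
from transformers import BertTokenizer

en_tok = BertTokenizer.from_pretrained("bert-base-uncased")
zh_tok = BertTokenizer.from_pretrained("bert-base-chinese")

print(en_tok.tokenize("She is undeniably brilliant"))  # English word -> several wordpieces
print(zh_tok.tokenize("手机"))                          # Chinese word -> atomic characters ['手', '机']
```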
In this work, we introduce two probing tasks to study Chinese BERT models' ability at character-level understanding. The first probing task is character replacement: given a sentence and a position where the corresponding character is erroneous, the task is to replace the erroneous character with the correct one. The second probing task is character insertion: given a sentence and the positions where a given number of characters should be inserted, the task is to insert the correct characters. We leverage the benchmark dataset on grammatical error correction (Rao et al., 2020a) and create a dataset including labels for 19,075 tokens in 10,448 sentences.
We train three baseline models on the same text corpus of 80B characters using CLM, WWM, and both CLM and WWM, respectively. We have the following major findings. (1) When one character needs to be inserted or replaced, the model trained with CLM performs the best. Moreover, the model initialized from RoBERTa (Cui et al., 2019) and trained with WWM gradually gets worse with more training steps. (2) When more than one character needs to be handled, WWM is the key to better performance. (3) When evaluated on sentence-level downstream tasks, the impact of these masking strategies is minimal, and the models trained with them perform comparably.
Our Probing Tasks
In this work, we present two probing tasks with the goal of diagnosing the language understanding ability of Chinese BERT models. We present the tasks and dataset in this section.
The first probing task is character replacement, which is a subtask of grammatical error correction. Given a sentence s = {x_1, x_2, ..., x_i, ..., x_n} of n characters and an erroneous span es = [i, i+1, ..., i+k−1] of k characters, the task is to replace es with a new span of k characters.

The second probing task is character insertion, which is also a subtask of grammatical error correction. Given a sentence s = {x_1, x_2, ..., x_i, ..., x_n} of n characters, a position i, and a fixed number k, the task is to insert a span of k characters between the indices i and i+1.
We provide two examples of these two probing tasks with k = 1 in Figure 1. For the character replacement task, the original meaning of the sentence is "these are all my ideas". Due to the misuse of a character at the 7th position, its meaning changed significantly to "these are all my attention". Our character replacement task is to replace the misused character "注" with "主". For the character insertion task, what the writer wants to express is "Human is the most important factor". However, due to the lack of one character between the 5th and 6th positions, its meaning changed to "Human is the heaviest factor". The task is to insert "要" after the 5th position. Both tasks are also extended to multiple characters (i.e., k ≥ 2); examples can be found in Section 3.2.
We build a dataset based on the benchmarks of Chinese Grammatical Error Diagnosis (CGED) over the years (Lee et al., 2016; Rao et al., 2017, 2018, 2020b). The task of CGED seeks to identify grammatical errors in sentences written by non-native learners of Chinese (Yu et al., 2014). It covers four kinds of errors: insertion, replacement, redundancy, and ordering. The CGED dataset consists of sentence pairs, each of which includes an erroneous sentence and an error-free sentence corrected by annotators. However, these sentence pairs do not provide information about the erroneous positions, which is indispensable for character replacement and character insertion. To obtain such position information, we implement a modified character alignment algorithm (Bryant et al., 2017) tailored for the Chinese language. Through this algorithm, we obtain a dataset for insertion and replacement, both of which are suitable for examining the language learning ability of pretrained models. We leave the redundant and ordering types to future work. The statistics of our dataset are detailed in Appendix A.
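As a simplified stand-in for the modified alignment algorithm (the paper's exact procedure is not reproduced here), a character-level diff over an (erroneous, corrected) sentence pair already yields replacement and insertion positions of the kind needed above.

```python
import difflib

def extract_edits(erroneous: str, corrected: str):
    """Return (edit_type, position_in_erroneous, gold_characters) tuples."""
    edits = []
    matcher = difflib.SequenceMatcher(None, list(erroneous), list(corrected))
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "replace" and (i2 - i1) == (j2 - j1):
            edits.append(("replacement", i1, corrected[j1:j2]))
        elif op == "insert":
            edits.append(("insertion", i1, corrected[j1:j2]))
    return edits

# e.g. extract_edits("人是最重的因素", "人是最重要的因素") -> [("insertion", 4, "要")]
print(extract_edits("人是最重的因素", "人是最重要的因素"))
```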
Experiments
In this section, we first describe the BERT-style models that we examined, and then report numbers.
Chinese BERT Models
We describe the publicly available BERT models as well as the models we trained. As mentioned earlier, BERT-base (Devlin et al., 2018) 4 is trained with the standard MLM objective. 5 To make a fair comparison of CLM and WWM, we train three simple Chinese BERT baselines from scratch: 6 (1) Ours-clm: we train this model using CLM. (2) Ours-wwm: this model only differs in that it is trained with WWM. (3) Ours-clm-wwm: this model is trained with both CLM and WWM objectives. We train these three models on a text corpus of 80B characters consisting of news, wiki, and novel texts. For the WWM task, we use the public word segmentation tool TexSmart (Zhang et al., 2020) to tokenize the raw data first. The mask rate is 15%, which is commonly used in existing works. We use a maximum sequence length of 512 and the Adam optimizer (Kingma and Ba, 2014) with a batch size of 8,192. We set the learning rate to 1e-4 with a linear schedule with 5k warmup steps and 100k training steps in total. Models are trained on 64 Tesla V100 GPUs for about 7 days.
4 https://github.com/google-research/bert/blob/master/README.md
5 We do not compare with RoBERTa-wwm-ext because the released version lacks the language modeling head.
6 We also further train these models initialized from RoBERTa and BERT; results are given in Appendix B.
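The optimization setup described above can be sketched as follows. The snippet only wires up the optimizer and warmup schedule; the corpus, masking pipeline, and multi-GPU training are omitted, and "bert-base-chinese" is used purely as a stand-in configuration since the paper's baselines are trained from scratch.

```python
import torch
from transformers import BertForMaskedLM, get_linear_schedule_with_warmup

# Placeholder initialization; the paper trains its own models on an 80B-character corpus.
model = BertForMaskedLM.from_pretrained("bert-base-chinese")

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # Adam (Kingma and Ba, 2014)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=5_000,      # 5k warmup steps
    num_training_steps=100_000,  # 100k training steps in total
)
# Inside the training loop: loss.backward(); optimizer.step(); scheduler.step().
```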
Probing Results
We present the results on the two probing tasks here. Models are evaluated by Prediction@k, denoting whether the ground truth for each position is covered in the top-k predictions. From Table 1, we can draw the following conclusions. First, Ours-clm consistently performs better than Ours-wwm on probing tasks where one character needs to be replaced or inserted. We suppose this is because WWM loses the association between the characters corresponding to a word. Second, WWM is crucial for better performance when there is more than one character that needs to be corrected. This phenomenon can be observed from the results of Ours-wwm and Ours-clm-wwm, which both adopt WWM and perform better than Ours-clm. Third, pretrained with a mixture of CLM and WWM, Ours-clm-wwm performs better than Ours-wwm in the one-character setting and better than Ours-clm when more than one character needs to be handled. For each probing task, two examples with predictions produced by Ours-clm-wwm are given in Figure 2.
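A minimal sketch of the Prediction@k metric, under the assumption that each masked position is scored independently: for every position, we check whether the gold character appears among the model's top-k vocabulary predictions.

```python
import torch

def prediction_at_k(logits: torch.Tensor, gold_ids: torch.Tensor, k: int) -> float:
    """logits: (num_positions, vocab_size); gold_ids: (num_positions,) gold token ids."""
    topk = logits.topk(k, dim=-1).indices            # (num_positions, k)
    hits = (topk == gold_ids.unsqueeze(-1)).any(-1)  # True if gold is covered in top-k
    return hits.float().mean().item()

# Example: p_at_1 = prediction_at_k(masked_logits, gold_ids, k=1)
#          p_at_10 = prediction_at_k(masked_logits, gold_ids, k=10)
```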
Analysis
To further analyze how CLM and WWM affect the performance on probing tasks, we initialized our model from RoBERTa (Cui et al., 2019) and further trained baseline models. We show the performance of these models with different training steps on the insertion task. From Figure 3 (top), we can observe that as the number of training steps increases, the performance of Ours-wwm decreases.
In addition, we also evaluate the performance of trained BERT models on downstream tasks with model parameters fine-tuned. The performance of Ours-clm-wwm is comparable with Ours-wwm and Ours-clm. More information can be found in Appendix C.
Related Work
We describe related studies on Chinese BERT model and probing of BERT, respectively.
The authors of BERT (Devlin et al., 2018) provided the first Chinese BERT model, which was trained on Chinese Wikipedia data. On top of that, Cui et al. (2019) trained RoBERTa-wwm-ext with WWM on extended data. Cui et al. (2020) further trained a Chinese ELECTRA model and MacBERT, both of which do not use [MASK] tokens. ELECTRA was trained with a token-level binary classification task, which determines whether a token is the original one or an artificial replacement. In MacBERT, [MASK] tokens were replaced with synonyms and the model was trained with WWM and n-gram masking. ERNIE (Sun et al., 2019) was trained with entity masking, similar to WWM except that the tokens corresponding to an entity are masked at once. Language features are considered in more recent works: AMBERT (Zhang and Li, 2020) and Lattice-BERT (Lai et al., 2021) both take word information into consideration, and ChineseBERT (Sun et al., 2021) utilizes the pinyin and glyphs of characters.
Probing aims to examine the language understanding ability of pretrained models like BERT when model parameters are clamped, i.e., without being fine-tuned on downstream tasks. Petroni et al. (2019) study how well pretrained models learn factual knowledge. The idea is to design a natural language template with a [MASK] token, such as "the wife of Barack Obama is [MASK].". If the model predicts the correct answer "Michelle Obama", it shows that pretrained models learn factual knowledge to some extent. Similarly, Davison et al. (2019) study how pretrained models learn commonsense knowledge, and Talmor et al. (2020) examine tasks that require symbolic understanding. Wang and Hu (2020) propose to probe Chinese BERT models in terms of linguistic and world knowledge.
Conclusion
In this work, we present two Chinese probing tasks, character insertion and character replacement. We provide three simple pretrained models dubbed Ours-clm, Ours-wwm, and Ours-clm-wwm, which are pretrained with CLM, WWM, and a combination of CLM and WWM, respectively. Ours-wwm is prone to lose the association between the characters within a word, resulting in poor performance on probing tasks when one character needs to be inserted or replaced. Moreover, WWM plays a key role when two or more characters need to be corrected.
A The statistics of the dataset
B Probing results from models with different initialization
We also verify the performance of models initialized from BERT (Devlin et al., 2018) and RoBERTa (Cui et al., 2019) on the probing tasks. The results are detailed in Table 3, from which we can draw conclusions consistent with the previous section.
C The evaluation on downstream tasks
We test the performance of BERT-style models on tasks including text classification (TNEWS, IFLYTEK), sentence-pair semantic similarity (AFQMC), coreference resolution (WSC), keyword recognition (CSL), and natural language inference (OCNLI) (Xu et al., 2020a). We follow the standard fine-tuning hyper-parameters used in Devlin et al. (2018); Xu et al. (2020b); Lai et al. (2021) and report results on the development sets. The detailed results are shown in Table 4.
Figure 1: Illustrative examples of the two probing tasks. For character replacement (upper box), the highlighted character at the 7th position should be replaced with another one (En: "These are all my attention." → "These are all my ideas."). For character insertion (bottom box), one character should be inserted after the 5th position (En: "Human is the heaviest factor." → "Human is the most important factor."). Translations in English are given in parentheses.
Figure 2: Top predictions of Ours-clm-wwm for the replacement and insertion types. For each position, the probability of the top prediction is given in parentheses. The model makes the correct prediction for the top three examples; for the bottom example, the prediction also makes sense, although it differs from the ground truth.
Figure 3: Model performance at different training steps on the probing task of character insertion. The top and bottom figures give the results evaluated on spans with one and two characters, respectively.
Table 1: Probing results on character replacement and insertion (Prediction@1 / Prediction@10).

Insertion       | Length = 1   | Length = 2   | Length > 3   | Average
                | p@1   p@10   | p@1   p@10   | p@1   p@10   | p@1   p@10
BERT-base       | 76.0  97.0   | 37.2  76.0   | 14.4  50.1   | 42.5  74.4
Ours-clm        | 77.2  97.3   | 36.7  74.4   | 13.3  49.3   | 42.4  73.7
Ours-wwm        | 56.6  80.1   | 42.9  79.1   | 19.3  54.0   | 39.6  71.1
Ours-clm-wwm    | 71.3  95.1   | 42.6  80.9   | 20.6  53.0   | 44.8  76.3

Replacement     | Length = 1   | Length = 2   | Length > 3   | Average
                | p@1   p@10   | p@1   p@10   | p@1   p@10   | p@1   p@10
BERT-base       | 66.0  95.1   | 21.0  58.2   | 10.1  46.1   | 32.4  66.5
Ours-clm        | 67.4  96.6   | 20.4  58.3   |  7.4  36.9   | 31.7  63.9
Ours-wwm        | 34.8  68.2   | 25.7  65.3   |  7.4  35.2   | 22.6  56.2
Ours-clm-wwm    | 59.2  93.7   | 26.5  66.4   | 12.4  41.6   | 32.7  67.2
Figure 2 examples (inputs, gold labels, and top predictions of Ours-clm-wwm):
Character replacement:
- Input: 我没有权利破害别人的生活 (En: I have no right to destroy other people's lives.); Label: 坏; Prediction: 坏 (99.97%)
- Input: 代沟问题越来越深刻。 (En: The problem of generation gap is getting worse.); Label: 严重; Prediction: 严 (79.94%) 重 (91.85%)
Character insertion:
- Input: 吸烟不但对自己的健康 好,而且对非吸烟者带来不好的影响。 (En: Smoking is not only bad for your health, but also bad to non-smokers.); Label: 不; Prediction: 不 (99.98%)
- Input: 我下次去北京的时候,一定要吃北京烤鸭,我们在北京吃过的 是越南料理等外国的 。 (En: Next time I go to Beijing, I can not miss the Peking Duck. What we have eaten in Beijing are Vietnamese cuisine and other foreign dishes.); Label: 饭菜; Prediction: 美 (40.66%) 食 (33.55%)

Table 2: The statistics of our dataset.
Table 3: Probing results from models with different initialization (Prediction@1 / Prediction@10).

Insertion                       | Length = 1   | Length = 2   | Length > 3   | Average
Model (initialization)          | p@1   p@10   | p@1   p@10   | p@1   p@10   | p@1   p@10
BERT-base                       | 76.0  97.0   | 37.2  76.0   | 14.4  50.1   | 42.5  74.4
Ours-clm (from scratch)         | 77.2  97.3   | 36.7  74.4   | 13.3  49.3   | 42.4  73.7
Ours-wwm (from scratch)         | 56.6  80.1   | 42.9  79.1   | 19.3  54.0   | 39.6  71.1
Ours-clm-wwm (from scratch)     | 71.3  95.1   | 42.6  80.9   | 20.6  53.0   | 44.8  76.3
Ours-clm (from BERT)            | 79.2  97.7   | 40.0  77.6   | 16.2  53.5   | 45.1  76.3
Ours-wwm (from BERT)            | 61.2  87.7   | 43.4  79.4   | 20.1  56.4   | 41.6  74.5
Ours-clm-wwm (from BERT)        | 73.1  96.1   | 41.8  80.6   | 20.6  56.7   | 45.2  77.8
Ours-clm (from RoBERTa)         | 79.4  97.9   | 42.0  80.4   | 20.6  52.3   | 47.3  76.9
Ours-wwm (from RoBERTa)         | 61.4  87.9   | 44.3  79.9   | 20.1  59.3   | 41.9  75.7
Ours-clm-wwm (from RoBERTa)     | 77.3  97.5   | 46.8  83.3   | 22.5  58.7   | 48.9  79.8

Replacement                     | Length = 1   | Length = 2   | Length > 3   | Average
Model (initialization)          | p@1   p@10   | p@1   p@10   | p@1   p@10   | p@1   p@10
BERT-base                       | 66.0  95.1   | 21.0  58.2   | 10.1  46.1   | 32.4  66.5
Ours-clm (from scratch)         | 67.4  96.6   | 20.4  58.3   |  7.4  36.9   | 31.7  63.9
Ours-wwm (from scratch)         | 34.8  68.2   | 25.7  65.3   |  7.4  35.2   | 22.6  56.2
Ours-clm-wwm (from scratch)     | 59.2  93.7   | 26.5  66.4   | 12.4  41.6   | 32.7  67.2
Ours-clm (from BERT)            | 69.0  96.9   | 24.5  64.7   |  8.4  47.3   | 34.0  69.6
Ours-wwm (from BERT)            | 40.6  81.6   | 27.2  67.9   |  8.4  39.4   | 25.4  63.0
Ours-clm-wwm (from BERT)        | 61.6  94.9   | 27.6  67.8   | 10.4  47.0   | 33.2  69.9
Ours-clm (from RoBERTa)         | 69.7  96.8   | 26.7  68.0   | 12.1  51.7   | 36.2  72.2
Ours-wwm (from RoBERTa)         | 41.7  80.9   | 28.2  68.2   | 12.4  47.2   | 27.4  65.4
Ours-clm-wwm (from RoBERTa)     | 67.3  96.7   | 28.4  69.7   | 15.7  54.2   | 37.1  73.5

Table 4: Evaluation results on the dev set of each downstream task; model parameters are fine-tuned. Columns: Model, TNEWS, IFLYTEK, AFQMC, OCNLI, WSC, CSL, Average.
2 In this work, wordpiece and subword are interchangeable. 3 When we describe Chinese tokens, "character" means 字, which is the atomic unit, and "word" means 词, which may consist of multiple characters.
Christopher Bryant, Mariano Felice, and Edward Briscoe. 2017. Automatic annotation and evaluation of error types for grammatical error correction. Association for Computational Linguistics.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for Chinese natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 657-668, Online. Association for Computational Linguistics.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for Chinese BERT. arXiv preprint arXiv:1906.08101.
Joe Davison, Joshua Feldman, and Alexander M. Rush. 2019. Commonsense knowledge mining from pretrained models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1173-1178.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Yuxuan Lai, Yijia Liu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2021. Lattice-BERT: Leveraging multi-granularity representations in Chinese pre-trained language models. arXiv preprint arXiv:2104.07204.
Lung-Hao Lee, Gaoqi Rao, Liang-Chih Yu, Endong Xun, Baolin Zhang, and Li-Ping Chang. 2016. Overview of NLP-TEA 2016 shared task for Chinese grammatical error diagnosis. In Proceedings of the 3rd Workshop on Natural Language Processing Techniques for Educational Applications (NLPTEA2016), pages 40-48, Osaka, Japan. The COLING 2016 Organizing Committee.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? arXiv preprint arXiv:1909.01066.
Gaoqi Rao, Qi Gong, Baolin Zhang, and Endong Xun. 2018. Overview of NLPTEA-2018 share task Chinese grammatical error diagnosis. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications, pages 42-51, Melbourne, Australia. Association for Computational Linguistics.
Gaoqi Rao, Erhong Yang, and Baolin Zhang. 2020a. Overview of NLPTEA-2020 shared task for Chinese grammatical error diagnosis. In Proceedings of the 6th Workshop on Natural Language Processing Techniques for Educational Applications, pages 25-35.
Gaoqi Rao, Erhong Yang, and Baolin Zhang. 2020b. Overview of NLPTEA-2020 shared task for Chinese grammatical error diagnosis. In Proceedings of the 6th Workshop on Natural Language Processing Techniques for Educational Applications, pages 25-35, Suzhou, China. Association for Computational Linguistics.
Gaoqi Rao, Baolin Zhang, Endong Xun, and Lung-Hao Lee. 2017. IJCNLP-2017 task 1: Chinese grammatical error diagnosis. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 1-8, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223.
Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu, and Jiwei Li. 2021. ChineseBERT: Chinese pretraining enhanced by glyph and pinyin information. arXiv preprint arXiv:2106.16038.
Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2020. oLMpics - on what language model pre-training captures. Transactions of the Association for Computational Linguistics, 8:743-758.
Zhiruo Wang and Renfen Hu. 2020. Intrinsic knowledge evaluation on Chinese language models. arXiv preprint arXiv:2011.14277.
C. Wood and V. Connelly. 2009. Contemporary perspectives on reading and spelling.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, and Zhenzhong Lan. 2020a. CLUE: A Chinese language understanding evaluation benchmark. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4762-4772, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, et al. 2020b. CLUE: A Chinese language understanding evaluation benchmark. arXiv preprint arXiv:2004.05986.
Liang-Chih Yu, Lung-Hao Lee, and Liping Chang. 2014. Overview of grammatical error diagnosis for learning Chinese as a foreign language. In Proceedings of the 1st Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA'14), pages 42-47.
Haisong Zhang, Lemao Liu, Haiyun Jiang, Yangming Li, Enbo Zhao, Kun Xu, Linfeng Song, Suncong Zheng, Botong Zhou, Jianchen Zhu, Xiao Feng, Tao Chen, Tao Yang, Dong Yu, Feng Zhang, Zhanhui Kang, and Shuming Shi. 2020. TexSmart: A text understanding system for fine-grained NER and enhanced semantic analysis. arXiv preprint arXiv:2012.15639.
Xinsong Zhang and Hang Li. 2020. AMBERT: A pre-trained language model with multi-grained tokenization. arXiv preprint arXiv:2008.11869.
| [
"https://github.com/google-research/"
] |
[
"PALI-NLP at SemEval-2022 Task 4: Discriminative Fine-tuning of Transformers for Patronizing and Condescending Language Detection",
"PALI-NLP at SemEval-2022 Task 4: Discriminative Fine-tuning of Transformers for Patronizing and Condescending Language Detection"
] | [
"Dou Hu \nLife Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}\n",
"Mengyuan Zhou \nLife Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}\n",
"Xiyang Du \nLife Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}\n",
"Mengfei Yuan \nLife Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}\n",
"Meizhi Jin \nLife Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}\n",
"Lianxin Jiang \nLife Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}\n",
"Yang Mo \nLife Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}\n",
"Xiaofeng Shi \nLife Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}\n",
"Ping An \nLife Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}\n"
] | [
"Life Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}",
"Life Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}",
"Life Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}",
"Life Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}",
"Life Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}",
"Life Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}",
"Life Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}",
"Life Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}",
"Life Insurance Company of China, Ltd. {HUDOU470\nZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}"
] | [] | Patronizing and condescending language (PCL) has a large harmful impact and is difficult to detect, both for human judges and existing NLP systems. At SemEval-2022 Task 4, we propose a novel Transformerbased model and its ensembles to accurately understand such language context for PCL detection. To facilitate comprehension of the subtle and subjective nature of PCL, two fine-tuning strategies are applied to capture discriminative features from diverse linguistic behaviour and categorical distribution. The system achieves remarkable results on the official ranking, including 1st in Subtask 1 and 5th in Subtask 2. Extensive experiments on the task demonstrate the effectiveness of our system and its strategies. | 10.18653/v1/2022.semeval-1.43 | [
"https://arxiv.org/pdf/2203.04616v2.pdf"
] | 247,318,785 | 2203.04616 | 36f7da6d848070f74b48bdf2accace00a66202a3 |
PALI-NLP at SemEval-2022 Task 4: Discriminative Fine-tuning of Transformers for Patronizing and Condescending Language Detection
Dou Hu
Life Insurance Company of China, Ltd. {HUDOU470
ZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}
Mengyuan Zhou
Life Insurance Company of China, Ltd. {HUDOU470
ZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}
Xiyang Du
Life Insurance Company of China, Ltd. {HUDOU470
ZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}
Mengfei Yuan
Life Insurance Company of China, Ltd. {HUDOU470
ZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}
Meizhi Jin
Life Insurance Company of China, Ltd. {HUDOU470
ZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}
Lianxin Jiang
Life Insurance Company of China, Ltd. {HUDOU470
ZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}
Yang Mo
Life Insurance Company of China, Ltd. {HUDOU470
ZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}
Xiaofeng Shi
Life Insurance Company of China, Ltd. {HUDOU470
ZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}
Ping An
Life Insurance Company of China, Ltd. {HUDOU470
ZHOUMENGYUAN425, DUXIYANG037, YUANMENGFEI854, JINMEIZHI005, JIANGLIANXIN769, MOYANG853, SHIXIAOFENG309}
PALI-NLP at SemEval-2022 Task 4: Discriminative Fine-tuning of Transformers for Patronizing and Condescending Language Detection
Patronizing and condescending language (PCL) has a large harmful impact and is difficult to detect, both for human judges and existing NLP systems. At SemEval-2022 Task 4, we propose a novel Transformer-based model and its ensembles to accurately understand such language context for PCL detection. To facilitate comprehension of the subtle and subjective nature of PCL, two fine-tuning strategies are applied to capture discriminative features from diverse linguistic behaviour and categorical distribution. The system achieves remarkable results on the official ranking, including 1st in Subtask 1 and 5th in Subtask 2. Extensive experiments on the task demonstrate the effectiveness of our system and its strategies.
Introduction
"Don't worry, I know this is a mistake you usually make, we all make it sometimes, but I am bringing you a solution.", which is a typical example of Patronizing and Condescending Language (PCL) (Giles et al., 1993;Huckin, 2002), shows a superior attitude and apparent kindness towards others, while is generally expressed unconsciously. The impact of PCL can potentially be very harmful, as it feeds inequalities and routinizes discrimination (Ng, 2007), especially if it is geared towards vulnerable communities in the media. If we are able to detect and identify when we are condescending or patronizing towards others, a corrective action (e.g., a more inclusive message) could be taken for a more responsible communication.
Recently, some works (Wang and Potts, 2019; Sap et al., 2020) on PCL have gradually emerged in the NLP community. Remarkably, Pérez-Almendros et al. (2020) have shown that general pre-trained language models (Devlin et al., 2019; Liu et al., 2019) can achieve nontrivial performance. However, the behaviour of PCL is usually more unconscious, subtler, and more subjective than other harmful types of discourse that are widely studied, i.e., hate speech (Basile et al., 2019), offensive language (Zampieri et al., 2019), intended sarcasm (Du et al., 2022), fake news (Zhang et al., 2021b) and rumors. These characteristics make PCL detection a difficult challenge, both for human judges and existing NLP systems.
To address this, we propose a novel Transformer-based model BERT-PCL (and its ensembles) with two discriminative fine-tuning strategies, to accurately understand such language context for PCL detection. The two strategies are grouped layer-wise learning rate decay (Grouped LLRD) and weighted random sampler (WRS); both are beneficial for task-adaptive fine-tuning based on language models.
A brief description of these two strategies is as follows: 1) As different layers capture different types of information (Yosinski et al., 2014), Grouped LLRD, a variant of LLRD (Howard and Ruder, 2018; Zhang et al., 2021a), is applied to group the hidden layers of the pre-trained Transformer (Vaswani et al., 2017) into different sets and apply different learning rates to each. We can then make full use of the different layers to capture more diverse and fine-grained linguistic features, which boosts understanding of the subtle and subjective nature of PCL. 2) It is quite common that positive samples (patronizing or condescending) are far fewer than negative ones, reflecting usage rates of PCL in public forums (Wang and Potts, 2019; Pérez-Almendros et al., 2020). The positive samples, however, are more important when detecting PCL, due to their harmful impact. To deal with such imbalanced-class scenarios, we introduce WRS to place more emphasis on the minority classes. Under this strategy, our classifier can capture discriminative features from the categorical distribution and detect whether a paragraph contains PCL in an unbiased manner.
At SemEval-2022 Task 4 (Pérez-Almendros et al., 2022), our proposed system achieves 1st in Subtask 1 and 5th in Subtask 2 on the evaluation leaderboard. 1 Meanwhile, in the post-evaluation phase, we further verified the results of the system on the test set of both subtasks. For Subtask 1, the single model BERT-PCL and its ensembles obtained 63.69% and 65.41% in terms of F1 of the positive class, respectively. For Subtask 2, the single model BERT-PCL and its ensembles obtained 43.28% and 45.66% in terms of macro-average F1, respectively. Moreover, a series of experiments are conducted on the two subtasks of PCL detection. Results consistently demonstrate that our model and its ensembles significantly outperform the comparison methods, and confirm the effectiveness of the two strategies used in our system.
Background
Task and Data Description
The aim of SemEval-2022 Task 4 (Pérez-Almendros et al., 2022) is to identify PCL, and to categorize the linguistic techniques (categories) used to express it, specifically when referring to vulnerable communities in the media. This challenge is divided into two subtasks, each corresponding to a subset of the Don't Patronize Me! (DPM) dataset (Pérez-Almendros et al., 2020). The 10,469 annotated paragraphs (i.e., sentences in context) from the DPM corpus are used as training data, where each paragraph mentions one or several predefined vulnerable communities. These paragraphs are collected using a keyword-based strategy and cover English-language news sources from 20 different countries. A short description of the two subtasks and the training data is as follows:
• Subtask 1: Binary classification. Given a paragraph, a system must predict whether or not it contains any form of PCL. The training set consists of 10,469 paragraphs annotated with a label ranging from 0 to 4. Labels 2, 3, and 4 denote positive examples (condescending or patronizing) of PCL, and the remaining labels denote negatives.
• Subtask 2: Multi-label classification. Given a paragraph, a system must identify the categories of PCL that are present. The 993 unique paragraphs (positive examples) in the training set, totaling 2,760 instances of PCL, are labeled with one or more PCL categories: Unbalanced power relations, Shallow solution, Presupposition, Authority voice, Metaphor, Compassion, The poorer, the merrier.
In addition, the test set for the evaluation phase contains around 4,000 manually annotated paragraphs with the PCL annotation scheme. More details about the task can be found on the competition page 2 .
Related Work
Harmful language detection/recognition has been widely studied in various forms of discourse, such as hate speech (Basile et al., 2019), offensive language (Zampieri et al., 2019), intended sarcasm (Du et al., 2022), fake news (Zhang et al., 2021b) and rumors. Unlike these works, which generally focus on explicit, aggressive and flagrant phenomena, the study of patronizing and condescending language (PCL) (Giles et al., 1993; Huckin, 2002; Chouliaraki, 2006; Margić, 2017) was almost ignored by the NLP community until recently.
To encourage more research on PCL, Wang and Potts (2019) present a condescension detection task and provide the TALKDOWN dataset of comment-reply pairs from Reddit. Besides, Pérez-Almendros et al. (2020) introduce the Don't Patronize Me! dataset and the challenge of PCL detection towards vulnerable communities (e.g., refugees, homeless people, poor families). These works establish several advanced baselines using pre-trained language models (Devlin et al., 2019; Liu et al., 2019), and suggest that detecting such language is a challenging task both for humans and NLP systems due to its subtle and subjective nature.
System Overview
In this section, we review our system adopted in SemEval-2022 Task 4, where we design a novel Transformer-based model BERT-PCL (and its ensembles) with two discriminative fine-tuning strategies for both subtasks of PCL detection.
Model Architecture
BERT (Devlin et al., 2019) uses masked language modeling to enable pretrained deep bidirectional representations, and can be fine-tuned to create task-specific models with powerful performance. Inspired by this, our system utilizes Transformers (Vaswani et al., 2017) to learn contextual representations of the input sentence under the BERT-like architecture.
Formally, given an input token sequence $x_{i1}, \ldots, x_{iN}$, where $x_{ij}$ refers to the $j$-th token in the $i$-th input sample and $N$ is the maximum sequence length, the model learns to generate the contextual representation of the input token sequence:

$$h_i = \mathrm{BERT}([\mathrm{CLS}], x_{i1}, \ldots, x_{iN}, [\mathrm{SEP}]), \qquad (1)$$

where [CLS] and [SEP] are special tokens, usually at the beginning and end of each sequence, respectively. $h_i$ indicates the hidden representation of the $i$-th input sample, computed from the representation of the [CLS] token in the last layer of the encoder.
PCL Detection
Subtask 1: Binary Classification
Subtask 1, a binary classification task, aims to predict whether or not a paragraph contains any form of PCL. After encoding, we apply a fully connected layer with the Softmax function to predict whether or not the input contains any form of PCL:
$$\hat{y}_i = \mathrm{Softmax}(W h_i + b), \qquad (2)$$

where $W$ and $b$ are trainable parameters. We leverage the cross-entropy loss to optimize the system. The objective function of Subtask 1 is defined as:

$$\mathcal{L} = -\frac{1}{N}\sum_{i}\big(y_i \log(\hat{y}_i) + (1 - y_i)\log(1 - \hat{y}_i)\big), \qquad (3)$$

where $y_i$ is the ground-truth label of PCL.
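As a rough sketch of Eqs. (1)-(3) (an assumed layout, not the authors' released implementation), the encoder's first-token representation can be fed through a dropout layer and a linear classification head trained with cross-entropy.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class PCLBinaryClassifier(nn.Module):
    """Minimal binary PCL classifier: pre-trained encoder + softmax head over [CLS]."""
    def __init__(self, name="roberta-large", num_labels=2, dropout=0.4):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        h = out.last_hidden_state[:, 0]      # first ([CLS]/<s>) token representation, Eq. (1)
        return self.classifier(self.dropout(h))  # logits; softmax applied inside the loss

# Cross-entropy over the two classes corresponds to Eq. (3):
# loss = nn.CrossEntropyLoss()(logits, labels)
```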
Subtask 2: Multi-Label Classification
Subtask 2 is a multi-label classification task. Its goal is to determine which PCL categories a paragraph expresses. After encoding, we also apply a fully connected layer with the sigmoid function to predict the probability of each PCL class:
$$\hat{y}_i^c = \sigma(W^c h_i + b^c), \qquad (4)$$

where $\sigma$ is the sigmoid function, and $W^c$ and $b^c$ are trainable parameters. We use the binary cross-entropy (BCE) loss (Bengio et al., 2013) for the multi-label classification task, denoted as:

$$\mathcal{L} = -\frac{1}{N}\sum_{i}\sum_{c=1}^{M}\big[y_i^c \log(\hat{y}_i^c) + (1 - y_i^c)\log(1 - \hat{y}_i^c)\big], \qquad (5)$$

where $M$ is the number of classes and $\hat{y}_i^c$ indicates the predicted probability that the $i$-th sample belongs to the $c$-th class.
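Continuing the sketch above for Subtask 2 (again an assumption about the exact wiring): the same encoder representation is paired with a 7-way sigmoid head, and a numerically stable combined sigmoid/BCE loss realizes Eqs. (4) and (5).

```python
import torch.nn as nn

NUM_CATEGORIES = 7                                   # the seven PCL categories
multilabel_head = nn.Linear(1024, NUM_CATEGORIES)    # 1024 = roberta-large hidden size
multilabel_loss = nn.BCEWithLogitsLoss()             # sigmoid (Eq. 4) + BCE (Eq. 5)

# logits = multilabel_head(h)                        # h: (batch, 1024) [CLS] representations
# loss = multilabel_loss(logits, targets.float())    # targets: (batch, 7) multi-hot labels
```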
Fine-tuning Strategies
For discriminative fine-tuning of the model, we introduce two strategies to boost the accurate understanding of PCL context, namely grouped layerwise learning rate decay (Grouped LLRD) and weighted random sampler (WRS).
Grouped LLRD
As different layers capture different types of information (Yosinski et al., 2014), they should be fine-tuned to different extents. Therefore, instead of using the same learning rate for all hidden layers of the Transformer, we tune each layer with different learning rates. Layer-wise learning rate decay (LLRD) (Howard and Ruder, 2018;Zhang et al., 2021a) is a popular fine-tuning strategy that applies higher learning rates for top layers and lower learning rates for bottom layers. Inspired by this, we group layers into different sets and apply different learning rates to each, denoted as Grouped LLRD.
Formally, we split all hidden layers of the Transformer into $G$ sets, with the embeddings attached to the first set. The parameters of the groups are denoted as $\{\theta^1, \ldots, \theta^G\}$, where $\theta^g$ refers to the $g$-th group, and the corresponding learning rates are denoted as $\{\eta^1, \ldots, \eta^G\}$, where $\eta^g$ indicates the learning rate of the $g$-th group. To capture discriminative features, a multiplicative decay rate $\lambda$ is used to change the relative value of the initial learning rates of adjacent groups in a controlled fashion. At time step $t$, the update of the parameters $\theta$ is computed by:

$$\theta_t^g = \theta_{t-1}^g - \eta^g \cdot \nabla_{\theta^g} J(\theta), \qquad (6)$$

where $\nabla_{\theta^g} J(\theta)$ is the gradient with regard to the model's objective function. The learning rate of the lower group is set as $\eta^{g-1} = \eta^g / \lambda$ during fine-tuning to decrease the learning rate group by group. In addition, as in LLRD, we use a learning rate that is slightly higher than that of the top hidden group for the pooler head and classifier.
Under the above setting, we can capture more diverse and fine-grained linguistic features by flexibly optimizing different hidden layers of the Transformer. It can boost understanding of PCL's subtle and subjective nature.
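Grouped LLRD amounts to building per-group optimizer parameter groups. The sketch below assumes G = 3 groups and a generic encoder layout (module names and the division into thirds are illustrative assumptions, not the authors' released code).

```python
import torch

def grouped_llrd_param_groups(embeddings, encoder_layers, head,
                              base_lr=1e-5, lam=1.6, weight_decay=0.01):
    """Lower/median/higher layer sets get learning rates base_lr/lam, base_lr,
    base_lr*lam; the pooler/classifier head gets a slightly higher rate still."""
    n = len(encoder_layers)
    thirds = [encoder_layers[:n // 3],
              encoder_layers[n // 3: 2 * n // 3],
              encoder_layers[2 * n // 3:]]
    lrs = [base_lr / lam, base_lr, base_lr * lam]

    groups = [{"params": list(embeddings.parameters()),   # embeddings join the lowest group
               "lr": lrs[0], "weight_decay": weight_decay}]
    for layer_set, lr in zip(thirds, lrs):
        params = [p for layer in layer_set for p in layer.parameters()]
        groups.append({"params": params, "lr": lr, "weight_decay": weight_decay})
    groups.append({"params": list(head.parameters()),     # classification head
                   "lr": lrs[-1] * lam, "weight_decay": weight_decay})
    return groups

# e.g., with the classifier sketched earlier (attribute names are assumptions):
# optimizer = torch.optim.AdamW(grouped_llrd_param_groups(
#     model.encoder.embeddings, list(model.encoder.encoder.layer), model.classifier))
```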
Weighted Random Sampler
The PCL dataset is highly imbalanced, which causes problems for training the above models. To alleviate this imbalanced classes problem, we use a Weighted Random Sampler (WRS) to place more emphasis on the minority classes. The samples are weighted and the probability of each sample being selected is determined by its relative weight.
For both subtasks, the sampling weight of the i-th sample is computed by:
$$s_i = \begin{cases} 1/\sqrt{\kappa_p}, & \text{if the } i\text{-th sample contains PCL,} \\ 1/\sqrt{\kappa_n}, & \text{otherwise,} \end{cases} \qquad (7)$$

where $\kappa_p$ and $\kappa_n$ refer to the ratios of positive and negative examples of PCL in the training data, respectively. The elements are then sampled based on these weights, and the number of drawn samples is equal to the length of the training set. During training, the sampler tends to oversample the minority positive examples, so positive and negative classes are selected with roughly equal probability, and the classifier can capture discriminative features from the categorical distribution in an unbiased manner.
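Eq. (7) maps directly onto a weighted random sampler. The sketch below (assumed, with a 0/1 label tensor over the training set) uses PyTorch's WeightedRandomSampler with as many draws as training examples, sampled with replacement.

```python
import torch
from torch.utils.data import WeightedRandomSampler

def make_wrs(labels: torch.Tensor) -> WeightedRandomSampler:
    """labels: 0/1 tensor over the training set (1 = contains PCL)."""
    n = len(labels)
    kappa_p = (labels == 1).float().mean().item()   # ratio of positive examples
    kappa_n = (labels == 0).float().mean().item()   # ratio of negative examples
    weights = torch.where(labels == 1,
                          torch.full((n,), 1.0 / kappa_p ** 0.5),
                          torch.full((n,), 1.0 / kappa_n ** 0.5))
    # num_samples equals the length of the training set, drawn with replacement.
    return WeightedRandomSampler(weights.double(), num_samples=n, replacement=True)

# loader = DataLoader(train_dataset, batch_size=4, sampler=make_wrs(train_labels))
```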
Ensemble
For the final submissions, we apply a voter-based fusion technique (Morvant et al., 2014) to ensemble several BERT-PCL models. Concretely, we train the proposed BERT-PCL with five different random seeds. Then, we select the top-3 models according to the average result of k-fold cross-validation on the training data. Finally, the predictions of the three optimal models on the test set are combined by voting to obtain the final submission.
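A minimal sketch of the hard-voting fusion step (the exact tie-breaking and label format of the submission pipeline are assumptions): given the per-model class predictions on the test set, the majority class is kept per example.

```python
import numpy as np

def majority_vote(preds: np.ndarray) -> np.ndarray:
    """preds: (num_models, num_examples) integer class ids; returns the per-example
    majority class (ties resolved towards the lower class id)."""
    num_classes = int(preds.max()) + 1
    counts = np.stack([(preds == c).sum(axis=0) for c in range(num_classes)])
    return counts.argmax(axis=0)

# e.g. majority_vote(np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]])) -> array([1, 1, 1])
```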
Experimental Setup
Comparison Methods
We compare BERT-PCL and its ensembles with the following several methods:
• Random is based on random guessing, choosing each class/label with an equal probability.
• BERT (Devlin et al., 2019) is a language model pre-trained in a self-supervised fashion based on deep bidirectional transformers. We use bert-base-uncased 3 to initialize BERT.
3 https://huggingface.co/
• ALBERT (Lan et al., 2020) presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. We use albert-large-v2 3 to initialize ALBERT.
• ERNIE 2.0 (Sun et al., 2020) is a continual pre-training framework, which builds and learns incrementally pre-training tasks through constant multi-task learning. We use nghuyong/ernie-2.0-large-en 3 to initialize ERNIE 2.0.
• RoBERTa (Liu et al., 2019) optimizes the training procedure of BERT and removes the next sentence predict objective when pretraining. We use roberta-large 3 to initialize RoBERTa.
Implementation Details
For both subtasks, stratified k-fold cross-validation (Kohavi, 1995; Sechidis et al., 2011) is performed to split the limited training data into 5 folds. We choose the optimal hyperparameter values based on the average result over the validation sets of all folds, and evaluate the performance of the systems on the test data. BERT-PCL is initialized with the roberta-large 3 parameters, due to its nontrivial and consistent performance on both subtasks. Following Pérez-Almendros et al. (2020, 2022), the evaluation metrics are F1 over the positive class for Subtask 1 and macro-average F1 for Subtask 2. We group layers into 3 groups, i.e., G = 3. The learning rates for layers in the lower, median, and higher groups are set to η/λ, η, and η·λ, respectively, where η is set to 1e-5 and λ is a hyperparameter searched from {0.6, 1.6, 2.6, 3.6, 4.6, 5.6, 6.6}. For the training of BERT-PCL, the optimal value of λ is 1.6 for Subtask 1 and 3.6 for Subtask 2. The experiments are conducted with a batch size of 4, a maximum length of 250, and a dropout rate of 0.4. The number of epochs is set to 10 and the maximum patience of early stopping is set to 50 batches. The AdamW optimizer (Kingma and Ba, 2015) is used with a weight decay of 0.01. A cosine annealing schedule (Loshchilov and Hutter, 2017) is applied to decay the learning rate, with a linear warmup for the first 10% of steps.
Table 1 (caption): The results are reported in terms of the precision (P), recall (R) and F1 score of the positive class. All compared pre-trained models are fine-tuned on the task dataset. † indicates the results on the official ranking.

To effectively utilize the country term and search keyword term corresponding to each paragraph in the corpus, we concatenate these terms with the original paragraph as the input sequence. In the implementation, two special token pairs (i.e., <e> and </e>) are introduced as the term boundary.
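As a sketch of this input construction (the exact ordering and spacing are assumptions; only the <e>/</e> boundary markers are given in the text), the terms could be concatenated as follows.

```python
def build_input(paragraph: str, country: str, keyword: str) -> str:
    """Hypothetical concatenation of the country and keyword terms with the paragraph,
    using <e> ... </e> as term boundaries."""
    return f"<e> {country} </e> <e> {keyword} </e> {paragraph}"

# e.g. build_input("Alexis and her family decided to donate ...", "us", "in-need")
```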
Result and Discussion
Overall Result
The overall results for both subtasks are summarized in Tables 1 and 2. Unsurprisingly, all pretrained models clearly outperform the random baseline. The proposed BERT-PCL and its ensembles (i.e., Ensemble 1.0 and Ensemble 2.0) consistently obtain better performance than the comparison methods on both subtasks. Specifically, BERT-PCL gains 1.09% and 5.39% absolute improvements for Subtask 1 and 2, respectively. These results show the superiority of our models. In Table 1, both Ensemble 1.0 and Ensemble 2.0 are fused from three optimal full BERT-PCL models with different seeds and obtain better performance than BERT-PCL in Subtask 1. In Table 2, Ensemble 1.0 is fused from the three optimal BERT-PCL models trained without WRS, since we found performance degrades when WRS is applied according to the weights of category labels in Subtask 2. Differently, in the post-evaluation phase, we perform WRS according to the weights of positive samples (patronizing or condescending) and fuse three optimal full BERT-PCL models as Ensemble 2.0. As shown in Table 2, Ensemble 2.0 obtains better performance than BERT-PCL and Ensemble 1.0 in Subtask 2.
Then, we qualitatively analyze the performance of BERT-PCL and typical baselines on the validation set for both subtasks. The results are illustrated in Figure 1. From the figure, BERT-PCL consistently obtains the best performance on the validation set for both subtasks, which again confirms the superiority of the proposed method.

Figure 1: Results on the validation set for both subtasks. The box displays the distribution of results, where the green triangle indicates the mean, the green line and two blue lines represent the 25%, 50%, and 75% quartiles, respectively, and the black lines are the maximum and minimum values. For Subtask 1, we report the F1 score of the positive class; for Subtask 2, we report the macro-average F1 score.

Figure 2: Results of the ablation study for PCL detection. For Subtask 1, we report the F1 score of the positive class; for Subtask 2, we report the macro-average F1 score.
Ablation Study
We conduct ablation studies by removing key components of BERT-PCL: 1) -w/o Grouped LLRD refers to removing Grouped LLRD. 2) -w/o WRS refers to removing WRS. 3) -w/o Grouped LLRD -w/o WRS refers to removing both Grouped LLRD and WRS, degenerating to RoBERTa. Figure 2 shows the results of the ablation studies on the two subtasks of PCL detection. The full model yields the best performance on both subtasks. When removing either Grouped LLRD or WRS, the results of the variants decline significantly on both subtasks. Specifically, when only removing Grouped LLRD, the model suffers 2.50% and 2.14% degradation of performance in Subtask 1 and 2, respectively. When only removing WRS, the results decline by 1.37% and 3.74% in terms of F1 scores in Subtask 1 and 2, respectively. The above results consistently indicate the effectiveness of the two components. When removing both components, the performance also decreases on both subtasks. Note that -w/o Grouped LLRD -w/o WRS achieves a better F1 score of the positive class than -w/o WRS or -w/o Grouped LLRD in Subtask 1. This can be explained by the fact that ignoring Grouped LLRD limits the exploration of diverse features of positive samples, and further removing WRS may magnify this limitation due to the imbalanced class problem. Therefore, the model with two modules removed yields slightly better results than the ablation models with only one module removed. Different from Subtask 1, Subtask 2 is a multi-label classification problem and we report the macro-average F1 score. Using Grouped LLRD can capture diverse features of each category label in positive samples, and WRS according to the weights of positive and negative samples further promotes the model's attention to positive samples. Hence, removing both modules obtains the worst performance in Subtask 2.

Table 2 (caption): Results for the problem of categorizing PCL, viewed as a paragraph-level multi-label classification problem (Subtask 2). We report the per-class F1 and macro-average F1. All compared pre-trained models are fine-tuned on the task dataset.
Parameter Analysis
In this part, we explore the performance of BERT-PCL against different λ in Grouped LLRD. Bottom groups often encode more general and broad-based information, while top groups closer to the output encode information more localized and specific to the task on hand. In our model, a suitable value λ can control and balance these different layers of the Transformer to capture different kinds of features from diverse linguistic behaviour.
The results are illustrated in Figure 3. Based on k-fold cross validation on training data, we find the local optimum of λ, 1.6 for Subtask 1 and 3.6 for Subtask 2, and the resulting model consistently performed excellently on the test set. Under the optimal setting, different layers in the model can capture more diverse and fine-grained linguistic features, enhancing the understanding of the subtle and subjective nature of PCL. However, larger λ would make the model overfit to small datasets and suffer catastrophic forgetting during fine-tuning. Hence, the performance degrades as λ increases.
It is worth noting that in when λ is up to 5.6 for Subtask 2, the model achieves suboptimal results on the validation set but performs exceptionally There is a saying that goes "A friend in need is a friend indeed. " This means, a good friend is the one who rescues a friend trapped in unsolved problems.
neg. neg. neg. pos.
3 "The government is implementing several schemes that would change the economic position of poor families," she added.
pos. neg. neg. neg.
4
Alexis and her family decided to donate more than 400 of those presents to children in need.
neg. pos. pos. pos. well on the test set. This may be because the model overfits some redundant features of the corpus. Table 3 and Table 4 show several typical examples in the training set from Subtask 1 and 2, respectively. Their gold labels and predictions by BERT-PCL, RoBERTa and ERNIE 2.0 are presented in the corresponding columns. In Table 3, the first case is correctly classified by BERT-PCL, while is misclassified by other methods. We can easily observe that this example has the characteristics of Unbalanced power relations and Authority voice, and the language expression of the latter is more subtle. Unlike other methods, BERT-PCL can capture the linguistic phenomena of PCL through a discriminative fine-tuning process, and thus detect them correctly. For the second, BERT-PCL and RoBERTa can accurately identify the positive paragraphs, using the sentence representation ability learned by the pre-trained model. The latter two examples are consistently predicted as false negatives and false positives by all methods, respectively. We notice that both paragraphs have been annotated by two human annotators as borderline PCL. Unsurprisingly, these methods also struggle to detect such cases.
Case Study
As seen in Table 4, only BERT-PCL can correctly determine fine-grained PCL categories of the first two cases, which again illustrates the superiority of our method. It can be noticed that the third example has three PCL sub-categories (i.e., unb., pres., met.) with a certain internal correlation, and the gold label (i.e., merr.) of the fourth example appears too little in the training set. These phenomena increase the difficulty of identifying the two examples, which leads to wrong predicted labels. We believe that identifying multiple related sub-categories simultaneously and controlling the imbalance of positive PCL labels are urgent challenges for Subtask 2.
Conclusion
In this paper, we propose an advanced BERT-like model and its ensembles to accurately understand and detect patronizing and condescending language. Based on the pre-trained Transformer, we apply two fine-tuning strategies to capture discriminative features from diverse linguistic behaviour and categorical distribution. At SemEval-2022 Task 4, our system achieves 1st in Subtask 1 and 5th in Subtask 2 on the official ranking. Extensive experiments demonstrate the effectiveness and superiority of the proposed system and its strategies.
Table 2 (caption, continued): † indicates the results on the official ranking. The considered seven categories are Unbalanced power relations (unb.), Shallow solution (shal.), Presupposition (pres.), Authority voice (auth.), Metaphor (met.), Compassion (comp.) and The poorer, the merrier (merr.).
Figure 3: Results against different values of the hyperparameter λ in Grouped LLRD on both the test and validation sets. We report the F1 score of the positive class for Subtask 1, and the macro-average F1 score for Subtask 2.
Table 3: Case studies in Subtask 1: Binary Classification. The table shows four examples of paragraphs, their gold labels and the predictions of three methods (BERT-PCL, RoBERTa and ERNIE 2.0). pos. means the positive class of PCL, i.e., instances containing PCL; likewise, neg. means the negatives.

No. | Paragraph | Gold | BERT-PCL | RoBERTa | ERNIE 2.0
1 | "Jesus is the Master Feminist because he championed the cause of women," she said. | pos. | pos. | neg. | neg.
2 | There is a saying that goes "A friend in need is a friend indeed." This means, a good friend is the one who rescues a friend trapped in unsolved problems. | neg. | neg. | neg. | pos.
3 | "The government is implementing several schemes that would change the economic position of poor families," she added. | pos. | neg. | neg. | neg.
4 | Alexis and her family decided to donate more than 400 of those presents to children in need. | neg. | pos. | pos. | pos.

Table 4 example paragraphs (Subtask 2 case studies):
1 | Through Gawad Kalinga, Meloto has proven to be a key player in the housing industry, helping provide decent homes and sustainable livelihood to the marginalized and homeless Filipinos.
2 | Pope Francis will visit a tiny Italian island to greet refugees and immigrants, pray for those who have lost their lives at sea and call for greater solidarity.
3 | In South Africa, education is a right and not a privilege, but an unfavourable background can unconsciously infringe on this right. | Gold: unb., pres., met. | BERT-PCL: unb., met. | RoBERTa: unb., auth. | ERNIE 2.0: unb.
4 | Thankfully, while Krishna Tulasi can't entirely escape from the trope of disabled persons with hearts of gold, it manages to do better than many previous films with disabled protagonists. | Gold: merr.
Table 4 :
4Case studies in Subtask 2: Multi-Label Classification. The table shows four examples of paragraphs, their gold labels and predictions by three methods (BERT-PCL, RoBERTa and ERNIE 2.0). The categories stand for: Unbalanced power relations (unb.), Shallow solution (shal.), Presupposition (pres.), Authority voice (auth.), Metaphor (met.), Compassion (comp.) and The poorer, the merrier (merr.).
https://sites.google.com/view/ pcl-detection-semeval2022/ranking
https://competitions.codalab.org/ competitions/34344
Acknowledgements

This research is supported by Ping An Life Insurance. All the work in this paper was conducted during the SemEval-2022 competition. Last but not least, we thank the SemEval-2022 organizers for their effort in preparing the challenge, and the reviewers for their insightful and constructive comments.
| [] |
[
"DDXPlus: A new Dataset for Medical Automatic Diagnosis",
"DDXPlus: A new Dataset for Medical Automatic Diagnosis"
] | [
"Arsene Fansi Tchango ",
"Zhi Wen ",
"Rishab Goel ",
"Joumana Ghosn "
] | [] | [] | There has been rapidly growing interests in Automatic Diagnosis (AD) and Automatic Symptom Detection (ASD) systems in the machine learning research literature, aiming to assist doctors in telemedicine services. These systems are designed to interact with patients, collect evidence relevant to their concerns, and make predictions about the underlying diseases. Doctors would review the interaction, including the evidence and the predictions, before making their final decisions. Despite the recent progress, an important piece of doctors' interactions with patients is missing in the design of AD and ASD systems, namely the differential diagnosis. Its absence is largely due to the lack of datasets that include such information for models to train on. In this work, we present a largescale synthetic dataset that includes a differential diagnosis, along with the ground truth pathology, for each patient. In addition, this dataset includes more pathologies, as well as types of symtoms and antecedents. As a proof-of-concept, we extend several existing AD and ASD systems to incorporate differential diagnosis, and provide empirical evidence that using differentials in training signals is essential for such systems to learn to predict differentials 1 . | 10.48550/arxiv.2205.09148 | [
"https://arxiv.org/pdf/2205.09148v1.pdf"
] | 248,887,532 | 2205.09148 | 68968828fb8ff99264bcce6481156ff672a4f6a2 |
DDXPlus: A new Dataset for Medical Automatic Diagnosis
May 20, 2022
Arsene Fansi Tchango
Zhi Wen
Rishab Goel
Joumana Ghosn
There has been rapidly growing interest in Automatic Diagnosis (AD) and Automatic Symptom Detection (ASD) systems in the machine learning research literature, aiming to assist doctors in telemedicine services. These systems are designed to interact with patients, collect evidence relevant to their concerns, and make predictions about the underlying diseases. Doctors would review the interaction, including the evidence and the predictions, before making their final decisions. Despite the recent progress, an important piece of doctors' interactions with patients is missing in the design of AD and ASD systems, namely the differential diagnosis. Its absence is largely due to the lack of datasets that include such information for models to train on. In this work, we present a large-scale synthetic dataset that includes a differential diagnosis, along with the ground truth pathology, for each patient. In addition, this dataset includes more pathologies, as well as more types of symptoms and antecedents. As a proof-of-concept, we extend several existing AD and ASD systems to incorporate differential diagnosis, and provide empirical evidence that using differentials in training signals is essential for such systems to learn to predict differentials 1 .
Introduction
In a clinical conversation between a doctor and a patient, the patient usually initiates the discussion by specifying an initial set of symptoms they are experiencing. The doctor then iteratively inquires about additional informative symptoms and antecedents (describing the patient's history and potential risk factors), and produces a differential diagnosis, i.e. a set of plausible diseases, at the end of the interaction. During this multi-step process, the doctor tries to collect all relevant information to narrow down the list of the differentials. Once the differential diagnosis is established, the doctor can ask the patient to undergo medical exams to eliminate most pathologies included in the differential and confirm the one(s) the patient is suffering from, or can decide to directly prescribe a treatment to the patient.
Aiming to aid doctors in such clinical interactions, there has been significant recent progress on building Automatic Diagnosis (AD) systems and Automatic Symptom Detection (ASD) systems, using recent machine learning and Reinforcement Learning (RL) techniques [1,2,3,4,5,6,7]. They are meant to collect all symptoms and antecedents relevant to the patient's concern, while minimizing the length of the interaction to improve efficiency. They can potentially also predict the underlying disease to further aid the doctors in deciding appropriate next steps in the patient diagnoses.
However, this setting differs from realistic patients' interactions in an important way, namely the absence of the differential diagnosis. Based on the conversation alone, without further evidence such as physical exams, doctors tend to consider the differentials rather than a single pathology [8]. Doing so accounts for the uncertainty in the diagnosis and presents a more comprehensive view of the doctor's opinions on the underlying disease. Considering differentials is especially important for AD and ASD systems to account for the potential errors in predictions, and therefore allowing better acceptability by doctors. The absence of differential diagnosis in recent AD/ASD systems is mainly due to the lack of dataset that includes such information. The most commonly used public datasets, for example DX [1], Muzhi [2] and SymCAT [9], all are designed for predicting the ground truth pathology and lack differentials.
To close this gap and encourage future research that focuses on differential diagnosis, we present a large-scale synthetic dataset for building AD and ASD systems. This dataset is similar in format to other public datasets such as DX [1] and Muzhi [2], but differs in several important ways. First, it is larger in scale, in terms of the number of patients, as well as the number of pathologies, symptoms and antecedents. Second, it includes not only binary evidence, as existing datasets do, but also categorical, multi-choice and numerical types. Finally, each patient has a corresponding set of differential diagnosis in addition to the ground truth pathology. To the best of our knowledge, this is the first large scale dataset that includes both the ground truth pathology, a differential diagnosis and non-binary symptoms. To summarize, we make the following contributions:
• Release a large scale synthetic benchmark dataset of 1 million patients.
The dataset is generated using a proprietary medical knowledge base and contains a mixture of multi-choice, categorical and binary symptoms and antecedents. It also contains a differential diagnosis for each patient.
• We extend several existing AD and ASD systems to incorporate differential diagnosis. We then show that using the differentials in training signals is essential for such systems to be able to predict differentials.
Existing datasets and their limitations
The agent's training requires having access to the symptoms experienced by each patient, the relevant antecedents, and the differential diagnosis. There is unfortunately no such public dataset. Existing public datasets, such as the MIMIC-III dataset [29], often lack symptom-related data and are therefore inappropriate. Other datasets, such as Muzhi [7], are of small scale, lack the differential diagnosis, and don't necessarily keep track of all the evidences experienced by patients. Moreover, due to privacy laws and security concerns, medical data is difficult to obtain from clinics and hospitals.
To tackle these limitations, previous works [1,9] relied on the SymCAT database [30] for data synthesis. Unfortunately, SymCAT doesn't provide a differential diagnosis. Moreover, SymCAT is limited to binary evidences, which can lead to unnecessarily long interactions with patients (compared to categorical or multi-choice questions that allow the collection of information with a smaller number of dialog turns). Finally, the symptoms listed in SymCAT are not always defined in an understandable way by patients and would require to be made more explicit (e.g., "are you experiencing flu-like syndrome?").
We choose not to use DX [1] and Muzhi [2] datasets, both of which are commonly used as public benchmarks in prior works [3,4,5,6]. The main reason is that they have a small number of samples (527 and 710 respectively), diseases (5 and 4 respectively), and symptoms (41 and 67 respectively), as noted by prior works [7]. Additionally, we also notice that the reliability of DX dataset was questioned by reviewers in previous submissions to ICLR 2 , further undermining the suitability of DX dataset in this work.
Proposed dataset
Source
The dataset we propose in this work heavily relies on a proprietary knowledge base (KB) extracted from the medical literature, which was used to design OTA, a rule-based system that has been deployed in a real-world telemedicine platform. In total, the knowledge base covers a set of 440 pathologies and 802 symptoms and antecedents. The pathologies are grouped into overlapping subgroups, based on common characteristics, referred to as chief complaints [10,11]. In this work, as a first step, we focus on pathologies belonging to the chief complaint related to cough, sore throat, or breathing issues. This subgroup is of medium size and contains a set of 49 pathologies covering 110 symptoms and 113 antecedents. Extending the dataset to all pathologies is left for future work.
Each pathology d in the knowledge base is characterized by either an incidence rate, a prevalence rate, or both values. Both rates are conditioned on the age, the sex, and the geographical region of the patient. Additionally, for each pathology, a set of symptoms and antecedents describing the pathology is provided together with their related probabilities. These probabilities are conditioned on the age and sex of the patient. Thus, the values p(s|d, age, sex) and p(a|d, age, sex) are provided for each symptom s and each antecedent a.
Unlike existing datasets mentioned above, evidences (i.e., symptoms and antecedents) within this knowledge base are of several types. They can be binary (e.g., cough?), categorical (e.g., pain intensity from 0 to 10?), or multi-choice (e.g., pain location?). Sometimes, an evidence f_s (e.g., pain location) may depend on another evidence s (e.g., pain), in which case the knowledge base provides means to extract the corresponding probability p(f_s | s, d, age, sex). Finally, each pathology is characterized by its level of severity, ranging from 1 to 5, with the lowest values describing the most severe pathologies from an emergency perspective.
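To make this structure concrete, the following Python sketch shows one way such knowledge-base entries could be represented. The field names, key layouts and types are our own illustrative assumptions and not the schema of the proprietary KB.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class EvidenceSpec:
    """One evidence (symptom or antecedent) attached to a pathology."""
    name: str                 # e.g. "cough"
    kind: str                 # "binary", "categorical" or "multi-choice"
    # p(evidence | pathology, age group, sex), keyed by (age_group, sex).
    cond_prob: Dict[Tuple[str, str], float] = field(default_factory=dict)
    # Optional parent evidence, e.g. "pain" for "pain location"; in that case
    # cond_prob is read as p(f_s | s, d, age, sex).
    parent: Optional[str] = None

@dataclass
class PathologySpec:
    """One pathology entry of the knowledge base (illustrative layout)."""
    name: str
    severity: int             # 1 (most severe) to 5
    # Incidence / prevalence rates keyed by (age group, sex, region).
    incidence: Dict[Tuple[str, str, str], float] = field(default_factory=dict)
    prevalence: Dict[Tuple[str, str, str], float] = field(default_factory=dict)
    symptoms: List[EvidenceSpec] = field(default_factory=list)
    antecedents: List[EvidenceSpec] = field(default_factory=list)
```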
To generate the synthetic patients, we rely on Synthea TM [12], a synthetic patient generator that produces high-quality patient data along with the associated health records covering every aspect of healthcare, as well as on the statistics extracted from the knowledge base. In the next section, we describe in detail the generation process and the assumptions made to this end.
Generation Process
To generate the synthetic dataset, we made some assumptions and developed several rules to exploit the knowledge base.
Assumptions on Socio-Demographic Data
As mentioned above, the pathology statistics from the knowledge base are conditioned on the age, the sex, and the geographical region of the patient. In this work, we assume that the age, sex, and geographical region are independent. In other words, we have p(age, sex, geo) = p(age) × p(sex) × p(geo), where geo is the random variable representing the geographical region. The distributions over age and sex can both be obtained from census data. For this dataset, we used the 2010-2015 US Census data from the State of New York [13]; see Section 5.1 for more details. Regarding the geographical region, one needs to embed the notion of patient location, or at least the notion of recent travel, when synthesizing a patient. In this project, we opt for the second option: each synthesized patient is generated by simulating whether they recently travelled and, if so, to which geographical region. This choice is motivated by the fact that we synthesize patients from the population statistics of the state of New York, while some pathologies of interest can only be contracted in a different geographical region. We thus assume the availability of a prior distribution p(travel) representing the proportion of the population travelling each month, and we consider the distribution over destination regions to be uniform. Finally, we assume that the default geographical region is "North America" for any person who has not recently travelled. Based on these assumptions, we derive the following prior distribution p(geo) over geographical regions (a minimal code sketch is given after the list):
• Sample u ∼ U(0, 1).
• If u < p(travel), then randomly select a geographical region from the available set of geographical regions. We used p(travel) = 0.25 for this dataset.
• If u ≥ p(travel), then set the geographical region to be "North America".
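A minimal Python sketch of this sampling procedure is given below. The list of regions is an illustrative assumption; only p(travel) = 0.25 and the "North America" default come from the text above.

```python
import random

# Illustrative region names; the real set comes from the knowledge base.
REGIONS = ["North America", "Europe", "Asia", "Africa", "South America", "Oceania"]
P_TRAVEL = 0.25  # proportion of the population assumed to travel each month

def sample_geographical_region(rng: random.Random) -> str:
    """Sample a region from the prior p(geo) described above."""
    u = rng.random()                # u ~ U(0, 1)
    if u < P_TRAVEL:
        return rng.choice(REGIONS)  # uniform over the available regions
    return "North America"          # default region when no recent travel

rng = random.Random(0)
print(sample_geographical_region(rng))
```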
Assumptions on Pathologies
In this work, the incidence rate, when available, is used as the pathology prior distribution. However, we fall back to the prevalence rate when the incidence rate is not available. This is one of the major limitations of the data generation process and needs to be addressed in future work. From several discussions with a medical expert, it seems that one can approximate the incidence rate from the prevalence by multiplying it by a constant factor (representing the average duration of the disease), which can differ for each pathology. Out of the 49 pathologies present in this dataset, 8 are affected by this shortcut. When the resulting rate is greater than 100% (e.g., an incidence rate of 200% means that an individual will, on average, develop the pathology twice a year), we simply cap it at 100%. We first explored a strategy consisting of capping the rate at 100% and generating as many patients as needed to comply with the original rate (e.g., generating two patients for an incidence rate of 200%). However, this strategy led to a dataset dominated by only a few pathologies (more than half of the patients within the dataset were suffering from one of the three pathologies whose incidence rate was greater than 100%).
The knowledge base also contains some diseases with extremely low incidence rates, and therefore patients suffering from those pathologies were barely generated. To increase the chance of those pathologies being represented within the dataset, we additionally impose a minimum rate of 10%. In other words, the rates used to generate the dataset were clipped to lie between 10% and 100%. This simple alteration of the original rates from the knowledge base leads to the generation of a significant number of patients for every disease.
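The adjustment described above amounts to clipping each pathology's rate into the interval [0.10, 1.00]; a one-line sketch:

```python
def adjusted_rate(rate: float) -> float:
    """Clip a pathology's incidence/prevalence rate into [10%, 100%]."""
    return min(max(rate, 0.10), 1.00)

assert adjusted_rate(2.00) == 1.00   # a 200% incidence rate is capped at 100%
assert adjusted_rate(0.001) == 0.10  # very rare diseases are floored at 10%
```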
Assumptions on Symptoms and Antecedents
At this point, we are able to sample a pathology d from its prior distribution p(d|age, sex, geo). The next step is to generate all the evidences (symptoms and antecedents) the synthesized patient will be experiencing. However, the knowledge base doesn't contain the joint distribution of symptoms and antecedents given the disease, sex and age. It only contains marginal distributions for each symptom and antecedent. So, a simplifying assumption is made according to which given the disease, age and sex, all the evidences are conditionally independent of each other. In other words, we have:
p(E | d, age, sex) = ∏_{e ∈ E} p(e | d, age, sex), where E is the set of evidences experienced by the patient. This simplifying assumption is yet another limitation of our dataset.
Some evidences, such as pain intensity, are described as integer values on a scale of 0 to 10. However, the knowledge base only provides the average value of such an evidence given the disease, the age, and the sex of the patient. To inject some randomness into the patient generation process, the value of such an evidence is uniformly sampled from the interval [max(0, v − 3), min(10, v + 3)], where v is the average value present in the knowledge base.
Additionally, for realism, we limit the number of selected choices for multi-choice evidences, such as pain location, to a maximum of 5.
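A minimal sketch of how evidences could be sampled under the conditional-independence assumption above. The dict-based evidence descriptors (keys such as "prob", "mean", "values") are our own illustrative assumptions, not the generator's actual data structures.

```python
import random

def sample_evidences(evidence_specs, rng: random.Random):
    """Sample a patient's evidences independently given (disease, age, sex).

    `evidence_specs` is a list of dicts with keys: name, kind ("binary",
    "scale", "categorical" or "multi-choice"), prob (p(e | d, age, sex)),
    and optionally `mean` for 0-10 scales or `values`/`value_probs`
    for categorical and multi-choice evidences.
    """
    sampled = {}
    for spec in evidence_specs:
        if rng.random() >= spec["prob"]:
            continue  # evidence not experienced by this patient
        if spec["kind"] == "binary":
            sampled[spec["name"]] = True
        elif spec["kind"] == "scale":
            v = spec["mean"]  # average value from the knowledge base
            sampled[spec["name"]] = rng.randint(max(0, v - 3), min(10, v + 3))
        elif spec["kind"] == "categorical":
            sampled[spec["name"]] = rng.choices(spec["values"], spec["value_probs"])[0]
        else:  # multi-choice: keep at most 5 selected choices
            chosen = [val for val, p in zip(spec["values"], spec["value_probs"])
                      if rng.random() < p]
            sampled[spec["name"]] = chosen[:5]
    return sampled
```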
Tools
As mentioned above, we rely on Synthea TM along with the described assumptions on conditional probabilities from the knowledge base to generate the patients. However, Synthea TM relies on static graphs, referred to as modules, to synthesize the patients. Because those graphs are static, the order in which the possible values of a categorical or a multi-choice evidence are traversed during the generation process is fixed and deterministic. Consequently, Synthea TM stops exploring the remaining values as soon as a value is synthesized for a categorical evidence, or as soon as 5 values are synthesized for a multi-choice evidence. In this work, the possible values of an evidence are ordered, within the Synthea TM modules, in ascending order of their conditional probability of occurrence p(e|d, age, sex).
Differential Diagnosis Generation
From the assumptions made above, we are able to synthesize a patient, that is someone suffering from a pathology and experiencing the related symptoms and antecedents. In this section, we focus on the generation of the differential diagnosis associated with the set of symptoms and antecedents experienced by a given patient.
As mentioned above, the knowledge base we rely on has been used to build a rule-based system which is deployed in a real-world telemedicine platform. We leverage this platform to compute the differential diagnosis. More specifically, we use the platform in a batch 3 mode according to the following high-level steps (a schematic code sketch follows the list):
• We provide the age and the sex of the patient, the appropriate chief complaint, and we answer "yes" to the question "Are you consulting for a new problem?".
• We add all the generated symptoms and antecedents experienced by the patient to the payload at the beginning of the interaction. The motivation behind this is to provide as much information to the platform so as to minimize the bias resulting from the interaction into the differential diagnosis.
• The platform may still inquire about additional questions. If that is the case, we answer "no" for those questions until we see a "QUIT" response from the platform or the maximum interaction length is reached.
• When the maximum interaction length is reached, the platform does not produce a differential diagnosis. We interpret this situation as if the synthesized patient is not as realistic as needed by the platform, and therefore, the patient is discarded from the dataset.
• When a "QUIT" response is provided by the platform, it contains a differential diagnosis. We further proceed by verifying if the synthesized disease is part of the generated differential diagnosis. If it is not the case (because the platform itself is not a perfect system or because the patient didn't have enough evidences for the rule-based system to include the simulated disease in the differential diagnosis), the patient is discarded from the dataset. Each pathology within the generated differential diagnosis has a score. Those scores are normalized to obtain a probability distribution.
The platform sometimes returns a differential diagnosis that contains pathologies which do not belong to the specified chief complaint. There are several options for handling this situation: (1) create an "other pathologies" category and assign it the cumulative mass of the corresponding pathologies, or (2) manually remove those pathologies from the differential diagnosis and re-normalize the distribution. We opt for the second option in this work. On average, we removed 1.78 (±1.68) pathologies from the generated differential diagnosis for an average cumulative probability mass of 0.10 (±0.11). Statistics regarding the rank from which those pathologies are excluded are described in Section 5.2.
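A small sketch of option (2), i.e., removing out-of-chief-complaint pathologies and re-normalizing the remaining probability mass:

```python
def restrict_differential(differential, allowed_pathologies):
    """Keep only pathologies of the target chief complaint and re-normalize."""
    kept = {p: m for p, m in differential.items() if p in allowed_pathologies}
    total = sum(kept.values())
    return {p: m / total for p, m in kept.items()} if total > 0 else {}

diff = {"URTI": 0.5, "Viral pharyngitis": 0.4, "Out-of-scope disease": 0.1}
print(restrict_differential(diff, {"URTI", "Viral pharyngitis"}))
# {'URTI': 0.555..., 'Viral pharyngitis': 0.444...}
```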
Dataset Characteristics
With the above assumptions and limitations, we generate a large-scale dataset of roughly 1.32 million distinct patients, where a patient is characterized by the combination of their age, sex, race, pathology, symptoms, antecedents, and the corresponding differential diagnosis. We further divide the dataset into training, validation, and test subsets using stratified sampling based on the simulated pathology and the classical 80-10-10 proportions.
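A minimal sketch of such a stratified 80-10-10 split using scikit-learn; the pandas DataFrame layout and the `PATHOLOGY` column name are assumptions for illustration only.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

def split_dataset(patients: pd.DataFrame, seed: int = 0):
    """80/10/10 stratified split on the simulated pathology."""
    train, rest = train_test_split(
        patients, test_size=0.2, stratify=patients["PATHOLOGY"], random_state=seed)
    valid, test = train_test_split(
        rest, test_size=0.5, stratify=rest["PATHOLOGY"], random_state=seed)
    return train, valid, test
```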
Compared to existing datasets from the Automatic Diagnosis and Automatic Symptom Detection literature, our dataset presents several advantages:
• Unlike the SymCAT [14] and Muzhi [1] datasets, our dataset does not only contain binary evidences. It also includes categorical and multi-choice evidences, which can be naturally matched to the kinds of questions a doctor can ask a given patient.
• Our dataset makes a clear distinction between antecedents and symptoms, which can be of great importance when designing automated systems. While both symptoms and antecedents are useful for characterising a pathology, antecedents are usually known from the patient's medical records, and therefore one can decide to put less emphasis on retrieving them when designing such an automated system.
• Each pathology in our dataset is characterized by a severity level which makes it suitable for designing solutions dedicated to emergency scenarios.
• To the best of our knowledge, this is the first large scale dataset containing differential diagnosis which can be useful for designing fine-grained automated systems.
In the next section, we perform further analysis of the generated dataset.
Data Analysis
In this section, we conduct a thorough analysis of the generated dataset and report the resulting statistics. The results reported in this section are for the whole set of generated patients. Statistics on the train, validation, and test subsets are presented in Section 5.3.
Types of evidences: The statistics regarding the types of evidences that are part of the generated dataset are shown in Table 1 (Table 1: The statistics of the considered evidences from the knowledge base). Although the dataset is mostly composed of binary evidences, it also includes categorical and multi-choice evidences, i.e., evidences that cannot be characterized by a yes-or-no answer.

Number of evidences: Table 2 shows an overview of the synthesized patients in terms of the number of simulated evidences. On average, a synthesized patient is characterized by 10.02 symptoms and 3.45 antecedents.
Pathology Statistics: Figure 1 shows the histogram of the pathologies the synthesized patients suffer from in the generated dataset. Although three pathologies dominate (URTI, Viral pharyngitis, and Anemia), the other pathologies are also well represented.
Socio-Demographic Statistics: The statistics regarding the socio-demographic data of the synthesized patients are shown in Figure 2. As expected, these statistics comply with the 2015 US Census data of the state of New York, which was used as reference during the generation process.

Differential Diagnosis Statistics: The histogram of the length of the differential diagnosis characterizing the generated patients is depicted in Figure 3a. Figure 3b (resp. Figure 3c) illustrates the same histogram when the pathologies with a probability mass less than or equal to 0.01 (resp. 0.05) are filtered out from the differential diagnoses. As observed, the generated differential diagnoses can contain anywhere from 1 to more than 10 pathologies.
Figure 3: Statistics regarding the length of the differential diagnosis. (a) Full differential as generated by the system. (b) Pathologies with probability mass less than or equal to 0.01 are filtered out. (c) Same as (b) but with the threshold set to 0.05.
It is also interesting to gain insight into the rank of the simulated patient pathology within the generated differential diagnosis. Figure 4 addresses this point. As can be noticed, the simulated pathology is ranked first for more than 70% of patients, which further validates the quality of the generated data.

Figure 4: Statistics regarding the rank of the simulated pathology within the differential diagnosis. (a) Full differential as generated by the system. (b) Pathologies with probability mass less than or equal to 0.01 are filtered out. (c) Same as (b) but with the threshold set to 0.05.
Disclaimer
From the sections discussed above, it should be clear that any models trained on this dataset should not be directly used in a real-world system prior to performing rigorous evaluations to ensure proper coverage and representativity of the population that such a model will interact with.
Experiments
There has not been much work on building automatic diagnosis systems that aim at producing a differential diagnosis based on the evidences collected from patients. In this section, we adapt two existing approaches to our setting:
• AARLC: AARLC [7] is an RL-based approach consisting of two branches, an evidence acquisition branch and a classifier branch. The method proposes an adaptive approach to align the tasks performed by the two branches using entropy of the distributions predicted by the classifier branch.
• ASD: this supervised-learning-based approach builds on top of the evidence acquisition module proposed in [15] (with the exception of the knowledge graph) and adds a policy network which aims at predicting the underlying patient disease at the end of the evidence acquisition process. More details about this approach are given in Appendix 6.
Furthermore, for each approach we train two versions: one trained to predict the ground truth pathology and one trained to predict the differential diagnosis.
Experimental setup
Each patient has one initial evidence which is provided to the model at the beginning of the interaction. The model then iteratively inquires about various symptoms and antecedents, and the patient gives an appropriate response. The system repeats this until all the relevant symptoms and antecedents have been inquired about, and produces a differential diagnosis at the end of the interaction. The maximum number of turns is capped at T (i.e., 30 in all of our experiments). For AARLC, we use the same setup as in [7], whereas for the ASD approach the agent is an MLP with 2 hidden layers of size 2048.
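For concreteness, the evaluation loop can be sketched as follows; the `agent` and `patient` interfaces are hypothetical placeholders and not the implementation of either baseline.

```python
def run_interaction(agent, patient, max_turns=30):
    """Run one agent-patient interaction capped at `max_turns` turns."""
    state = {patient.initial_evidence: patient.answer(patient.initial_evidence)}
    for _ in range(max_turns):
        action = agent.act(state)            # either "stop" or an evidence to inquire about
        if action == "stop":
            break
        state[action] = patient.answer(action)
    return agent.predict_differential(state), state
```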
Results
We report the interaction length (IL). To evaluate the evidence collection, we measure the recall (PER), precision (PEP), and F1 score (PEF1) of the inquired evidences. To evaluate the differential diagnosis, we calculate the recall (DDR@k), precision (DDP@k) and F1 score (DDF1@k) when considering the top k entries of the predicted distributions. We also compute the accuracy of inclusion of the ground truth pathology (i.e., the pathology a patient was simulated from) in the differential diagnosis (GTPA@k). Details of these metrics, including all formulas, can be found in Appendix 7. Tables 3 and 4 show the results obtained for the two approaches. As can be observed, for both approaches, the differential helps collect more evidences.
Conclusion and Future Work
In this work, we release a large scale benchmark dataset of 1 million patients suffering from pathologies that include cough, sore throat or breathing problems as symptoms. The dataset contains binary, categorical and multi-choice evidences.
We hope that this dataset will help the research community improve automatic diagnosis systems. We emphasize that this dataset should not be used to train and deploy a model prior to performing rigorous evaluations to ensure proper coverage and representativity of the populations that such a model will interact with. Based on this dataset, we extend several AD and ASD baselines: two approaches (one RL-based and one non-RL-based) that reduce the interaction length and improve evidence collection and differential diagnosis prediction.
In this work, we considered all diseases as equally important. But in general, when establishing a differential diagnosis, doctors ask questions to specifically explore and rule out severe pathologies. Our dataset has a severity flag associated with each pathology. We will therefore explore approaches that better handle severity. Extending our system to support uncertain answers from patients (e.g. "I don't know." or "I am not sure.") will also be an important next step.
Appendix
Demographics statistics from census data
To be able to synthesize patients, one needs access to the prior distributions of the age, sex and race of the population of interest. In this work, we rely on the 2010-2015 US census data [13] from the state of New York. In our implementation, because the Synthea TM tool uses the "Hispanic" value to characterize the ethnicity of a synthesized patient instead of the race, we remove "Hispanic" from the set of possible race values and re-normalize the probabilities of the remaining values. Table 5 describes the corresponding statistics.
Differential Diagnosis Post-Processing
As mentioned previously, pathologies that are part of the differential diagnosis generated by the rule-based platform but that do not belong to the specified chief complaint are excluded from the differential diagnosis. Table 6 describes, for each rank in the differential diagnosis, the proportion of patients for which the pathology at that rank is excluded.

Table 6: Proportion (%) of the patients for which pathologies are excluded at each rank from the differential returned by the platform.
Analysis of the Train, Validation, and Test Subsets
Evidences Statistics
The statistics of the evidences experienced by the synthesized patients belonging to the train, validation, and test subsets are depicted in Tables 7, 8, and 9, respectively. As illustrated, the evidences are distributed uniformly across the three subsets.

Table 9: The statistics of the synthesized patients on the test subset.
Pathology Statistics
Figures 5, 6, and 7 depict the histograms of the pathologies experienced by the synthesized patients in the train, validation, and test subsets. As illustrated, the pathologies are evenly distributed across the three subsets.

Figure 5: Histogram of the patient pathologies in the train subset.
Socio-Demographic Statistics
This section presents the statistics regarding the socio-demographic data of the synthesized patients from the train (cf. Figure 8), validation (cf. Figure 9), and test (cf. Figure 10) subsets. In all three figures, the sex distribution is shown in plot (a), while the age and race distributions are depicted in plots (b) and (c), respectively.

Figure 6: Histogram of the patient pathologies in the validation subset.
Differential Diagnosis Statistics
The histograms of the length of the differential diagnosis for the train, validation, and test subsets are illustrated in Figures 11, 13, and 15, respectively. Similarly, the histograms of the rank of the simulated pathology within the differential diagnosis for the train, validation, and test subsets are depicted in Figures 12, 14, and 16, respectively.
ASD Approach
The ASD agent consists of an MLP network with 2 prediction branches:
• a policy branch whose role is to predict whether to stop or continue the interaction, and if the latter, what evidence to inquire about next;
• a classifier branch to make prediction regarding the underlying patient disease.
To train the network, we simulate dialogue states together with their target values. In other words, let us assume a given patient is experiencing n evidences. We simulate a dialogue state as follows (a minimal code sketch follows the list):
1. Randomly select p ∈ [1, n] representing the number of positive evidences already acquired. Sample p evidences from the ones experienced by the patient and set them in the simulated dialog state.

2. Randomly select q ∈ [0, T − p), representing the number of negative evidences already inquired, where T is the maximum allowed number of dialog turns. Sample q evidences from the ones not experienced by the patient and set them in the simulated dialog state.

3. If p = n, set the target of the policy branch to "stop"; otherwise set the target to be one of the experienced evidences that was not sampled at step 1.

4. Set the classifier branch target to be the ground-truth pathology.

Both branches are trained using the cross-entropy loss function, and the classifier branch is only updated when the target of the policy branch is set to "stop".
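A minimal Python sketch of this state-simulation procedure; the function signature and variable names are ours, not the authors' implementation.

```python
import random

def simulate_dialog_state(positive_evidences, all_evidences, max_turns, rng):
    """Simulate one training dialogue state following steps 1-3 above.

    `positive_evidences` is the list of evidences the patient experiences;
    returns (state, policy_target) where `state` maps evidences to True/False.
    The classifier target (step 4) is simply the patient's ground-truth pathology.
    """
    negatives_pool = [e for e in all_evidences if e not in positive_evidences]
    n = len(positive_evidences)
    p = rng.randint(1, n)                                  # step 1: p in [1, n]
    acquired_pos = rng.sample(positive_evidences, p)
    q = rng.randint(0, max(0, max_turns - p - 1))          # step 2: q in [0, T - p)
    acquired_neg = rng.sample(negatives_pool, min(q, len(negatives_pool)))
    state = {**{e: True for e in acquired_pos}, **{e: False for e in acquired_neg}}
    if p == n:                                             # step 3
        policy_target = "stop"
    else:
        remaining = [e for e in positive_evidences if e not in acquired_pos]
        policy_target = rng.choice(remaining)
    return state, policy_target
```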
Figure 11: Statistics regarding the length of the differential diagnosis in the train subset. (a) Full differential as generated by the system. (b) Pathologies with probability mass less than or equal to 0.01 are filtered out. (c) Same as (b) but with the threshold set to 0.05.

Figure 12: Statistics regarding the rank of the simulated pathology within the differential diagnosis in the train subset. Panels (a)-(c) as in Figure 11.

Figure 13: Statistics regarding the length of the differential diagnosis in the validation subset. Panels (a)-(c) as in Figure 11.

Figure 14: Statistics regarding the rank of the simulated pathology within the differential diagnosis in the validation subset. Panels (a)-(c) as in Figure 11.
Evaluation Metric
This section describes the metrics used to evaluate the implemented agents. Let D be the set of patients. For the i-th patient, let T^i be the set of evidences collected by an agent, Y^i the ground truth differential, and Ŷ^i the pathology distribution generated by the agent.

Figure 15: Statistics regarding the length of the differential diagnosis in the test subset. Panels (a)-(c) as in Figure 11.

Figure 16: Statistics regarding the rank of the simulated pathology within the differential diagnosis in the test subset. Panels (a)-(c) as in Figure 11.
Interaction Length (IL):
The average interaction length is defined as:
IL = (1/|D|) Σ_{i=1}^{|D|} |T^i|.    (1)
Differential Diagnosis Recall for top k entries (DDR@k): At the end of an interaction, the agent produces a distribution Ŷ over the pathologies. We measure whether the top k pathologies Ŷ_k extracted from this distribution are also present in the top k pathologies Y_k of the ground truth differential (if the ground truth differential has fewer than k pathologies, then we set k to the size of the ground truth differential). The metric is defined as:

DDR@k = (1/|D|) Σ_{i=1}^{|D|} |Ŷ^i_k ∩ Y^i_k| / |Y^i_k|,    (2)

where Ŷ^i_k and Y^i_k are respectively the predicted and ground truth top-k pathologies for the i-th patient.
Differential Diagnosis Precision for top k entries (DDP@k): This metric measures the precision of the differential diagnoses predicted by an agent when considering the top-k entries:
DDP@k = (1/|D|) Σ_{i=1}^{|D|} |Ŷ^i_k ∩ Y^i_k| / |Ŷ^i_k|.    (3)
Differential Diagnosis F1 for top k entries (DDF1@k): We combine the aforementioned recall and precision metrics to compute the F1 score of the differential diagnosis.
Gold Truth Pathology Accuracy (GTPA@k): This metric measures whether the differential diagnosis predicted by an agent contains the pathology p a patient was simulated from within its top-k entries:
GTPA@k = (1/|D|) Σ_{i=1}^{|D|} 1[p^i ∈ Ŷ^i_k].    (4)
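For illustration, the per-patient computation of these differential-diagnosis metrics could look as follows (dataset-level values are simply averages over patients); the dict-based representation of distributions is an assumption.

```python
def topk(distribution, k):
    """Return the set of the k most probable pathologies of a {pathology: prob} dict."""
    ranked = sorted(distribution, key=distribution.get, reverse=True)
    return set(ranked[:k])

def differential_metrics(pred, gold, k, true_pathology):
    """Per-patient DDR@k, DDP@k, DDF1@k and GTPA@k."""
    k_eff = min(k, len(gold))             # shrink k when the gold differential is smaller
    pred_k, gold_k = topk(pred, k_eff), topk(gold, k_eff)
    inter = len(pred_k & gold_k)
    ddr = inter / len(gold_k)
    ddp = inter / len(pred_k)
    ddf1 = 0.0 if ddr + ddp == 0 else 2 * ddr * ddp / (ddr + ddp)
    gtpa = float(true_pathology in topk(pred, k))
    return ddr, ddp, ddf1, gtpa
```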
Positive Evidence Recall (PER): Let us suppose that the i-th patient experiences the set S^i of symptoms and the set A^i of antecedents. Also, let us assume that the agent inquires about the set Ŝ^i (resp. Â^i) of symptoms (resp. antecedents), of which Ŝ^i_+ ⊆ S^i (resp. Â^i_+ ⊆ A^i) is the subset actually experienced by the i-th patient. Then, the recall of the positive evidences is calculated as:

PER = (1/|D|) Σ_{i=1}^{|D|} PER_i, where PER_i = |Ŝ^i_+ ∪ Â^i_+| / |S^i ∪ A^i|.    (5)
Positive Evidence Precision (PEP): Similarly to the recall metric, we compute the precision of inquired evidences. We have:
PEP = (1/|D|) Σ_{i=1}^{|D|} PEP_i, with PEP_i = |Ŝ^i_+ ∪ Â^i_+| / |T^i|.    (6)
Positive Evidence F1 (PEF1): We combine the aforementioned recall and precision metrics to compute the F1 score of the inquired evidences.
PEF1 = (1/|D|) Σ_{i=1}^{|D|} 2 · (PEP_i · PER_i) / (PEP_i + PER_i).    (7)
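A corresponding per-patient sketch for the positive-evidence metrics, assuming a simple set-based representation of the inquired and experienced evidences:

```python
def positive_evidence_metrics(inquired, experienced):
    """Per-patient PER, PEP and PEF1.

    `inquired` is the set of evidences the agent asked about;
    `experienced` is the set of evidences the patient actually has.
    """
    relevant = inquired & experienced
    per = len(relevant) / len(experienced) if experienced else 0.0
    pep = len(relevant) / len(inquired) if inquired else 0.0
    pef1 = 0.0 if per + pep == 0 else 2 * per * pep / (per + pep)
    return per, pep, pef1
```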
Figure 1: Histogram of the pathologies experienced by the synthesized patients.

Figure 2: The socio-demographic statistics of the synthesized patients. The sex distribution is shown in plot (a), while the age and race distributions are depicted in plots (b) and (c), respectively.

Figure 7: Histogram of the patient pathologies in the test subset.

Figure 8: The socio-demographic statistics of the patients from the train subset.

Figure 9: The socio-demographic statistics of the patients from the validation subset.

Figure 10: The socio-demographic statistics of the patients from the test subset.
Table 2: The statistics of the synthesized patients.

Table 3: Results of the evaluated methods (interaction length, ground truth pathology accuracy and positive evidence metrics).

Method          | IL    | GTPA@1 | GTPA@3 | GTPA@5 | PER   | PEP   | PEF1
ASD w/ Diff     | 17.46 | 68.23  | 91.35  | 96     | 87.71 | 74.05 | 76.64
AARLC w/ Diff   | 14.1  | 61.74  | 91.51  | 96.6   | 51.56 | 74.83 | 56.99
ASD w/o Diff    | 17.85 | 96.65  | 98.32  | 98.34  | 88.15 | 73.26 | 76.17
AARLC w/o Diff  | 7.27  | 98.86  | 99.79  | 99.93  | 35.91 | 79.38 | 45.51
Table 4: Differential diagnosis recall, precision and F1 at k ∈ {1, 3, 5} for the evaluated methods.

Method          | DDR@1 | DDP@1 | DDF1@1 | DDR@3 | DDP@3 | DDF1@3 | DDR@5 | DDP@5 | DDF1@5
ASD w/ Diff     | 79.59 | 79.59 | 79.59  | 75.72 | 76.7  | 75.27  | 77.1  | 78.86 | 76.36
AARLC w/ Diff   | 71.59 | 71.59 | 71.59  | 74.86 | 69.58 | 71.14  | 79.33 | 69.79 | 72.53
ASD w/o Diff    | 71.35 | 71.35 | 71.35  | 37.19 | 92.05 | 51.25  | 28.42 | 96.54 | 41.05
AARLC w/o Diff  | 72.73 | 72.73 | 72.73  | 37.31 | 91.78 | 51.25  | 28.56 | 95.97 | 41.05
Table 5: The 2010-2015 census data of the state of New York.

Category | Value              | Probability
Sex      | Male               | 0.4836
Sex      | Female             | 0.5164
Age      | Less than 1 year   | 0.0154
Age      | 1-4 years          | 0.0461
Age      | 5-14 years         | 0.1146
Age      | 15-29 years        | 0.2132
Age      | 30-44 years        | 0.2025
Age      | 45-59 years        | 0.2042
Age      | 60-74 years        | 0.1399
Age      | 75 years and more  | 0.0641
Race     | White              | 0.5778
Race     | Hispanic           | 0.1658
Race     | Black              | 0.1638
Race     | Asian              | 0.0831
Race     | Native             | 0.0087
Race     | Other              | 0.0008
Table 7: The statistics of the synthesized patients on the train subset.

             | Evidences | Symptoms | Antecedents
Avg          | 13.69     | 10.24    | 3.45
Std dev      | 5.05      | 4.65     | 2.23
Min          | 1         | 1        | 0
1st quartile | 10        | 8        | 2
Median       | 13        | 10       | 3
3rd quartile | 17        | 12       | 5
Max          | 34        | 25       | 12

Table 8: The statistics of the synthesized patients on the validation subset.

             | Evidences | Symptoms | Antecedents
Avg          | 13.66     | 10.19    | 3.46
Std dev      | 5.05      | 4.67     | 2.23
Min          | 1         | 1        | 0
1st quartile | 10        | 8        | 2
Median       | 13        | 10       | 3
3rd quartile | 17        | 12       | 5
Max          | 35        | 25       | 12
Dataset available at https://github.com/bruzwen/ddxplus.
https://openreview.net/forum?id=TCAmP8zKZ6k¬eId=SCxQNdf67Rw
As opposed to the interactive mode where the evidences are provided sequentially upon requests made by the platform.
Task-oriented dialogue system for automatic diagnosis. Zhongyu Wei, Qianlong Liu, Baolin Peng, Huaixiao Tou, Ting Chen, Xuan-Jing Huang, Kam-Fai Wong, Xiang Dai, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsShort Papers2Zhongyu Wei, Qianlong Liu, Baolin Peng, Huaixiao Tou, Ting Chen, Xuan- Jing Huang, Kam-Fai Wong, and Xiang Dai. Task-oriented dialogue system for automatic diagnosis. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 201-207, 2018.
Lin Xu, Qixian Zhou, Ke Gong, Xiaodan Liang, Jianheng Tang, and Liang Lin. End-to-end knowledge-routed relational dialogue system for automatic diagnosis. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7346-7353, 2019.
Junying Chen, Dongfang Li, Qingcai Chen, Wenxiu Zhou, and Xin Liu. Diaformer: Automatic diagnosis via symptoms sequence generation. arXiv preprint arXiv:2112.10433, 2021.
Xinyan Zhao, Liangwei Chen, and Huanhuan Chen. A weighted heterogeneous graph-based dialog system. IEEE Transactions on Neural Networks and Learning Systems, 2021.
Hong Guan and Chitta Baral. A bayesian approach for medical inquiry and disease inference in automated differential diagnosis. arXiv preprint arXiv:2110.08393, 2021.
Wenge Liu, Yi Cheng, Hao Wang, Jianheng Tang, Yafei Liu, Ruihui Zhao, Wenjie Li, Yefeng Zheng, and Xiaodan Liang. "My nose is running." "Are you also coughing?": Building a medical diagnosis agent with interpretable inquiry logics. arXiv preprint arXiv:2204.13953, 2022.
Hongyi Yuan and Sheng Yu. Efficient symptom inquiring and diagnosis via adaptive alignment of reinforcement learning and classification. arXiv preprint arXiv:2112.00733, 2021.
Mark Henderson, Lawrence M Tierney, and Gerald W Smetana. The patient history: Evidence-based approach. McGraw Hill Professional, 2012.
Yu-Shao Peng, Kai-Fu Tang, Hsuan-Tien Lin, and Edward Chang. Refuel: Exploring sparse features in deep reinforcement learning for fast disease diagnosis. Advances in Neural Information Processing Systems, 31:7322-7331, 2018.
Dominik Aronsky, Diane Kendall, Kathleen Merkley, Brent James, and Peter Haug. A comprehensive set of coded chief complaints for the emergency department. Academic Emergency Medicine, 8:980-989, 2001.
David Thompson, David Eitel, Christopher Fernandes, Jesse Pines, and James Amsterdam. Coded chief complaints-automated analysis of free-text complaints. Academic Emergency Medicine, 13:774-782, 2006.
Jason Walonoski, Mark Kramer, Joseph Nichols, Andre Quina, Chris Moesel, Dylan Hall, Carlton Duffett, Kudakwashe Dube, Thomas Gallagher, and Scott McLachlan. Synthea: An approach, method, and software mechanism for generating synthetic patients and the synthetic electronic health care record. Journal of the American Medical Informatics Association, 25(3):230-238, 2017.
US Government. 2010-2015 US Census Data. https://www2.census.gov/programs-surveys/popest/datasets/2010-2015/. Accessed: 2021-01-31.
AHEAD Research Inc. SymCAT: Symptom-based, Computer Assisted Triage. http://www.symcat.com/. Accessed: 2021-01-31.
Hongyin Luo, Shang-Wen Li, and James R. Glass. Knowledge grounded conversational symptom detection with graph memory networks. In CLINICALNLP, 2020.
| [
"https://github.com/bruzwen/ddxplus."
] |
[
"Exploring and Exploiting Multi-Granularity Representations for Machine Reading Comprehension",
"Exploring and Exploiting Multi-Granularity Representations for Machine Reading Comprehension",
"Exploring and Exploiting Multi-Granularity Representations for Machine Reading Comprehension",
"Exploring and Exploiting Multi-Granularity Representations for Machine Reading Comprehension"
] | [
"Nuo Chen \nADSPLAB\nSchool\nDepartment of Electrical Engineering\nECE Peking University\nShenzhenChina\n",
"Chenyu You \nYale University\nCTUSA\n",
"Nuo Chen \nADSPLAB\nSchool\nDepartment of Electrical Engineering\nECE Peking University\nShenzhenChina\n",
"Chenyu You \nYale University\nCTUSA\n"
] | [
"ADSPLAB\nSchool\nDepartment of Electrical Engineering\nECE Peking University\nShenzhenChina",
"Yale University\nCTUSA",
"ADSPLAB\nSchool\nDepartment of Electrical Engineering\nECE Peking University\nShenzhenChina",
"Yale University\nCTUSA"
] | [] | Recently, the attention-enhanced multi-layer encoder, such as Transformer, has been extensively studied in Machine Reading Comprehension (MRC). To predict the answer, it is common practice to employ a predictor to draw information only from the final encoder layer which generates the coarse-grained representations of the source sequences, i.e., passage and question. The analysis shows that the representation of source sequence becomes more coarse-grained from finegrained as the encoding layer increases. It is generally believed that with the growing number of layers in deep neural networks, the encoding process will gather relevant information for each location increasingly, resulting in more coarse-grained representations, which adds the likelihood of similarity to other locations (referring to homogeneity). Such phenomenon will mislead the model to make wrong judgement and degrade the performance. In this paper, we argue that it would be better if the predictor could exploit representations of different granularity from the encoder, providing different views of the source sequences, such that the expressive power of the model could be fully utilized. To this end, we propose a novel approach called Adaptive Bidirectional Attention-Capsule Network (ABA-Net), which adaptively exploits the source representations of different levels to the predictor. Furthermore, due to the better representations are at the core for boosting MRC performance, the capsule network and self-attention module are carefully designed as the building blocks of our encoders, which provides the capability to explore the local and global representations, respectively. Experimental results on three benchmark datasets, i.e., SQuAD 1.0, SQuAD 2.0 and COQA, demonstrate the effectiveness of our approach. In particular, we set the new state-of-the-art performance on the SQuAD 1.0 dataset. | 10.48550/arxiv.2208.08750 | [
"https://export.arxiv.org/pdf/2208.08750v1.pdf"
] | 251,643,623 | 2208.08750 | ad934d22f6df7d2de9c9958fe12af17198394e1b |
Exploring and Exploiting Multi-Granularity Representations for Machine Reading Comprehension
Nuo Chen
ADSPLAB
School
Department of Electrical Engineering
ECE Peking University
ShenzhenChina
Chenyu You
Yale University
CTUSA
Exploring and Exploiting Multi-Granularity Representations for Machine Reading Comprehension
Recently, the attention-enhanced multi-layer encoder, such as Transformer, has been extensively studied in Machine Reading Comprehension (MRC). To predict the answer, it is common practice to employ a predictor to draw information only from the final encoder layer which generates the coarse-grained representations of the source sequences, i.e., passage and question. The analysis shows that the representation of source sequence becomes more coarse-grained from finegrained as the encoding layer increases. It is generally believed that with the growing number of layers in deep neural networks, the encoding process will gather relevant information for each location increasingly, resulting in more coarse-grained representations, which adds the likelihood of similarity to other locations (referring to homogeneity). Such phenomenon will mislead the model to make wrong judgement and degrade the performance. In this paper, we argue that it would be better if the predictor could exploit representations of different granularity from the encoder, providing different views of the source sequences, such that the expressive power of the model could be fully utilized. To this end, we propose a novel approach called Adaptive Bidirectional Attention-Capsule Network (ABA-Net), which adaptively exploits the source representations of different levels to the predictor. Furthermore, due to the better representations are at the core for boosting MRC performance, the capsule network and self-attention module are carefully designed as the building blocks of our encoders, which provides the capability to explore the local and global representations, respectively. Experimental results on three benchmark datasets, i.e., SQuAD 1.0, SQuAD 2.0 and COQA, demonstrate the effectiveness of our approach. In particular, we set the new state-of-the-art performance on the SQuAD 1.0 dataset.
Introduction
Machine reading comprehension (MRC) (Chen et al., 2021c; You et al., 2021b; You et al., 2020b) is a long-standing task that aims to teach machines to read and comprehend a given source sequence, i.e., a passage/paragraph, and then automatically answer questions about it. It has many real-world applications, such as question answering and dialog systems. In recent years, deep neural networks, including recurrent neural networks (RNNs) and convolutional neural networks (CNNs) (Zhang, 2019), have been introduced to extract representations of the source sequence efficiently and have driven great progress in MRC. However, RNN-based models suffer from sequential dependency: their ability to model long sentences is limited and they are unsuitable for parallel training (Vaswani et al., 2017a). CNN-based models, in turn, rely on pooling operations that discard spatial information and ignore the hierarchical relationships between extracted features (Liu et al., 2019b).
Recently, attention-enhanced multi-layer encoders, e.g., Transformer (Vaswani et al., 2017a), ALBERT (Lan et al., 2019), RoBERTa (Liu et al., 2019c), and XLNet (Yang et al., 2019b), which rely solely on attention mechanisms (Bahdanau et al., 2015) and eliminate recurrence entirely, have established the state of the art on multiple challenging MRC datasets (Rajpurkar et al., 2016; Joshi et al., 2017; Reddy et al., 2019). However, under the multi-layer deep learning setting, the representations of the source sequence become increasingly coarse-grained as more encoder layers are applied. Figure 1 illustrates that, as the multi-layer encoder processes the source sequences, each input word gradually gathers more and more related information, resulting in more abstract representations, i.e., moving from fine-grained to coarse-grained representations that are more likely to be similar to those of other positions (the homogeneity phenomenon). Given the representations output by different encoder layers, the common practice is for the answer predictor to draw information (coarse-grained representations) only from the final encoder layer. Intuitively, however, coarse-grained representations are good at expressing the overall meaning of the source sequence but are less precise about its finer details. If the predictor also exploits fine-grained representations, it can locate source information more precisely and give more accurate answers. As shown in Figure 1, because the baseline model focuses only on coarse-grained representations, it gives an incorrect answer; in contrast, our method exploits detailed and accurate information, i.e., the fine-grained representations of NO.23 and NO.24, which helps the model attend to the correct source information and predict the correct answer.
In this paper, we propose a novel framework called Adaptive Bidirectional Attention-Capsule Network (ABA-Net) that dynamically provides multi-granularity source representations for predicting the correct answer. As shown in Figure 2, the proposed approach builds connections with each encoder layer, so that ABA-Net can exploit not only the coarse-grained representations of the source sequence, which are instrumental for language modeling, but also the fine-grained representations, which help predict more precise answers. Hence, the answer predictor is encouraged to use source representations of different granularity, fully exploiting the expressive power of the model. Furthermore, since better textual representations are at the core of boosting MRC performance, we propose to use the capsule network and the self-attention module as the building blocks of the encoders that separately encode the passage and question, yielding a more effective understanding of each word through local and global representations, respectively. Finally, before decoding the probability that each position is the beginning or end of the answer span, we encode the generated representations again with our stacked encoder blocks in the model encoding layer. Our experimental results show that each component provides substantial gains in predicting answers correctly.
In general, our main contributions in this paper are as follows:
• We propose a novel approach called ABA-Net, which encourages the answer predictor to take full advantage of the expressive power of the multi-layer encoder by exploring and exploiting multi-granularity representations of the source sequences.
• We propose to introduce the capsule network and the self-attention module, which respectively encode complicated local text features and capture global representations, as our basic encoder block for obtaining effective representations. Our encoder blocks can also be stacked into a hierarchy for multi-step interactions, gradually capturing mutual relations and transferring knowledge more effectively.
• The proposed approach achieves state-of-the-art performance on the SQuAD 1.0 dataset, showing the concrete benefit of exploring and exploiting multi-granularity representations.
Related Work
Machine Reading Comprehension. In recent years, MRC (Chen et al., 2021b; You et al., 2020a; Chen et al., 2021c) has been an active field in NLP. Much of this popularity can be attributed to the release of many annotated, publicly available datasets, such as CNN/Daily Mail (Hermann et al., 2015), WikiReading (Hewlett et al., 2016), SQuAD (Rajpurkar et al., 2016), TriviaQA (Joshi et al., 2017), DuReader-robust (Tang et al., 2020), and Molweni (Li et al., 2020). Based on these challenging datasets, a great number of end-to-end approaches have been proposed, including BiDAF (Seo et al., 2016), DCN (Xiong et al., 2016), and R-Net (Wang et al., 2017). In MRC, attention mechanisms (Dong et al., 2020a; Gao et al., 2020; Zhu et al., 2020; Chen et al., 2022) have become an essential tool for capturing dependencies regardless of their distance in the input/output sequences. Recent work also shows that well pre-trained models are powerful and convenient for downstream tasks, e.g., R-Trans (Liu et al., 2019a), DCMN+ (Zhang et al., 2019a), ALBERT (Lan et al., 2020), and GF-Net (Lee and Kim, 2020), which motivates us to adopt a pre-trained model as our backbone encoder.
Capsule Network. Capsule networks were initially proposed by Hinton et al. (2011) and are more robust than vanilla CNNs at learning meaningful representations. Sabour et al. (2017) define a capsule as a set of neurons whose activity vectors represent the instantiation parameters of a particular type of entity, such as an object or one of its parts. Hinton et al. (2018) later introduced an iterative routing procedure between capsule layers based on the EM algorithm, achieving higher accuracy on the smallNORB (LeCun et al., 2004) dataset. Over the past few years, remarkable progress has been made with capsule networks in natural language processing (NLP): Dong et al. (2020b) employ a BiLSTM-based capsule network for sentiment analysis, while Zhao et al. (2019) and Yang et al. (2019a) investigate capsule networks for text classification.
Our work differs from prior studies in the following aspects: (1) we propose a new attention mechanism that helps MRC models reasonably leverage multi-granularity representations of each word; (2) we combine the capsule network with self-attention in a residual encoder block to capture local and global representations, respectively; and (3) to the best of our knowledge, our model is the first to apply the capsule network to MRC.
Model
Problem Formulation
In this paper, the reading comprehension task is defined as follows: given a passage/paragraph with n words P = {p_1, p_2, p_3, ..., p_n} and a question with m words Q = {q_1, q_2, q_3, ..., q_m} as inputs, the goal is to find an answer span A in P. If the question is answerable, the answer span A exists in P as a contiguous text string; otherwise, A is set to an empty string indicating that the question is unanswerable.
Formally, the answer is formulated as A = {p_begin, p_end}. In the case of unanswerable questions, A denotes the last token of the passage.
Model Overview
Generally, our model contains four major parts: an embedding layer, an interaction-attention layer, a model encoding layer, and an output layer, as shown in Figure 2. The embedding layer encodes each token in the passage and question into a fixed-length vector. In this layer, we use GloVe (Pennington et al., 2014) as the fundamental word embedding, and the resulting vectors are passed through a BiLSTM (Hochreiter and Schmidhuber, 1997) and our proposed encoder block separately; we additionally utilize ALBERT (Lan et al., 2020) contextualized word embeddings. The interaction-attention layer plays a key role in ABA-Net: it employs Adaptive Bidirectional Attention to distinguish and reasonably utilize the multi-granularity representations of each word. The model encoding layer encodes the generated representations again with our encoder blocks. The output layer leverages the output of the previous layer to compute the final answer span via a bilinear projection.
Embedding Layer
Word-Char Embedding. We obtain fixed word embeddings 1 from pre-trained GloVe (Pennington et al., 2014) word vectors. Following (Yu et al., 2018), we obtain the character-level embedding of each word using a CNN. Following (Seo et al., 2016), a two-layer highway network (Srivastava et al., 2015) is used on top. As a result, we obtain the final embeddings of the tokens in P as a matrix E_p ∈ R^{d×n} and of the tokens in Q as a matrix E_q ∈ R^{d×m}.
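To make this layer concrete, the sketch below shows one way to implement the character-level CNN embedding and the two-layer highway network described above. It is an illustrative PyTorch sketch under assumed shapes and hyperparameters (e.g., char_dim, kernel), not the authors' released code.

import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Character-level embedding of each word via a 1-D convolution and max-pooling."""
    def __init__(self, num_chars: int, char_dim: int = 64, out_dim: int = 128, kernel: int = 5):
        super().__init__()
        self.embed = nn.Embedding(num_chars, char_dim)
        self.conv = nn.Conv1d(char_dim, out_dim, kernel, padding=kernel // 2)

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, num_words, chars_per_word) integer ids
        b, w, c = char_ids.shape
        x = self.embed(char_ids).view(b * w, c, -1).transpose(1, 2)   # (b*w, char_dim, c)
        x = torch.relu(self.conv(x)).max(dim=-1).values               # max-pool over characters
        return x.view(b, w, -1)                                       # (batch, num_words, out_dim)

class Highway(nn.Module):
    """Two-layer highway network applied on top of the concatenated word/char embeddings."""
    def __init__(self, dim: int, num_layers: int = 2):
        super().__init__()
        self.transforms = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_layers))
        self.gates = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_layers))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for transform, gate in zip(self.transforms, self.gates):
            g = torch.sigmoid(gate(x))
            x = g * torch.relu(transform(x)) + (1 - g) * x   # gated mix of transformed and original input
        return x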
Contextualized Word Embedding. Since the contextual encoding generated by a pre-trained model plays an important role in machine reading comprehension, we employ ALBERT (Lan et al., 2020) as the contextualized embedding 2, as in SDNet (Zhu et al., 2018). ALBERT generates L layers of hidden states for all BPE (Sennrich et al., 2015) tokens in a sentence/passage, and we employ a weighted sum of these hidden states to obtain the contextualized embedding. In detail, given a word w tokenized into S BPE tokens w = a_1, a_2, ..., a_S, ALBERT generates L hidden states h_s^l for each BPE token, with 1 ≤ l ≤ L and 1 ≤ s ≤ S. The contextual embedding ALBERT_w of word w is then a per-layer weighted sum of the averaged ALBERT embeddings, with weights θ_1, ..., θ_L:

$$\mathrm{ALBERT}_w = \sum_{l=1}^{L} \theta_l \cdot \frac{1}{S}\sum_{s=1}^{S} h_s^l \quad (1)$$
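The per-layer weighted sum in Eq. (1) can be implemented as a small pooling module. The following sketch assumes one word's BPE hidden states are stacked as an (L, S, d) tensor; shapes and names are assumptions, not the authors' code.

import torch
import torch.nn as nn

class WeightedLayerPooling(nn.Module):
    """Eq. (1): average a word's BPE hidden states per layer, then mix layers with learned weights."""
    def __init__(self, num_layers: int):
        super().__init__()
        self.theta = nn.Parameter(torch.ones(num_layers) / num_layers)  # theta_1 ... theta_L

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (L, S, d) -- L layers, S BPE tokens of one word, hidden size d
        per_layer_avg = hidden_states.mean(dim=1)                  # (L, d): average over the word's BPE tokens
        return (self.theta.unsqueeze(-1) * per_layer_avg).sum(0)   # (d,): weighted sum over layers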
BiLSTM. In this component, we use two separate bidirectional RNNs (BiLSTMs (Hochreiter and Schmidhuber, 1997)) to learn contextualized representations of P and Q:

$$h^{P,k}_1, \ldots, h^{P,k}_n = \mathrm{BiLSTM}\big(h^{P,k-1}_1, \ldots, h^{P,k-1}_n\big) \quad (2)$$
$$h^{Q,k}_1, \ldots, h^{Q,k}_m = \mathrm{BiLSTM}\big(h^{Q,k-1}_1, \ldots, h^{Q,k-1}_m\big) \quad (3)$$
where 1 ≤ k ≤ K and K is the number of BiLSTM layers. Thus, we obtain the representations Ĉ_p ∈ R^{d×2n} for the passage and Ĉ_q ∈ R^{d×2m} for the question.
Encoder Block. In this component, we leverage and extend the strategy of (Yu et al., 2018) to construct our own encoder block architecture. As shown in Figure 2, we propose the following basic building block: [convolution-PrimaryCaps-DigitCaps layer ×n + self-attention layer + feed-forward layer] to encode the contextual information of the passages/paragraphs and questions, respectively. In detail, the kernel size is 7, the number of filters is d = 128, and the number of Conv-Pri-Dig layers within a block is 5. Following (Yu et al., 2018), we adopt the multi-head attention mechanism defined in (Vaswani et al., 2017b), which, for each position in the input (the query), computes a weighted sum over all positions (the keys) based on the similarity between query and key as measured by their dot product. Each of these basic operations (Conv-Pri-Dig / self-attention / ffn) is placed inside a residual block as in (Yu et al., 2018): for an input x and a given operation f, the output is f(layernorm(x)) + x, so there is a full identity path from the input to the output of each block. As a result, the final representations are C_p for the passage and C_q for the question.
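The residual layout of the encoder block (Conv-Pri-Dig layers, self-attention, feed-forward, each as f(layernorm(x)) + x) can be sketched as follows. The capsule routing between PrimaryCaps and DigitCaps is reduced here to a depthwise separable convolution stub, so this is only a structural illustration under assumed hyperparameters, not a faithful reimplementation of the capsule layers.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, dim: int = 128, kernel_size: int = 7):
        super().__init__()
        self.depthwise = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2, groups=dim)
        self.pointwise = nn.Conv1d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, seq, dim)
        y = self.pointwise(self.depthwise(x.transpose(1, 2)))
        return y.transpose(1, 2)

class Residual(nn.Module):
    """Wrap a sub-layer as f(layernorm(x)) + x, the pattern used around every operation."""
    def __init__(self, dim: int, fn: nn.Module):
        super().__init__()
        self.norm, self.fn = nn.LayerNorm(dim), fn

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.fn(self.norm(x))

class EncoderBlock(nn.Module):
    def __init__(self, dim: int = 128, num_conv: int = 5, heads: int = 8):
        super().__init__()
        # Stand-in for the Conv-PrimaryCaps-DigitCaps layers (capsule routing omitted in this stub).
        self.convs = nn.ModuleList(Residual(dim, DepthwiseSeparableConv(dim)) for _ in range(num_conv))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_norm = nn.LayerNorm(dim)
        self.ffn = Residual(dim, nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for conv in self.convs:
            x = conv(x)
        h = self.attn_norm(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]   # residual multi-head self-attention
        return self.ffn(x)                                  # residual feed-forward layer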
Interaction-Attention Layer
Adaptive Bidirectional Attention. After multiple layers extract different levels of representation of each word, we apply Adaptive Bidirectional Attention from question to passage and from passage to question over all layers of generated representations, building connections across all levels of representation, as shown in Figure 3. We define the notion of 'history of semantics', which denotes the multi-granularity representations extracted by the model before the current layer. The history-of-semantics vectors are defined as:
$$\mathrm{HOS}_p = [C_w; C_c; E_p; \mathrm{ALBERT}_P; C_p; \hat{C}_p], \qquad \mathrm{HOS}_q = [Q_w; Q_c; E_q; \mathrm{ALBERT}_Q; C_q; \hat{C}_q] \quad (4)$$
where C_w/Q_w and C_c/Q_c are the word/character embedding matrices for the passage and question, respectively.
Adaptive. To connect the multi-level representations across layers, we design the following function:
$$\lambda_p \times \mathrm{HOS}_p^{T} = \widetilde{\mathrm{HOS}}_p^{T}, \qquad \lambda_q \times \mathrm{HOS}_q^{T} = \widetilde{\mathrm{HOS}}_q^{T} \quad (5)$$
where λ is a trainable matrix. Note that, at the beginning of training, in order to retain the original semantics of each layer, we initialize the first column of this matrix to ones and the remaining columns to zeros.
However, since aggregating all of these vectors requires extensive computational resources, we additionally train a set of selection parameters α_1, α_2, ..., α_k that reduce the dimensionality by choosing the three most relevant semantic vectors from HOS_p and HOS_q. The inputs to the bidirectional multi-level attention are thus defined as $\widetilde{\mathrm{HOS}}_P$ for passages and $\widetilde{\mathrm{HOS}}_Q$ for questions.
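One possible reading of the adaptive step in Eq. (5) is a trainable mixing matrix λ applied over the stacked history-of-semantics views. The paper leaves the exact orientation of λ and the selection parameters α underspecified, so the sketch below is a hedged guess: it follows the stated initialization (first column ones, remaining columns zeros), but it should not be read as the authors' exact formulation.

import torch
import torch.nn as nn

class AdaptiveMixer(nn.Module):
    """Mix the stacked history-of-semantics views with a trainable matrix (one reading of Eq. 5)."""
    def __init__(self, num_views: int):
        super().__init__()
        init = torch.zeros(num_views, num_views)
        init[:, 0] = 1.0          # initialization described in the text; its exact semantics are ambiguous
        self.lam = nn.Parameter(init)

    def forward(self, hos: torch.Tensor) -> torch.Tensor:
        # hos: (num_views, seq_len, dim) -- one representation per layer/view of the same sequence.
        return torch.einsum('vw,wld->vld', self.lam, hos)   # re-weighted views, same shape as the input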
Bidirectional Attention.
In this component, our model dynamically computes attention over the embeddings from the previous layers, which are then allowed to flow through to the downstream modeling layer. Like most high-performing models, e.g., (Xiong et al., 2016), (Seo et al., 2016), and (Yu et al., 2018), we construct passage-to-question and question-to-passage attention. The attention function of (Seo et al., 2016) is used to compute similarity scores between the passage and the question. First, we calculate the similarity between each pair of words in the passage and question, yielding a similarity matrix H ∈ R^{n×m}, computed as:
$$H = \mathrm{dropout}\big(f_{\mathrm{attention}}(\widetilde{\mathrm{HOS}}_P, \widetilde{\mathrm{HOS}}_Q)\big) \quad (6)$$
After that, following (Yu et al., 2018), we normalize each row of H with the softmax function to obtain a matrix Ĥ; the passage-to-question attention is then computed as M = Ĥ · $\widetilde{\mathrm{HOS}}_Q^{T}$. Following DCN (Xiong et al., 2017), we also compute the column-normalized matrix H̄ of H with the softmax function, and the question-to-passage attention is S = Ĥ · H̄^T · $\widetilde{\mathrm{HOS}}_P^{T}$. Finally, our model uses a simple concatenation to obtain the final representation, which performs well in our experiments: O = [$\widetilde{\mathrm{HOS}}_P$; M; $\widetilde{\mathrm{HOS}}_P$ ⊙ M; $\widetilde{\mathrm{HOS}}_P$ ⊙ S], where ⊙ denotes element-wise multiplication.
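A BiDAF/QANet-style sketch of this bidirectional attention is given below: a trilinear similarity produces H, row- and column-normalized versions of H give the passage-to-question attention M and the question-to-passage attention S, and the outputs are concatenated into O. The trilinear parameterization of f_attention is an assumption carried over from (Seo et al., 2016), not necessarily the exact function used in ABA-Net.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BiAttention(nn.Module):
    def __init__(self, dim: int, dropout: float = 0.1):
        super().__init__()
        self.w = nn.Linear(3 * dim, 1, bias=False)   # trilinear similarity f(p, q) = w^T [p; q; p*q]
        self.dropout = nn.Dropout(dropout)

    def forward(self, P: torch.Tensor, Q: torch.Tensor) -> torch.Tensor:
        # P: (batch, n, dim) passage views, Q: (batch, m, dim) question views
        n, m = P.size(1), Q.size(1)
        p = P.unsqueeze(2).expand(-1, -1, m, -1)
        q = Q.unsqueeze(1).expand(-1, n, -1, -1)
        H = self.dropout(self.w(torch.cat([p, q, p * q], dim=-1)).squeeze(-1))  # (batch, n, m)
        A = F.softmax(H, dim=-1)                     # row-normalized similarities
        B = F.softmax(H, dim=1)                      # column-normalized similarities
        M = A @ Q                                    # passage-to-question attention
        S = A @ B.transpose(1, 2) @ P                # question-to-passage attention (DCN-style)
        return torch.cat([P, M, P * M, P * S], dim=-1)   # concatenated output O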
Model Encoding Layer
The input to this layer is O, which encodes the question-aware representations of the context words. This layer uses the same building blocks as the interaction layer, except that the number of Conv-Pri-Dig layers within a block is 2 and the total number of blocks is 4. Following (Yu et al., 2018), we share weights between each of the three repetitions of the model encoder.
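Reusing the EncoderBlock class from the earlier sketch, the model encoding layer can be laid out as three weight-shared passes over the stack of blocks, producing B1, B2, and B3 for the output layer. The initial projection from the 4d-wide O back to the model width d is an assumption (the paper does not spell out this detail).

import torch
import torch.nn as nn

class ModelEncoder(nn.Module):
    """Apply the same stack of encoder blocks three times (weights shared across repetitions)."""
    def __init__(self, dim: int = 128, num_blocks: int = 4):
        super().__init__()
        self.project = nn.Linear(4 * dim, dim)   # map O = [P; M; P*M; P*S] back to the model width
        self.blocks = nn.ModuleList(EncoderBlock(dim, num_conv=2) for _ in range(num_blocks))

    def _encode(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = block(x)
        return x

    def forward(self, O: torch.Tensor):
        x = self.project(O)
        B1 = self._encode(x)
        B2 = self._encode(B1)
        B3 = self._encode(B2)
        return B1, B2, B3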
Output Layer
This layer calculates two scores for each word in the passage, corresponding to the probabilities that the answer begins and ends at that word. Like most existing models, the final output layer is task-specific. We adopt the strategy of (Yu et al., 2018) to predict the probability of each position in the passage/paragraph being the start or end of an answer span, if the question is answerable. More concretely, the probability distributions of the starting and ending indices of the answer span are defined as:
$$P^{\mathrm{begin}} = \mathrm{softmax}(W_1[B_1; B_2]) \quad (7)$$
$$P^{\mathrm{end}} = \mathrm{softmax}(W_2[B_2; B_3]) \quad (8)$$
where W_1 and W_2 are two trainable variables and B_1, B_2, B_3 are, respectively, the outputs of the three model encoders, from bottom to top. If the question is unanswerable, the final output is an empty string. In addition, the proposed model can be adapted to other comprehension tasks, e.g., selecting from candidate answers, by changing the output layer accordingly.
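The span predictor in Eqs. (7)-(8) concatenates pairs of model-encoder outputs and projects them to per-position distributions. The sketch below uses log-softmax rather than softmax so that the training objective can later be computed as a negative log-likelihood; the variable names and the log-space choice are assumptions, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanPredictor(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.w1 = nn.Linear(2 * dim, 1)   # scores the start position from [B1; B2]
        self.w2 = nn.Linear(2 * dim, 1)   # scores the end position from [B2; B3]

    def forward(self, B1: torch.Tensor, B2: torch.Tensor, B3: torch.Tensor):
        # Each input: (batch, seq_len, dim); outputs: (batch, seq_len) log-probabilities.
        log_p_begin = F.log_softmax(self.w1(torch.cat([B1, B2], dim=-1)).squeeze(-1), dim=-1)
        log_p_end = F.log_softmax(self.w2(torch.cat([B2, B3], dim=-1)).squeeze(-1), dim=-1)
        return log_p_begin, log_p_end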
Objective. Following (Yu et al., 2018), the objective function is defined as the negative sum of the log-probabilities of the predicted distributions indexed by the true start and end indices, averaged over all training examples:
$$L(\theta) = -\frac{1}{N}\sum_{i}^{N}\Big[\log\big(p^{\mathrm{begin}}_{y_i^1}\big) + \log\big(p^{\mathrm{end}}_{y_i^2}\big)\Big] \quad (9)$$
where y_i^1 and y_i^2 are, respectively, the ground-truth starting and ending positions of example i, and θ contains all the trainable variables.
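With log-probabilities from the span predictor sketched above, Eq. (9) reduces to a standard negative log-likelihood over the gold start and end indices. A minimal sketch under assumed tensor shapes:

import torch
import torch.nn.functional as F

def span_loss(log_p_begin: torch.Tensor, log_p_end: torch.Tensor,
              y_begin: torch.Tensor, y_end: torch.Tensor) -> torch.Tensor:
    # log_p_*: (batch, seq_len) log-probabilities; y_*: (batch,) gold start/end indices.
    return F.nll_loss(log_p_begin, y_begin) + F.nll_loss(log_p_end, y_end)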
Experiments
In this section, we conduct experiments to study the performance of our model. We primarily benchmark our model on the SQuAD 2.0 dataset (Rondeau and Hazen, 2018) and the SQuAD 1.0 dataset (Rajpurkar et al., 2016). We also conduct similar studies on the COQA dataset (Reddy et al., 2019), a conversational question answering dataset, to show that the effectiveness and efficiency of our model are general. Detailed introductions of these datasets and the experimental settings can be found in Appendix A.
Results
F1 and Exact Match (EM) are the two accuracy metrics used to evaluate MRC model performance. F1 measures the token overlap between the predicted answer and the ground truth; the exact match score is 1 if the prediction is exactly the same as the ground truth and 0 otherwise. We show the results in comparison with other single models in Table 1. The accuracy (EM/F1) of our model is on par with the state-of-the-art models. In detail, ABA-Net achieves state-of-the-art results on SQuAD 1.0 and competitive performance on SQuAD 2.0 and COQA compared with other baseline models. More concretely, ABA-Net improves the EM/F1 score to 90.4/95.8 compared with previous state-of-the-art models on SQuAD 1.0, and it also achieves an EM score of 89.6 and an F1 score of 91.7 on SQuAD 2.0. Furthermore, our model achieves an F1 (average) score of 88.5 on COQA, which is competitive with the best documented result.

Table 3: A comparison study of Adaptive Bidirectional Attention on SQuAD 2.0 (EM/F1).

Model                       Base model   + Adaptive Bidirectional Attention
BiDAF++                     65.6/68.7    67.2/70.0
QANet (Yu et al., 2018)     65.4/67.2    66.9/69.1
SAN (Liu et al., 2018)      68.6/71.4    70.1/73.0
SDNet                       76.7/79.8    78.5/81.1
SGNet                       85.1/87.9    86.7/89.2
Ablation Study And Analysis
Ablation Study. We conduct ablation studies on the components of the proposed model; the scores on the test set are shown in Table 2. As can be seen from the table, Adaptive Bidirectional Attention plays a crucial part in our model: both F1 and EM drop dramatically, by almost 2 points, if it is removed. Self-attention in the encoders is also a necessary component, contributing 1.4/1.2, 1.6/1.3, and 1.4 points to the final performance on the three datasets. More significantly, the combination of the capsule network and self-attention is clearly better than the combination of RNNs or CNNs with self-attention.
Analysis.
We interpret these phenomena as follows. Adaptive Bidirectional Attention is able to distinguish and reasonably utilize the multi-granularity representations of each word. The capsule network captures the local structure of the context and questions, while self-attention models global context representations and interactions and focuses on the most important parts of the interaction. Hence, the two are complementary and cannot replace each other. Concretely, capsule networks can learn hierarchical relationships between consecutive layers through parameter-free routing processes, which additionally improves generalization. The use of separable convolutions instead of traditional convolutions also contributes noticeably to performance: replacing the depthwise separable convolutions with conventional convolutions results in slightly worse accuracy.
Discussions & Visualization
Effect of Adaptive Bidirectional Attention. We perform additional experiments to verify the effectiveness of our proposed Adaptive Bidirectional Attention mechanism. Specifically, we add Adaptive Bidirectional Attention to several end-to-end MRC models and compare them with their initial versions (i.e., the base models in Table 3) on SQuAD 2.0. As can be seen from Table 3, after adding Adaptive Bidirectional Attention, the performance of these models improves to varying degrees, which also demonstrates the versatility of this attention mechanism.
Effect of Capsule Network. We perform additional experiments to understand this mechanism, shown in Table 4. We first replace the capsule network with normal convolutions, and the performance of the model decreases heavily on all three benchmark datasets. We attribute this to the fact that the capsule network is better than CNNs at extracting rich text representations. We then replace the capsule network with a shared two-layer BiLSTM, which yields decreases of 0.9/1.1 and 0.6/1.0 EM/F1 and 0.3 F1 in our experiments. Note that, due to the limitations of existing capsule networks, the benefit on the complex multi-turn dialogue reading comprehension dataset (COQA) is smaller than on the other two datasets.
Effect of the total number of stacked blocks. Our model uses stacked encoder blocks to form multi-step interactions. As shown in Figure 4, the performance of our model varies considerably with the number of blocks: ABA-Net achieves the best results on the three datasets when the number of blocks is 4. We interpret this phenomenon as follows: when the number of encoder blocks is less than 4, the local and global text features captured by the model are insufficient and the model easily overfits; conversely, when the number is larger, the model extracts many repeated contextual representations, which hurts performance.
Visualization. To gain insight into how Adaptive Bidirectional Attention works, we draw the attention distributions of the syntax-guided attention of SGNet and of our proposed Adaptive Bidirectional Attention, as shown in Figure 5. The visualization verifies that, benefiting from Adaptive Bidirectional Attention, our model is effective at distinguishing and utilizing the different layers of representation of each word in the context and the query, guiding the downstream layers to collect more relevant semantics for making predictions.
Conclusion
In this paper, we propose a novel end-to-end machine reading comprehension (MRC) model, the Adaptive Bidirectional Attention-Capsule Network (ABA-Net), for exploring and exploiting multi-granularity representations of source sequences. In particular, ABA-Net can adaptively expose source representations of different levels to the predictor. More concretely, our core innovation is a novel attention mechanism called Adaptive Bidirectional Attention, which guides attention learning and reasonably leverages source representations of different levels for question answering by dynamically capturing the connections between all layers of representation of each word. Furthermore, to explore better representations, we incorporate the capsule network and the self-attention module as the building blocks of our encoders. The experimental results show that our model achieves competitive performance compared to the state of the art on three public benchmark datasets. The systematic analysis demonstrates the effectiveness of each component of our model and shows that it learns effective representations for MRC, understands rich context, and answers complex questions. Future work includes extending Adaptive Bidirectional Attention to other question answering tasks.
Figure 1: A visualization example of a question-answer pair for a passage.
Figure 2: An overview of the ABA-Net architecture (left). ABA-Net employs Adaptive Bidirectional Attention to reasonably exploit multi-granularity representations (red lines); specific implementation details are given in Section 3.2.2. We use the same Encoder Block (right) throughout the model, varying only the number of Conv-Pri-Dig and layernorm layers in each block. Note that we use depthwise separable convolutions (Chollet, 2017) instead of traditional ones. Following (Yu et al., 2018), we use a residual connection between each layer of the Encoder Block and share the weights of the passage and question encoders in the embedding layer, as well as of the three modeling encoders.
Figure 3: An overview of Adaptive Bidirectional Attention. ⊙ denotes element-wise multiplication.
Figure 4: F1 accuracy for various numbers of stacked encoder blocks on the three benchmark datasets.
Figure 5: Visualization of syntax-guided attention (left) and Adaptive Bidirectional Attention (right). With syntax-guided attention, "who" focuses on "Michael" and "23", and "23" is highly similar to "24", which can mislead the model into a wrong judgment. In contrast, this does not happen with our Adaptive Bidirectional Attention.
Table 1: The performances of single models on different datasets (partial).

Model                               SQuAD 1.0 (EM/F1)   SQuAD 2.0 (EM/F1)   COQA (F1)
SAN (Liu et al., 2018)              76.8/84.2           68.6/71.4           -
BiDAF++                             77.6/84.9           65.6/68.7           69.5
QANet (Yu et al., 2018)             80.9/87.8           65.4/67.2           -
BERT-large (Devlin et al., 2018)    85.

Table 2: An ablation study of our model. The reported results are obtained on the test set and we use the original ABA-Net as the base model. "Diff to B-M" abbreviates Difference to Base Model; "sep conv" abbreviates depthwise separable convolutions.

                                      SQuAD 1.0 (EM/F1)  Diff to B-M   SQuAD 2.0 (EM/F1)  Diff to B-M   COQA (F1 avg)  Diff to B-M
Base ABA-Net                          90.2/95.4          -             89.6/91.7          -             88.4           -
- self-attention in encoders          88.8/94.2          -1.4/-1.2     88.0/90.4          -1.6/-1.3     87.0           -1.4
- Capsule Network in encoders         88.7/93.7          -1.5/-1.7     88.5/90.4          -1.1/-1.3     87.3           -1.1
- Adaptive Bidirectional Attention    88.6/93.3          -1.6/-2.1     87.8/89.5          -1.8/-2.2     85.9           -2.5
replace sep conv with normal conv     89.7/95.0          -0.5/-0.5     89.2/91.2          -0.4/-0.5     87.9           -0.3
Table 4: A comparison study of the Capsule Network.
1 We map the symbolic/surface features of P and Q into neural space via word embeddings, 16-dim part-of-speech (POS) tagging embeddings, 8-dim named-entity embeddings, and 4-dim hard-rule features.
2 Instead of adding a scoring layer to pre-trained models, as proposed in many question answering models, we use the transformer output from ALBERT-large as the contextualized embedding in our encoding layer.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.
Nuo Chen, Fenglin Liu, Chenyu You, Peilin Zhou, and Yuexian Zou. 2021a. Adaptive bi-directional attention: Exploring multi-granularity representations for machine reading comprehension. In ICASSP, pages 7833-7837. IEEE.
Nuo Chen, Linjun Shou, Min Gong, Jian Pei, and Daxin Jiang. 2021b. From good to best: Two-stage training for cross-lingual machine reading comprehension.
Nuo Chen, Chenyu You, and Yuexian Zou. 2021c. Self-supervised dialogue learning for spoken conversational question answering. CoRR, abs/2106.02182.
Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, and Daxin Jiang. 2022. Bridging the gap between language models and cross-lingual sequence labeling. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1909-1923, Seattle, United States. Association for Computational Linguistics.
François Chollet. 2017. Xception: Deep learning with depthwise separable convolutions. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, pages 1800-1807. IEEE Computer Society.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
Yanru Dong, Peiyu Liu, Zhenfang Zhu, Qicai Wang, and Qiuyue Zhang. 2020a. A fusion model-based label embedding and self-interaction attention for text classification. IEEE Access, 8:30548-30559.
Yongfeng Dong, Yu Fu, Liqin Wang, Yunliang Chen, Yao Dong, and Jianxin Li. 2020b. A sentiment analysis method of capsule network based on BiLSTM. IEEE Access, 8:37014-37020.
Lianli Gao, Xuanhan Wang, Jingkuan Song, and Yang Liu. 2020. Fused GRU with semantic-temporal attention for video captioning. Neurocomputing, 395:222-228.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1693-1701.
Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. WikiReading: A novel large-scale language understanding task over Wikipedia. arXiv preprint arXiv:1608.03542.
Geoffrey E. Hinton, Alex Krizhevsky, and Sida D. Wang. 2011. Transforming auto-encoders. In Artificial Neural Networks and Machine Learning - ICANN 2011, Lecture Notes in Computer Science, volume 6791, pages 44-51. Springer.
Geoffrey E. Hinton, Sara Sabour, and Nicholas Frosst. 2018. Matrix capsules with EM routing. In 6th International Conference on Learning Representations, ICLR 2018. OpenReview.net.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Hsin-Yuan Huang, Chenguang Zhu, Yelong Shen, and Weizhu Chen. 2018. FusionNet: Fusing via fully-aware attention with application to machine comprehension. In 6th International Conference on Learning Representations, ICLR 2018. OpenReview.net.
Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020. OpenReview.net.
Yann LeCun, Fu Jie Huang, and Léon Bottou. 2004. Learning methods for generic object recognition with invariance to pose and lighting. In 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), pages 97-104. IEEE Computer Society.
Hyeon-gu Lee and Harksoo Kim. 2020. GF-Net: Improving machine reading comprehension with feature gates. Pattern Recognition Letters, 129:8-15.
Jiaqi Li, Ming Liu, Min-Yen Kan, Zihao Zheng, Zekun Wang, Wenqiang Lei, Ting Liu, and Bing Qin. 2020. Molweni: A challenge multiparty dialogues-based machine reading comprehension dataset with discourse structure. CoRR, abs/2004.05080.
Xiaodong Liu, Wei Li, Yuwei Fang, Aerin Kim, Kevin Duh, and Jianfeng Gao. 2018. Stochastic answer networks for SQuAD 2.0. arXiv preprint arXiv:1809.09194.
Shanshan Liu, Sheng Zhang, Xin Zhang, and Hui Wang. 2019a. R-Trans: RNN transformer network for Chinese machine reading comprehension. IEEE Access, 7:27736-27745.
Shanshan Liu, Xin Zhang, Sheng Zhang, Hui Wang, and Weiming Zhang. 2019b. Neural machine reading comprehension: Methods and trends. Applied Sciences, 9(18):3698.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019c. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019d. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 1532-1543. ACL.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), pages 2383-2392. The Association for Computational Linguistics.
Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266.
Marc-Antoine Rondeau and Timothy J. Hazen. 2018. Systematic error analysis of the Stanford question answering dataset. In Proceedings of the Workshop on Machine Reading for Question Answering@ACL 2018, pages 12-20. Association for Computational Linguistics.
Sara Sabour, Nicholas Frosst, and Geoffrey E. Hinton. 2017. Dynamic routing between capsules. CoRR, abs/1710.09829.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. CoRR, abs/1508.07909.
Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. CoRR, abs/1611.01603.
Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang. 2018. Bi-directional block self-attention for fast and memory-efficient sequence modeling. In 6th International Conference on Learning Representations, ICLR 2018. OpenReview.net.
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015. Highway networks. CoRR, abs/1505.00387.
Hongxuan Tang, Jing Liu, Hongyu Li, Yu Hong, Hua Wu, and Haifeng Wang. 2020. DuReader-robust: A Chinese dataset towards evaluating the robustness of machine reading comprehension models. CoRR, abs/2004.11142.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017a. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998-6008.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017b. Attention is all you need. CoRR, abs/1706.03762.
Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 189-198.
Yujia Wu, Jing Li, Jia Wu, and Jun Chang. 2020. Siamese capsule networks with global and local features for text classification. Neurocomputing, 390:88-98.
Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. CoRR, abs/1611.01604.
Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. In 5th International Conference on Learning Representations, ICLR 2017. OpenReview.net.
Min Yang, Wei Zhao, Lei Chen, Qiang Qu, Zhou Zhao, and Ying Shen. 2019a. Investigating the transferring capability of capsule networks for text classification. Neural Networks, 118:247-261.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019b. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, pages 5754-5764.
Chenyu You, Nuo Chen, Fenglin Liu, Dongchao Yang, and Yuexian Zou. 2020a. Towards data distillation for end-to-end spoken conversational question answering. CoRR, abs/2010.08923.
Chenyu You, Nuo Chen, and Yuexian Zou. 2020b. Contextualized attention-based knowledge transfer for spoken conversational question answering. arXiv preprint arXiv:2010.11066.
Chenyu You, Nuo Chen, and Yuexian Zou. 2021a. MRD-Net: Multi-modal residual knowledge distillation for spoken question answering. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, pages 3985-3991.
Chenyu You, Nuo Chen, and Yuexian Zou. 2021b. Self-supervised contrastive cross-modality representation learning for spoken question answering. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 28-39.
Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018. QANet: Combining local convolution with global self-attention for reading comprehension. CoRR, abs/1804.09541.
Shuailiang Zhang, Hai Zhao, Yuwei Wu, Zhuosheng Zhang, Xi Zhou, and Xiang Zhou. 2019a. DCMN+: Dual co-matching network for multi-choice reading comprehension. CoRR, abs/1908.11511.
Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, and Xiang Zhou. 2019b. Semantics-aware BERT for language understanding. arXiv preprint arXiv:1909.02209.
Zhuosheng Zhang, Yuwei Wu, Junru Zhou, Sufeng Duan, Hai Zhao, and Rui Wang. 2020a. SG-Net: Syntax-guided machine reading comprehension. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, pages 9636-9643. AAAI Press.
Zhuosheng Zhang, Junjie Yang, and Hai Zhao. 2020b. Retrospective reader for machine reading comprehension. arXiv preprint arXiv:2001.09694.
Xuanyu Zhang. 2019. MC2: Multi-perspective convolutional cube for conversational machine reading comprehension. In ACL.
Wei Zhao, Haiyun Peng, Steffen Eger, Erik Cambria, and Min Yang. 2019. Towards scalable and reliable capsule networks for challenging NLP applications. CoRR, abs/1906.02829.
Bo Zheng, Haoyang Wen, Yaobo Liang, Nan Duan, Wanxiang Che, Daxin Jiang, Ming Zhou, and Ting Liu. 2020. Document modeling with graph attention networks for multi-grained machine reading comprehension. CoRR, abs/2005.05806.
Chenguang Zhu, Michael Zeng, and Xuedong Huang. 2018. SDNet: Contextualized attention-based deep network for conversational question answering. CoRR, abs/1812.03593.
Pengfei Zhu, Hai Zhao, and Xiaoguang Li. 2020. Dual multi-head co-attention for multi-choice reading comprehension. CoRR, abs/2001.09415.
| [] |
[
"Augmenting BERT-style Models with Predictive Coding to Improve Discourse-level Representations",
"Augmenting BERT-style Models with Predictive Coding to Improve Discourse-level Representations"
] | [
"Vladimir Araujo \nPontificia Universidad Católica de Chile\n2 KULeuven\n",
"Andrés Villa afvilla@uc.cl \nPontificia Universidad Católica de Chile\n2 KULeuven\n",
"Marcelo Mendoza mmendoza@inf.utfsm.clsien.moens@kuleuven.be \nUniversidad Técnica Federico Santa María\n\n",
"Marie-Francine Moens ",
"Alvaro Soto asoto@ing.puc.cl \nPontificia Universidad Católica de Chile\n2 KULeuven\n"
] | [
"Pontificia Universidad Católica de Chile\n2 KULeuven",
"Pontificia Universidad Católica de Chile\n2 KULeuven",
"Universidad Técnica Federico Santa María\n",
"Pontificia Universidad Católica de Chile\n2 KULeuven"
] | [
"Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing"
] | Current language models are usually trained using a self-supervised scheme, where the main focus is learning representations at the word or sentence level. However, there has been limited progress in generating useful discourse-level representations. In this work, we propose to use ideas from predictive coding theory to augment BERT-style language models with a mechanism that allows them to learn suitable discourse-level representations. As a result, our proposed approach is able to predict future sentences using explicit top-down connections that operate at the intermediate layers of the network. By experimenting with benchmarks designed to evaluate discourse-related knowledge using pre-trained sentence representations, we demonstrate that our approach improves performance in 6 out of 11 tasks by excelling in discourse relationship detection. | 10.18653/v1/2021.emnlp-main.240 | [
"https://www.aclanthology.org/2021.emnlp-main.240.pdf"
] | 237,485,093 | 2109.04602 | 428673b922394938da458cf7fa4ed9e849ff1260 |
Augmenting BERT-style Models with Predictive Coding to Improve Discourse-level Representations
Association for Computational LinguisticsCopyright Association for Computational LinguisticsNovember 7-11, 2021. 2021
Vladimir Araujo
Pontificia Universidad Católica de Chile
2 KULeuven
Andrés Villa afvilla@uc.cl
Pontificia Universidad Católica de Chile
2 KULeuven
Marcelo Mendoza mmendoza@inf.utfsm.cl
Universidad Técnica Federico Santa María
Marie-Francine Moens sien.moens@kuleuven.be
Alvaro Soto asoto@ing.puc.cl
Pontificia Universidad Católica de Chile
2 KULeuven
Augmenting BERT-style Models with Predictive Coding to Improve Discourse-level Representations
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
the 2021 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsNovember 7-11, 2021. 2021
Current language models are usually trained using a self-supervised scheme, where the main focus is learning representations at the word or sentence level. However, there has been limited progress in generating useful discourse-level representations. In this work, we propose to use ideas from predictive coding theory to augment BERT-style language models with a mechanism that allows them to learn suitable discourse-level representations. As a result, our proposed approach is able to predict future sentences using explicit top-down connections that operate at the intermediate layers of the network. By experimenting with benchmarks designed to evaluate discourse-related knowledge using pre-trained sentence representations, we demonstrate that our approach improves performance in 6 out of 11 tasks by excelling in discourse relationship detection.
Introduction
Pre-trained language models are among the leading methods to learn useful representations for textual data. Several pre-training objectives have been proposed in recent years, such as causal language modeling (Radford et al., 2018, 2019), masked language modeling (Devlin et al., 2019), and permutation language modeling (Yang et al., 2019). However, these approaches do not produce suitable representations at the discourse level (Huber et al., 2020).
Simultaneously, neuroscience studies have suggested that predictive coding (PC) plays an essential role in language development in humans (Ylinen et al., 2016; Zettersten, 2019). PC postulates that the brain is continually making predictions of incoming sensory stimuli (Rao and Ballard, 1999; Friston, 2005; Clark, 2013; Hohwy, 2013), with word prediction being the main mechanism (Berkum et al., 2005; Kuperberg and Jaeger, 2015). However, recent studies speculate that the predictive process could occur within and across utterances, fostering discourse comprehension (Kandylaki et al., 2016; Pickering and Gambi, 2018).
In this work, we propose to extend BERT-type models with recursive bottom-up and top-down computation based on PC theory. Specifically, we incorporate top-down connections that, according to PC, convey predictions from upper to lower layers, which are contrasted with bottom-up representations to generate an error signal that is used to guide the optimization of the model. Using this approach, we attempt to build feature representations that capture discourse-level relationships by continually predicting future sentences in a latent space. We evaluate our approach on DiscoEval (Chen et al., 2019) and SciDTB for discourse evaluation (Huber et al., 2020) to assess whether the embeddings produced by our model capture discourse properties of sentences without finetuning. Our model achieves competitive performance compared to baselines, especially in tasks that require to discover discourse-level relations.
Related Work

BERT for Sentence Representation

Pre-trained self-supervised language models have become popular in recent years. BERT (Devlin et al., 2019) adopts a transformer encoder using a masked language modeling (MLM) objective for word representation. It also proposes an additional loss called next-sentence prediction (NSP) to train a model that understands sentence relationships. On the other hand, ALBERT (Lan et al., 2020) proposes a loss based primarily on coherence called sentence-order prediction (SOP).
SBERT (Reimers and Gurevych, 2019) uses a siamese structure to obtain semantically meaningful sentence embeddings, focusing on textual similarity tasks. ConveRT (Henderson et al., 2020) uses a dual-encoder to improve sentence embeddings for response selection tasks. These models focus on obtaining better representations for specialized sentence-pair tasks, so they are not comparable with ours, which is intended to be general-purpose.
More recently, SLM (Lee et al., 2020) proposes a sentence unshuffling approach for a fine understanding of the relations among the sentences at the discourse level. CONPONO (Iter et al., 2020) considers a discourse-level objective to predict the surrounding sentences given an anchor text. This work is related to our approach; the key difference is that our model predicts future sentences sequentially using a top-down pathway. We consider CONPONO as our main baseline.
Predictive Coding and Deep Learning
Recent work in computer vision takes inspiration from PC theory to build models for accurate (Han et al., 2018) and robust (Huang et al., 2020) image classification. PredNet (Lotter et al., 2017) proposes a network capable of predicting future frames in a video sequence by making local predictions at each level using top-down connections. CPC (Oord et al., 2018) is an unsupervised learning approach to extract useful representations by predicting text in a latent space. Our method takes inspiration from these models, considering top-down connections and predictive processing in a latent space.
Proposed Method
Model Details
Our model consists of a BERT-style model as a sentence encoder (ALBERT and BERT are used in this work) and a GRU model (Cho et al., 2014) that predicts next sentences (see Figure 1). Our intuition is that by giving the model the ability to predict future sentences using a top-down pathway, it will learn better relationships between sentences, thus improving sentence-level representations of each layer for downstream tasks.
The input is a sequence $s_1, s_2, \ldots, s_n$ of sentences extracted from a paragraph. We encode sentence $s_t$ with encoder $g_{\text{enc}}$, which generates output $z^l_t$ at time step $t$ and layer $l$ ($l$ ranges from 1 to $L$). Note that the vector $z^l_t$ is obtained from the special token [CLS], which is commonly used as the sentence representation. Next, an autoregressive model $g_{\text{ar}}$ produces a context vector $c^l_t$ as a function of $z^l_t$, the context vector of the upper layer $c^{l+1}_t$, and that of the previous step $c^l_{t-1}$.
$$z^l_t = g_{\text{enc}}(s_t), \quad c^l_t = g_{\text{ar}}(z^l_t, c^{l+1}_t, c^l_{t-1}) \tag{1}$$
[Figure 1: Example of predicting one future sentence. Given an input sentence $s_t$ at time step $t$, the corresponding representation $z^l_t$ is calculated at layer $l$. Then a context vector $c^l_t$ is computed via a top-down pathway (left). Afterwards, a future sentence $\hat{z}^l_{t+1}$ is predicted and compared to the actual representation $z_{t+1}$ (right).]

Then we introduce a predictive function $f(\cdot)$ to predict a future sentence. In other words, $f(\cdot)$ takes as input the context representation $c^l_t$ from time step $t$ and layer $l$, and predicts the latent representation $z^l_{t+1}$ at time step $t+1$, i.e.:
$$\hat{z}^l_{t+1} = f(c^l_t) \tag{2}$$
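To make the top-down computation of Eqs. (1)-(2) concrete, the following minimal sketch shows one prediction step for all layers. It is our own illustration rather than the released implementation: the way $z^l_t$ and $c^{l+1}_t$ are combined (simple concatenation), the linear form of $f(\cdot)$, and the sizes are assumptions.

```python
import torch
import torch.nn as nn

hidden_size = 768          # assumed encoder width
num_layers = 12            # assumed number of encoder layers

# g_ar: one GRU cell shared across layers (parameter sharing, as described later)
g_ar = nn.GRUCell(input_size=2 * hidden_size, hidden_size=hidden_size)
# f(.): a simple linear predictor from context to the next sentence representation
f_pred = nn.Linear(hidden_size, hidden_size)

def top_down_step(z_t, c_prev):
    """One time step of Eqs. (1)-(2).
    z_t:    list of per-layer [CLS] vectors z^l_t, each of shape (batch, hidden)
    c_prev: list of per-layer context vectors c^l_{t-1}
    Returns the new contexts c^l_t and predictions z_hat^l_{t+1}."""
    c_t, z_hat_next = [None] * num_layers, [None] * num_layers
    upper = torch.zeros_like(z_t[0])          # no layer above the top layer (assumption)
    for l in reversed(range(num_layers)):     # top-down pathway: layer L .. 1
        inp = torch.cat([z_t[l], upper], dim=-1)
        c_t[l] = g_ar(inp, c_prev[l])         # c^l_t = g_ar(z^l_t, c^{l+1}_t, c^l_{t-1})
        z_hat_next[l] = f_pred(c_t[l])        # z_hat^l_{t+1} = f(c^l_t)
        upper = c_t[l]
    return c_t, z_hat_next
```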
In the spirit of Seq2Seq (Sutskever et al., 2014), representations are predicted sequentially, which differs from the CONPONO model that predicts k future sentences with a unique context vector.
Loss Function
We rely on the InfoNCE loss proposed for the CPC model (Oord et al., 2018). This constructs a binary task in which the goal is to classify one real sample among many noise samples. InfoNCE encourages the predicted representation $\hat{z}$ to be close to the ground truth $z$.

In the forward pass, the ground-truth representation $z$ and the predicted representation $\hat{z}$ are computed at each layer of the model. We denote the corresponding feature vectors as $z^j_i$ and $\hat{z}^j_i$, where $i$ denotes the temporal index and $j$ the layer index. A dot product computes the similarity between the predicted and ground-truth pair. Then, we optimize a cross-entropy loss that distinguishes the positive pair out of all other negative pairs:

$$\mathcal{L}_{nsm} = -\sum_{i,j} \log \frac{\exp(\hat{z}^j_i \cdot z^j_i)}{\sum_m \exp(\hat{z}^j_i \cdot z^j_m)} \tag{3}$$
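A compact sketch of Eq. (3) with in-batch negatives for one (time step, layer) pair; this is an illustration of the loss, not the authors' code.

```python
import torch
import torch.nn.functional as F

def info_nce(z_hat, z):
    """Eq. (3) with in-batch negatives.
    z_hat, z: (batch, hidden) predicted and ground-truth sentence vectors;
    row i of z is the positive for row i of z_hat, every other row is a negative."""
    logits = z_hat @ z.t()                           # dot-product similarities
    targets = torch.arange(z.size(0), device=z.device)
    return F.cross_entropy(logits, targets)          # -log softmax of the positive pair
```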
There is only one positive pair $(\hat{z}^j_i, z^j_i)$ for a predicted sentence $\hat{z}^j_i$, namely the features at the same time step and the same layer. The remaining pairs $(\hat{z}^j_i, z^j_m)$ are negative pairs, where $(i, j) \neq (m, j)$. In practice, we draw negative samples from the batch. This is a simple method, and a more complex generation of negative samples could improve results. Our loss function, which we refer to as next-sentence modeling ($\mathcal{L}_{nsm}$), is used in conjunction with the BERT masked language model loss ($\mathcal{L}_{mlm}$). Accordingly, we train to minimize:

$$\mathcal{L} = \mathcal{L}_{nsm} + \mathcal{L}_{mlm} \tag{4}$$
Pre-training and Implementation Details
We extend the ALBERT and BERT models, obtaining PredALBERT and PredBERT as a result. As mentioned above, our models are fed with a set of contiguous sentences $s_n$ that are processed one at a time. Note that the length of the conventional BERT input is 512 tokens. However, it is unlikely that a sentence will have that many tokens. We join 3 contiguous sentences to create a long sequence. Longer sequences are truncated, and shorter sequences are padded. We use one overlapping sentence between contiguous sentence groups. For instance, given a paragraph $s_1, s_2, \ldots, s_9$, the 1st sequence is $s_1, s_2, s_3$, the 2nd sequence is $s_3, s_4, s_5$, and so on. Our early experiments show that this setting improves the model's predictive ability on the validation set. We hypothesize that the model can predict up to 3 sentences by using information from the overlapping sentences.
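The grouping scheme can be sketched as follows (an illustrative helper, not the authors' code; truncation and padding of the token sequences are omitted):

```python
def make_sequences(sentences, group_size=3):
    """Group contiguous sentences with one overlapping sentence between
    consecutive groups, e.g. s1..s9 -> (s1,s2,s3), (s3,s4,s5), (s5,s6,s7), (s7,s8,s9)."""
    step = group_size - 1
    return [sentences[i:i + group_size]
            for i in range(0, len(sentences) - group_size + 1, step)]

# Example: a paragraph of 9 sentences
print(make_sequences([f"s{i}" for i in range(1, 10)]))
# -> [['s1', 's2', 's3'], ['s3', 's4', 's5'], ['s5', 's6', 's7'], ['s7', 's8', 's9']]
```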
We pre-train our models with the predictive mechanism set to predict the next 2 future sentences (k = 2). At time 1, our model represents sequence 1, then this vector feeds the top-down flow (GRU) generating a context representation in each layer that is used to predict sequence 2. Then, the model represents sequence 2 to contrast it with the predicted one. This is repeated one more time to reach k = 2 predicted future sequences. For a fair comparison, we train using the BookCorpus (Zhu et al., 2015) and Wikipedia datasets, as well as the BERT, ALBERT, and CONPONO models.
Note that top-down connections are only available during pre-training. At evaluation time, we discard the top-down connections keeping only the encoder, thus obtaining a model equivalent to BERT or ALBERT in terms of the parameters. Table 1 shows the number of parameters in our models. We used the Huggingface library (Wolf et al., 2020) to implement our models. We initialize the encoder model with BERT or ALBERT weights depending on the version. The autoregressive model was initialized with random weights. For model efficiency in both versions, we use parameter-sharing across layers in the autoregressive model. We trained the models for 1M steps with batch size 8. We use Adam optimizer with weight decay and learning rate of 5e-5. For the masked language modeling, we consider dynamic masking, where the masking pattern is generated every time we feed a sequence to the model. Unlike BERT, we mask 10% of all tokens in each sequence at random.
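As a rough illustration of the dynamic masking described above, the sketch below re-samples a fresh masking pattern every time a sequence is fed to the model. The 10% rate comes from the text; masking all selected positions with [MASK] and skipping special tokens are our assumptions.

```python
import torch

def dynamic_mask(input_ids, mask_token_id, special_ids, mlm_prob=0.10):
    """Sample a fresh 10% masking pattern for each pass (dynamic masking).
    Returns masked inputs and MLM labels (-100 = ignored by the loss)."""
    labels = input_ids.clone()
    probs = torch.full(input_ids.shape, mlm_prob)
    for sid in special_ids:                  # never mask [CLS]/[SEP]/[PAD]
        probs[input_ids == sid] = 0.0
    masked = torch.bernoulli(probs).bool()
    labels[~masked] = -100                   # only masked positions contribute to the loss
    input_ids = input_ids.clone()
    input_ids[masked] = mask_token_id
    return input_ids, labels
```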
Experiments
Datasets
Our focus is to evaluate if the discourse properties of sentences are captured by our model without finetuning. DiscoEval (Chen et al., 2019) and SciDTB-DE (Huber et al., 2020) datasets include probing tasks designed for discourse evaluation, thus letting us know what discourse-related knowledge our model is capturing effectively.
DiscoEval: Suite of tasks to evaluate discourserelated knowledge in sentence representations. It includes 7 tasks: Sentence position (SP), Binary sentence ordering (BSO), Discourse coherence (DC), Sentence section prediction (SSP), Penn discourse tree bank (PDTB-E/I), and Rhetorical structure theory (RST). SP, BSO, DC, and SSP assess discourse coherence with binary classification, while PDTB and RST assess discourse relations between sentences through multi-class classification.
SciDTB-DE: Set of tasks designed to determine whether an encoder captures discourse properties from scientific texts. It considers 4 tasks: Swapped units detection (Swapped), Scrambled sentence detection (Scrambled), Relation detection (BinRel), and Relation semantics detection (SemRel). Both Swapped and Scrambled were designed for clause coherence verification, while BinRel and SemRel target discourse relationship detection.

Experimental Setup

Baselines: Following Chen et al. (2019) and Huber et al. (2020), we include the results of BERT Base (Devlin et al., 2019). We also evaluate CONPONO (Iter et al., 2020), which is the model most closely related to our approach. Because these models have more parameters than PredBERT, we also include ALBERT (Lan et al., 2020) Base and Large, which are directly comparable to our model. For a fair and consistent comparison, we rerun all baseline evaluations. We use the pre-trained Huggingface models (Wolf et al., 2020) for BERT and ALBERT. In the case of CONPONO, we use a version pre-trained to predict the 2 next surrounding sentences (https://bit.ly/3mn3LQl).

Evaluation: In the case of DiscoEval, we use the original code provided by Chen et al. (2019). We observe that this configuration leads to CONPONO results that differ from those reported in the original paper. On the other hand, following Huber et al. (2020), we use the SentEval (Conneau and Kiela, 2018) toolkit for SciDTB-DE evaluation. In both cases, the process involves loading a pre-trained model with frozen weights and training a logistic regression on top of the sentence embeddings. To train, we use the average of the sentence representations ([CLS]) from all the layers.

Results

[Table 2: Accuracy results in the DiscoEval and SciDTB-DE datasets. We carry out the evaluation 10 times with different seeds and report the average across the trials. B and L indicate the base and large versions, respectively.]

Table 2 shows the results of our models. We observe improvements in discourse relation detection (PDTB, RST, BinRel, SemRel) and discourse coherence (DC) tasks compared to the best baseline (CONPONO). Across these tasks, PredALBERT-L outperforms it by ∼1.34 points on average, while PredBERT-B does so by ∼0.94. PredALBERT-B achieves competitive performance but does not outperform CONPONO. However, if we compare our models with their direct baselines (ALBERT-B/L, BERT-B), the increase is greater: PredALBERT-B by ∼3.92, PredALBERT-L by ∼7.43, and PredBERT-B by ∼1.46 points on average. The Stuart-Maxwell tests demonstrated a significant difference between our best model PredALBERT-L and ALBERT-L (p = 0.009) or CONPONO (p = 0.05). We also highlight that PredALBERT-B/L achieve competitive performance with fewer parameters than BERT and CONPONO. The decreased performance of our models in the SP, BSO, SSP, Swap, and Scram tasks is due to the fact that these tasks are closely related to the baselines' optimization objectives, which consist of sentence order prediction (ALBERT), topic prediction (BERT), or a combination of them (CONPONO). In contrast, our approach uses a next-sentence prediction task in a generative way that encourages the capture of discourse relationships, improving its performance on the PDTB, RST, BinRel, and SemRel tasks.
Ablation Study
In order to verify the influence of the PC mechanism on the pre-training result, we carry out ablation experiments. We use our PredALBERT-B as the Default model, which includes top-down connections and recurrence from layer 12 to layer 1. Ablations involve removing top-down connections and the recurrence of certain layers. Table 3 shows performance across all tasks for each benchmark.

[Table 3: Results of ablation experiments with PredALBERT-B as the Default model.]
The first experiment uses the PC mechanism on Half the layers, i.e., the GRU and predictive layer are present from layer 12 to layer 6. This variation exceeds the Default model by ∼0.03 in DiscoEval and ∼0.14 in SciDTB-DE. The second experiment uses the PC mechanism only on the Last layer of the transformer. It means that the combination of the GRU and prediction layer is only present in layer 12. This reduces the performance by ∼0.91 in DiscoEval and ∼2.41 in SciDTB-DE.
Also, we conducted an additional experiment where we removed the top-down connections (w/o TDC) from the Default model. This is equivalent to modifying Equation 1 to $g_{\text{ar}}(z^l_t, c^l_{t-1})$. We found that this ablation severely degrades the performance of the Default model, by ∼2.89 in DiscoEval and ∼4.43 in SciDTB-DE.
Our findings indicate that the top-down pathway is beneficial for improving the discourse representations of BERT-type models. However, it is not clear in which layers it is crucial to have the PC mechanism. We hypothesize that this is related to the fact that BERT-style models encode syntactic and semantic features in different layers (Jawahar et al., 2019; Aspillaga et al., 2021), so a specialized PC mechanism for syntax or semantics would be desirable. We leave this study for future work.
What Does The Model Learn?
Because our model excels at detecting discourse relations, in this section we explore whether the resulting vectors actually represent the role of a sentence in its discursive context. To illustrate what PredBERT learns, we follow the methodology proposed by Lee et al. (2020). We use labeled sentences with discourse relations as queries to retrieve the top 3 most similar sentences from an unlabeled corpus using cosine similarity. We obtained the queries from the MIT Discourse Relations Annotation Guide (https://bit.ly/3z45IG2) and the unlabeled sentences from the Gutenberg dataset (Lahiri, 2014). We compute the representations as mentioned in Section 4.2. This process allowed us to verify that similar vectors share the same or equivalent discourse relations.
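A minimal sketch of this retrieval step (our own illustration): given a query embedding and pre-computed corpus embeddings, it returns the top 3 sentences by cosine similarity.

```python
import numpy as np

def top_k_neighbors(query_vec, corpus_vecs, corpus_sentences, k=3):
    """Return the k corpus sentences whose embeddings have the highest
    cosine similarity with the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    sims = c @ q
    best = np.argsort(-sims)[:k]
    return [(corpus_sentences[i], float(sims[i])) for i in best]
```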
Temporal relation: Query = He knows a tasty meal when he eats one.
1. The last five words took Tuppence's fancy mightily, especially after a meagre breakfast and a supper of buns the night before. 2. I know a disinterested man when I see him.
3. He had about ten pounds when I found him.

Sentence 1 has a succession relation due to the use of the word after. Sentence 3 shows a synchrony relation because it uses when, as the query does. Sentence 2 does not have a temporal relation.
Comparison relation: Query = IBM's stock price rose, but the overall market fell.
1. The stock markets of the world gambled upon its chances, and its bonds at one time were high in favor. 2. Tommy's heart beat faster, but his casual pleasantness did not waver. 3. I guess I was just a mite hasty, but I've been feeling bad about this money question. Sentence 1 matched stock and market words but does not contain a comparison relation. Sentences 2 and 3 include a counter-expectation relation similar to the query sentence that uses the word but.
Contingency relation: Query = I refused to pay the cobbler the full $95 because he did poor work.
1. I did the labor of writing one address this year, and got thunder for my reward. 2. I don't believe in a law to prevent a man from getting rich; it would do more harm than good. 3. When I fixed a plan for an election in Arkansas I did it in ignorance that your convention was doing the same work. All sentences contain semantically related words like pay/reward and poor/rich. Sentences 1 and 2 include a cause relation explicit and implicit. This is related to the query that has pragmatic cause relation. Sentence 3 shows a temporal relation.
Conclusions
We introduce an approach based on PC theory, which extends BERT-style models with recursive bottom-up and top-down computation along with a discourse representation objective. Our models achieve competitive results in discourse analysis tasks, excelling in relations detection.
Acknowledgements

This work was supported in part by the Millennium Institute for Foundational Research on Data (IMFD), the European Research Council Advanced Grant 788506 and the TPU Research Cloud (TRC) program. The first author is grateful to Rodrigo Tufiño for helpful technical advice.
References

Carlos Aspillaga, Marcelo Mendoza, and Alvaro Soto. 2021. Inspecting the concept knowledge graph encoded by modern language models. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2984-3000, Online. Association for Computational Linguistics.

Jos J. A. Van Berkum, Colin M. Brown, Pienie Zwitserlood, Valesca Kooijman, and Peter Hagoort. 2005. Anticipating upcoming words in discourse: Evidence from ERPs and reading times. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(3):443-467.

Mingda Chen, Zewei Chu, and Kevin Gimpel. 2019. Evaluation benchmarks and learning criteria for discourse-aware sentence representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 649-662, Hong Kong, China. Association for Computational Linguistics.

Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics.

Andy Clark. 2013. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3):181-204.

Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Karl Friston. 2005. A theory of cortical responses. Philosophical Transactions of the Royal Society B: Biological Sciences, 360(1456):815-836.

Kuan Han, Haiguang Wen, Yizhen Zhang, Di Fu, Eugenio Culurciello, and Zhongming Liu. 2018. Deep predictive coding network with local recurrent processing for object recognition. In Advances in Neural Information Processing Systems, volume 31, pages 9201-9213. Curran Associates, Inc.

Matthew Henderson, Iñigo Casanueva, Nikola Mrkšić, Pei-Hao Su, Tsung-Hsien Wen, and Ivan Vulić. 2020. ConveRT: Efficient and accurate conversational representations from transformers. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2161-2174, Online. Association for Computational Linguistics.

Jakob Hohwy. 2013. The Predictive Mind. Oxford University Press.

Yujia Huang, James Gornet, Sihui Dai, Zhiding Yu, Tan Nguyen, Doris Tsao, and Anima Anandkumar. 2020. Neural networks with recurrent generative feedback. In Advances in Neural Information Processing Systems, volume 33, pages 535-545. Curran Associates, Inc.

Laurine Huber, Chaker Memmadi, Mathilde Dargnat, and Yannick Toussaint. 2020. Do sentence embeddings capture discourse properties of sentences from scientific abstracts? In Proceedings of the First Workshop on Computational Approaches to Discourse, pages 86-95, Online. Association for Computational Linguistics.

Dan Iter, Kelvin Guu, Larry Lansing, and Dan Jurafsky. 2020. Pretraining with contrastive sentence objectives improves discourse performance of language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4859-4870, Online. Association for Computational Linguistics.

Ganesh Jawahar, Benoît Sagot, and Djamé Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.

K. D. Kandylaki, A. Nagels, S. Tune, T. Kircher, R. Wiese, M. Schlesewsky, and I. Bornkessel-Schlesewsky. 2016. Predicting "when" in discourse engages the human dorsal auditory stream: An fMRI study using naturalistic stories. Journal of Neuroscience, 36(48):12180-12191.

Gina R. Kuperberg and T. Florian Jaeger. 2015. What do we mean by prediction in language comprehension? Language, Cognition and Neuroscience, 31(1):32-59.

Shibamouli Lahiri. 2014. Complexity of word collocation networks: A preliminary structural analysis. In Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 96-105, Gothenburg, Sweden. Association for Computational Linguistics.

Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In International Conference on Learning Representations.

Haejun Lee, Drew A. Hudson, Kangwook Lee, and Christopher D. Manning. 2020. SLM: Learning a discourse language representation with sentence unshuffling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1551-1562, Online. Association for Computational Linguistics.

William Lotter, Gabriel Kreiman, and David D. Cox. 2017. Deep predictive coding networks for video prediction and unsupervised learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.

Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.

Martin J. Pickering and Chiara Gambi. 2018. Predicting while comprehending language: A theory and review. Psychological Bulletin, 144(10):1002-1044.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Rajesh P. N. Rao and Dana H. Ballard. 1999. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1):79-87.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, volume 32, pages 5753-5763. Curran Associates, Inc.

Sari Ylinen, Alexis Bosseler, Katja Junttila, and Minna Huotilainen. 2016. Predictive coding accelerates word recognition and learning in the early stages of language development. Developmental Science, 20(6):e12472.

Martin Zettersten. 2019. Learning by predicting: How predictive processing informs language development. In Beatrix Busse and Ruth Moehlig-Falke, editors, Patterns in Language and Linguistics, pages 255-288. De Gruyter.

Y. Zhu, R. Kiros, R. Zemel, R. Salakhutdinov, R. Urtasun, A. Torralba, and S. Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 19-27.
| [] |
[
"An Empirical Exploration of Skip Connections for Sequential Tagging",
"An Empirical Exploration of Skip Connections for Sequential Tagging"
] | [
"Huijia Wu huijia.wu@nlpr.ia.ac.cn \nInstitute of Automation\nNational Laboratory of Pattern Recognition\nCAS\n\nUniversity of Chinese Academy of Sciences\n\n",
"Jiajun Zhang jjzhang@nlpr.ia.ac.cn \nInstitute of Automation\nNational Laboratory of Pattern Recognition\nCAS\n\nUniversity of Chinese Academy of Sciences\n\n",
"Chengqing Zong cqzong@nlpr.ia.ac.cn \nInstitute of Automation\nNational Laboratory of Pattern Recognition\nCAS\n\nCAS Center for Excellence in Brain Science and Intelligence Technology\n\n\nUniversity of Chinese Academy of Sciences\n\n"
] | [
"Institute of Automation\nNational Laboratory of Pattern Recognition\nCAS",
"University of Chinese Academy of Sciences\n",
"Institute of Automation\nNational Laboratory of Pattern Recognition\nCAS",
"University of Chinese Academy of Sciences\n",
"Institute of Automation\nNational Laboratory of Pattern Recognition\nCAS",
"CAS Center for Excellence in Brain Science and Intelligence Technology\n",
"University of Chinese Academy of Sciences\n"
] | [
"Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers"
] | In this paper, we empirically explore the effects of various kinds of skip connections in stacked bidirectional LSTMs for sequential tagging. We investigate three kinds of skip connections connecting to LSTM cells: (a) skip connections to the gates, (b) skip connections to the internal states and (c) skip connections to the cell outputs. We present comprehensive experiments showing that skip connections to cell outputs outperform the remaining two. Furthermore, we observe that using gated identity functions as skip mappings works pretty well. Based on this novel skip connections, we successfully train deep stacked bidirectional LSTM models and obtain state-ofthe-art results on CCG supertagging and comparable results on POS tagging. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/ | null | [
"https://www.aclweb.org/anthology/C16-1020.pdf"
] | 2,043,113 | 1610.03167 | f65262ad02772a5c2c3e28a061d751e51be3c044 |
An Empirical Exploration of Skip Connections for Sequential Tagging
December 11-17 2016
Huijia Wu huijia.wu@nlpr.ia.ac.cn
Institute of Automation
National Laboratory of Pattern Recognition
CAS
University of Chinese Academy of Sciences
Jiajun Zhang jjzhang@nlpr.ia.ac.cn
Institute of Automation
National Laboratory of Pattern Recognition
CAS
University of Chinese Academy of Sciences
Chengqing Zong cqzong@nlpr.ia.ac.cn
Institute of Automation
National Laboratory of Pattern Recognition
CAS
CAS Center for Excellence in Brain Science and Intelligence Technology
University of Chinese Academy of Sciences
An Empirical Exploration of Skip Connections for Sequential Tagging
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
COLING 2016, the 26th International Conference on Computational Linguistics: Technical PapersOsaka, JapanDecember 11-17 2016
In this paper, we empirically explore the effects of various kinds of skip connections in stacked bidirectional LSTMs for sequential tagging. We investigate three kinds of skip connections connecting to LSTM cells: (a) skip connections to the gates, (b) skip connections to the internal states and (c) skip connections to the cell outputs. We present comprehensive experiments showing that skip connections to cell outputs outperform the remaining two. Furthermore, we observe that using gated identity functions as skip mappings works pretty well. Based on this novel skip connections, we successfully train deep stacked bidirectional LSTM models and obtain state-ofthe-art results on CCG supertagging and comparable results on POS tagging. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/
Introduction
In natural language processing, sequential tagging mainly refers to the tasks of assigning discrete labels to each token in a sequence. Typical examples include part-of-speech (POS) tagging and combinatory category grammar (CCG) supertagging. A regular feature of sequential tagging is that the input tokens in a sequence cannot be assumed to be independent since the same token in different contexts can be assigned to different tags. Therefore, the classifier should have memories to remember the contexts to make a correct prediction.
Bidirectional LSTMs (Graves and Schmidhuber, 2005) have become dominant in sequential tagging problems due to their superior performance (Vaswani et al., 2016; Lample et al., 2016). The horizontal hierarchy of LSTMs with bidirectional processing can remember the long-range dependencies without affecting the short-term storage. Although the models have a deep horizontal hierarchy (the depth is the same as the sequence length), the vertical hierarchy is often shallow, which may not be efficient at representing each token. Stacked LSTMs are deep in both directions, but become harder to train due to the feed-forward structure of stacked layers.
Skip connections (or shortcut connections) enable unimpeded information flow by adding direct connections across different layers (Raiko et al., 2012; Graves, 2013; Hermans and Schrauwen, 2013). However, there has been little exploration and analysis of the various kinds of skip connections in stacked LSTMs. There are two issues in handling skip connections in stacked LSTMs: one is where to add the skip connections, the other is what kind of skip connections should be used to pass the information. To answer the first question, we empirically analyze three positions of LSTM blocks to receive the previous layer's output. For the second one, we present an identity mapping to receive the previous layer's outputs. Furthermore, following the gate design of LSTM (Hochreiter and Schmidhuber, 1997; Gers et al., 2000) and highway networks (Srivastava et al., 2015a; Srivastava et al., 2015b), we observe that adding a multiplicative gate to the identity function helps to improve performance.
In this paper, we present a neural architecture for sequential tagging. The input of the network are token representations. We concatenate word embeddings to character embeddings to represent the word and morphemes. A deep stacked bidirectional LSTM with well-designed skip connections is then used to extract the features needed for classification from the inputs. The output layer uses softmax function to output the tag distribution for each token.
Our main contribution is that we empirically evaluated the effects of various kinds of skip connections within stacked LSTMs. We present comprehensive experiments on the supertagging task showing that skip connections to the cell outputs using identity function multiplied with an exclusive gate can help to improve the network performance. Our model is evaluated on two sequential tagging tasks, obtaining state-of-the-art results on CCG supertagging and comparable results on POS tagging.
Related Work
Skip connections have been widely used for training deep neural networks. For recurrent neural networks, Schmidhuber (1992); El Hihi and Bengio (1995) introduced deep RNNs by stacking hidden layers on top of each other. Raiko et al. (2012); Graves (2013); Hermans and Schrauwen (2013) proposed the use of skip connections in stacked RNNs. However, the researchers have paid less attention to the analyzing of various kinds of skip connections, which is our focus in this paper.
The works closely related to ours are Srivastava et al. (2015b), , Kalchbrenner et al. (2015), , Zilly et al. (2016). These works are all based on the design of extra connections between different layers. Srivastava et al. (2015b) and mainly focus on feed-forward neural network, using well-designed skip connections across different layers to make the information pass more easily. The Grid LSTM proposed by Kalchbrenner et al. (2015) extends the one dimensional LSTMs to many dimensional LSTMs, which provides a more general framework to construct deep LSTMs. and propose highway LSTMs by introducing gated direct connections between internal states in adjacent layers and do not use skip connections, while we propose gated skip connections across cell outputs. Zilly et al. (2016) introduce recurrent highway networks (RHN) which use a single recurrent layer to make RNN deep in a vertical direction, in contrast to our stacked models.
Recurrent Neural Networks for Sequential Tagging
Consider a recurrent neural network applied to sequential tagging: given a sequence $x = (x_1, \ldots, x_T)$, the RNN computes the hidden states $h = (h_1, \ldots, h_T)$ and the outputs $y = (y_1, \ldots, y_T)$ by iterating the following equations:

$$h_t = f(x_t, h_{t-1}; \theta_h) \tag{1}$$
$$y_t = g(h_t; \theta_o) \tag{2}$$

where $t \in \{1, \ldots, T\}$ represents the time. $x_t$ represents the input at time $t$, $h_{t-1}$ and $h_t$ are the previous and the current hidden state, respectively. $f$ and $g$ are the transition function and the output function, respectively. $\theta_h$ and $\theta_o$ are network parameters. We use a negative log-likelihood cost to evaluate the performance, which can be written as:

$$C = -\frac{1}{N} \sum_{n=1}^{N} \log y_{t_n} \tag{3}$$

where $t_n \in \mathbb{N}$ is the true target for sample $n$, and $y_{t_n}$ is the $t_n$-th output in the softmax layer given the inputs $x_n$. The core idea of Long Short-Term Memory networks is to replace (1) with the following equation:

$$c_t = f(x_t, h_{t-1}) + c_{t-1} \tag{4}$$

where $c_t$ is the internal state of the memory cell, which is designed to store the information for a much longer time. Besides this, LSTM uses gates to avoid weight update conflicts.
Standard LSTMs process sequences in temporal order, which ignores future context. Bidirectional LSTMs solve this problem by combining both the forward and the backward processing of the input sequences using two separate recurrent hidden layers:

$$\overrightarrow{h}_t = \mathrm{LSTM}(\overrightarrow{x}_t, \overrightarrow{h}_{t-1}, \overrightarrow{c}_{t-1}) \tag{5}$$
$$\overleftarrow{h}_t = \mathrm{LSTM}(\overleftarrow{x}_t, \overleftarrow{h}_{t-1}, \overleftarrow{c}_{t-1}) \tag{6}$$
$$y_t = g(\overrightarrow{h}_t, \overleftarrow{h}_t) \tag{7}$$

where $\mathrm{LSTM}(\cdot)$ is the LSTM computation, and $\overrightarrow{x}_t$ and $\overleftarrow{x}_t$ are the forward and the backward input sequence, respectively. The outputs of the two hidden layers $\overrightarrow{h}_t$ and $\overleftarrow{h}_t$ in a bidirectional LSTM are connected to the output layer.
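For illustration, Eqs. (5)-(7) can be realized with an off-the-shelf bidirectional LSTM; this sketch is not the authors' implementation, and the sizes below are placeholders.

```python
import torch
import torch.nn as nn

emb_dim, hidden, num_tags = 128, 256, 1285   # placeholder sizes

bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
out_proj = nn.Linear(2 * hidden, num_tags)

def tag_scores(token_embeddings):
    """token_embeddings: (batch, T, emb_dim) -> per-token tag distributions."""
    h, _ = bilstm(token_embeddings)            # concatenated [h_fwd; h_bwd], Eqs. (5)-(6)
    return torch.softmax(out_proj(h), dim=-1)  # Eq. (7): y_t = g(h_fwd, h_bwd)
```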
Stacked RNNs are one type of deep RNNs, in which the hidden layers are stacked on top of each other, each feeding up to the layer above:

$$h^l_t = f^l(h^{l-1}_t, h^l_{t-1}) \tag{8}$$

where $h^l_t$ is the $t$-th hidden state of the $l$-th layer.
Various kinds of Skip Connections
Skip connections in simple RNNs are trivial since there is only one position to connect to the hidden units. But for stacked LSTMs, the skip connections need to be carefully treated to train the network successfully. In this section, we analyze and compare various types of skip connections. At first, we give a detailed definition of stacked LSTMs, which can help us to describe skip connections. Then we start our construction of skip connections in stacked LSTMs. At last, we formulate various kinds of skip connections. Stacked LSTMs without skip connections can be defined as:
$$\begin{pmatrix} i^l_t \\ f^l_t \\ o^l_t \\ s^l_t \end{pmatrix} = \begin{pmatrix} \mathrm{sigm} \\ \mathrm{sigm} \\ \mathrm{sigm} \\ \tanh \end{pmatrix} W^l \begin{pmatrix} h^{l-1}_t \\ h^l_{t-1} \end{pmatrix}, \quad c^l_t = f^l_t \odot c^l_{t-1} + i^l_t \odot s^l_t, \quad h^l_t = o^l_t \odot \tanh(c^l_t) \tag{9}$$

The cell is designed to store the previous information $c^l_{t-1}$, which can be reset by a forget gate $f^l_t$. The new cell state is then obtained by adding the result to the current input. The cell outputs $h^l_t$ are computed by multiplying the activated cell state by the output gate $o^l_t$, which learns when to access the memory cell and when to block it. "sigm" and "tanh" are the sigmoid and tanh activation functions, respectively. $W^l \in \mathbb{R}^{4n \times 2n}$ is the weight matrix to be learned.
The hidden units in stacked LSTMs have two forms. One is the hidden units in the same layer, $\{h^l_t, t \in 1, \ldots, T\}$, which are connected through an LSTM. The other is the hidden units at the same time step, $\{h^l_t, l \in 1, \ldots, L\}$, which are connected through a feed-forward network. LSTM can keep the short-term memory for a long time, thus the error signals can be easily passed through $\{1, \ldots, T\}$. However, when the number of stacked layers is large, the feed-forward network will suffer from the gradient vanishing/exploding problems, which make the gradients hard to pass through $\{1, \ldots, L\}$.
The core idea of LSTM is to use an identity function to make the constant error carousel. An identity mapping has also been used to train very deep convolutional neural networks with improved performance. All these inspired us to use an identity function for the skip connections. Moreover, the gates of LSTM are essential parts to avoid weight update conflicts, which are also invoked by skip connections. Following highway gating, we use a gate multiplied with the identity mapping to avoid the conflicts. Skip connections are cross-layer connections, which means that the output of layer $l-2$ is not only connected to layer $l-1$, but also connected to layer $l$. For stacked LSTMs, $h^{l-2}_t$ can be connected to the gates, the internal states, and the cell outputs in layer $l$'s LSTM blocks. We formalize these below:
Skip connections to the gates. We can connect $h^{l-2}_t$ to the gates through an identity mapping:

$$\begin{pmatrix} i^l_t \\ f^l_t \\ o^l_t \\ s^l_t \end{pmatrix} = \begin{pmatrix} \mathrm{sigm} \\ \mathrm{sigm} \\ \mathrm{sigm} \\ \tanh \end{pmatrix} \begin{pmatrix} W^l & I^l \end{pmatrix} \begin{pmatrix} h^{l-1}_t \\ h^l_{t-1} \\ h^{l-2}_t \end{pmatrix} \tag{10}$$

where $I^l \in \mathbb{R}^{4n \times n}$ is the identity mapping.
Skip connections to the internal states. Another kind of skip connection is to connect $h^{l-2}_t$ to the cell's internal state $c^l_t$:

$$c^l_t = f^l_t \odot c^l_{t-1} + i^l_t \odot s^l_t + h^{l-2}_t \tag{11}$$
$$h^l_t = o^l_t \odot \tanh(c^l_t) \tag{12}$$
Skip connections to the cell outputs. We can also connect $h^{l-2}_t$ to the cell outputs:

$$c^l_t = f^l_t \odot c^l_{t-1} + i^l_t \odot s^l_t \tag{13}$$
$$h^l_t = o^l_t \odot \tanh(c^l_t) + h^{l-2}_t \tag{14}$$
Skip connections using gates. Consider the case of skip connections to the cell outputs. The cell outputs grow linearly with the network depth, which makes the derivative of $h^l_t$ vanish and convergence hard. Inspired by the introduction of LSTM gates, we add a gate to control the skip connections by retrieving or blocking them:

$$\begin{pmatrix} i^l_t \\ f^l_t \\ o^l_t \\ g^l_t \\ s^l_t \end{pmatrix} = \begin{pmatrix} \mathrm{sigm} \\ \mathrm{sigm} \\ \mathrm{sigm} \\ \mathrm{sigm} \\ \tanh \end{pmatrix} W^l \begin{pmatrix} h^{l-1}_t \\ h^l_{t-1} \end{pmatrix}, \quad c^l_t = f^l_t \odot c^l_{t-1} + i^l_t \odot s^l_t, \quad h^l_t = o^l_t \odot \tanh(c^l_t) + g^l_t \odot h^{l-2}_t \tag{15}$$

When the gate $g^l_t$ equals 0, no skipped output can be passed through the skip connections, which is equivalent to traditional stacked LSTMs. Otherwise, it behaves like a feed-forward LSTM using gated identity connections. Here we omit the case of adding gates to the skip connections to the internal state, which is similar to the above case.
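The following sketch shows one step of the gated identity skip connection of Eq. (15) for a single layer; the class name, weight layout, and sizes are our own choices rather than the authors' code.

```python
import torch
import torch.nn as nn

class GatedSkipLSTMCell(nn.Module):
    """LSTM cell of layer l whose output receives a gated identity skip
    connection from layer l-2 (Eq. 15)."""
    def __init__(self, n):
        super().__init__()
        self.W = nn.Linear(2 * n, 5 * n)   # maps [h^{l-1}_t ; h^l_{t-1}] to i, f, o, g, s

    def forward(self, h_below, h_prev, c_prev, h_skip):
        i, f, o, g, s = self.W(torch.cat([h_below, h_prev], dim=-1)).chunk(5, dim=-1)
        i, f, o, g = map(torch.sigmoid, (i, f, o, g))
        s = torch.tanh(s)
        c = f * c_prev + i * s                    # Eq. (13)
        h = o * torch.tanh(c) + g * h_skip        # gated identity skip, Eq. (15)
        return h, c

# When the gate g is 0 this reduces to a plain stacked LSTM;
# dropping "g *" recovers the ungated skip connection of Eq. (14).
```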
Skip connections in bidirectional LSTMs. Using skip connections in a bidirectional LSTM is similar to the unidirectional case, with bidirectional processing:

$$\overrightarrow{c}^{\,l}_t = \overrightarrow{f} \odot \overrightarrow{c}^{\,l}_{t-1} + \overrightarrow{i} \odot \overrightarrow{s}^{\,l}_t, \quad \overrightarrow{h}^{\,l}_t = \overrightarrow{o} \odot \tanh(\overrightarrow{c}^{\,l}_t) + \overrightarrow{g} \odot \overrightarrow{h}^{\,l-2}_t$$
$$\overleftarrow{c}^{\,l}_t = \overleftarrow{f} \odot \overleftarrow{c}^{\,l}_{t-1} + \overleftarrow{i} \odot \overleftarrow{s}^{\,l}_t, \quad \overleftarrow{h}^{\,l}_t = \overleftarrow{o} \odot \tanh(\overleftarrow{c}^{\,l}_t) + \overleftarrow{g} \odot \overleftarrow{h}^{\,l-2}_t \tag{16}$$
Neural Architecture for Sequential Tagging
Sequential tagging can be formulated as $P(t|w; \theta)$, where $w = [w_1, \ldots, w_T]$ indicates the $T$ words in a sentence, and $t = [t_1, \ldots, t_T]$ indicates the corresponding $T$ tags. In this section we introduce a neural architecture for $P(\cdot)$, which includes an input layer, stacked hidden layers and an output layer. Since the stacked hidden layers have already been introduced, we only introduce the input and the output layer here.
Network Inputs
Network inputs are the representation of each token in a sequence. There are many kinds of token representations, such as using a single word embedding, using a local window approach, or a combination of word and character-level representation. Here our inputs contain the concatenation of word representations, character representations, and capitalization representations.
Word representations. All words in the vocabulary share a common look-up table, which is initialized with random values or pre-trained embeddings. Each word in a sentence can be mapped to an embedding vector w_t. The whole sentence is then represented by a matrix with column vectors [w_1, w_2, . . . , w_T]. We use a context window of size d surrounding a word w_t to capture its context information. Following Wu et al. (2016), we add logistic gates to each token in the context window. The word representation is computed as w_t = [r_{t−⌊d/2⌋} ⊙ w_{t−⌊d/2⌋}; . . . ; r_{t+⌊d/2⌋} ⊙ w_{t+⌊d/2⌋}], where r_t := [r_{t−⌊d/2⌋}, . . . , r_{t+⌊d/2⌋}] ∈ R^d is a logistic gate that filters out unnecessary contexts and w_{t−⌊d/2⌋}, . . . , w_{t+⌊d/2⌋} are the word embeddings in the local window.
Character representations. Prefix and suffix information about words are important features in sequential tagging. Inspired by Fonseca et al. (2015), who use character prefixes and suffixes of length 1 to 5 for part-of-speech tagging, we concatenate character embeddings within a word to get the character-level representation. Concretely, given a word w consisting of a sequence of characters [c_1, c_2, . . . , c_{l_w}], where l_w is the length of the word and L(·) is the look-up table for characters, we concatenate the leftmost 5 character embeddings L(c_1), . . . , L(c_5) with the rightmost 5 character embeddings L(c_{l_w−4}), . . . , L(c_{l_w}). When a word has fewer than five characters, we pad the remaining positions with the same special symbol.
Capitalization representations. We lowercase the words to decrease the size of the word vocabulary and reduce sparsity, but we add an extra capitalization embedding to store the capitalization feature, which represents whether or not a word is capitalized.
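As an illustration of how the three input components could be assembled, here is a small NumPy sketch. The embedding tables, gate values, padding symbol, and helper names are hypothetical placeholders rather than the exact configuration used in the experiments.

import numpy as np

def word_window_rep(word_embs, t, d, gates):
    """Gated local window of size d around position t (word_embs: T x dim array)."""
    T, _ = word_embs.shape
    pieces = []
    for k, offset in enumerate(range(-(d // 2), d // 2 + 1)):
        j = min(max(t + offset, 0), T - 1)        # clamp at sentence boundaries
        pieces.append(gates[k] * word_embs[j])    # logistic gate r filters the context
    return np.concatenate(pieces)

def char_affix_rep(word, char_table, pad="<PAD>"):
    """Concatenate embeddings of the leftmost 5 and rightmost 5 characters, padding short words."""
    chars = list(word)
    left = (chars + [pad] * 5)[:5]
    right = ([pad] * 5 + chars)[-5:]
    return np.concatenate([char_table.get(c, char_table[pad]) for c in left + right])

def token_representation(word, word_embs, t, d, gates, char_table, cap_embs):
    cap = cap_embs[1] if word[0].isupper() else cap_embs[0]   # capitalization feature
    return np.concatenate([
        word_window_rep(word_embs, t, d, gates),
        char_affix_rep(word.lower(), char_table),
        cap,
    ])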
Network Outputs
For sequential tagging, we use a softmax activation function g(·) in the output layer:
y_t = g(W_{hy} [\overrightarrow{h}_t; \overleftarrow{h}_t]) \tag{17}

where y_t is a probability distribution over all possible tags, and y_t^{(k)} = \frac{\exp(h_k)}{\sum_{k'} \exp(h_{k'})} is the k-th dimension of y_t, which corresponds to the k-th tag in the tag set. W_{hy} is the hidden-to-output weight matrix.
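A minimal sketch of the output layer of Eq. (17), assuming the forward and backward hidden states of the top layer are already available; the names here are illustrative.

import numpy as np

def tag_distribution(h_fwd, h_bwd, W_hy, b_y):
    """Softmax over the tag set from concatenated bidirectional hidden states."""
    logits = W_hy @ np.concatenate([h_fwd, h_bwd]) + b_y
    logits -= logits.max()              # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()          # y_t of Eq. (17)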
Experiments
Combinatory Category Grammar Supertagging
Combinatory Category Grammar (CCG) supertagging is a sequential tagging problem in natural language processing. The task is to assign supertags to each word in a sentence. In CCG the supertags stand for the lexical categories, which are composed of basic categories such as N, NP and PP, and complex categories, which are combinations of the basic categories according to a set of rules. For detailed explanations of CCG, see Steedman (2000) and Steedman and Baldridge (2011).
The training set for this task contains only 39,604 sentences, which is small for training a deep model and may cause over-parametrization. We nevertheless choose it because several authors have already shown that bidirectional recurrent networks fit the task well (Lewis et al., 2016; Vaswani et al., 2016).
Dataset and Pre-processing
Our experiments are performed on CCGBank (Hockenmaier and Steedman, 2007), which is a translation of the Penn Treebank (Marcus et al., 1993) into CCG with a coverage of 99.4%. We follow the standard splits, using sections 02-21 for training, section 00 for development, and section 23 for testing. We use a full category set containing 1285 tags. All digits are mapped to the same digit '9', and all words are lowercased.
Network Configuration
Initialization. There are two types of weights in our experiments: recurrent and non-recurrent weights. For non-recurrent weights, we initialize word embeddings with the pre-trained 200-dimensional GloVe vectors (Pennington et al., 2014). Other weights are initialized from the Gaussian distribution \mathcal{N}(0, \frac{1}{\sqrt{\text{fan-in}}}) scaled by a factor of 0.1, where fan-in is the number of units in the input layer. For recurrent weight matrices, following Saxe et al. (2013) we initialize with random orthogonal matrices obtained through SVD to avoid unstable gradients. Orthogonal initialization of the recurrent weights is important in our experiments, giving about a 2% relative performance improvement over other methods such as Xavier initialization (Glorot and Bengio, 2010).

Table 1: 1-best supertagging accuracy on CCGbank. "skip output" refers to the skip connections to the cell output, "gating" refers to adding a gate to the identity function, "no char" refers to models that do not use character-level information, "no dropout" refers to models that do not use dropout.

Model                                           Dev    Test
Clark and Curran (2007)                         91.5   92.0
Lewis et al. (2014)                             91.3   91.6
Lewis et al. (2016)                             94.1   94.3
Xu et al. (2015)                                93.1   93.0
Xu et al. (2016)                                93.49  93.52
Vaswani et al. (2016)                           94.24  94.5
7-layers + skip output + gating                 94.51  94.67
7-layers + skip output + gating (no char)       94.33  94.45
7-layers + skip output + gating (no dropout)    94.06  94.0
9-layers + skip output + gating                 94.55  94.69
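A sketch of the two initialization schemes described above (scaled Gaussian for non-recurrent weights, SVD-based orthogonal matrices for recurrent weights); this is our illustration, not the original Theano code, and the fan-in convention is an assumption.

import numpy as np

def init_nonrecurrent(shape, scale=0.1, rng=None):
    # Gaussian N(0, 1/sqrt(fan-in)) scaled by 0.1; we assume the first dimension is the fan-in.
    rng = rng or np.random.default_rng(0)
    fan_in = shape[0]
    return scale * rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=shape)

def init_recurrent_orthogonal(n, rng=None):
    # Random orthogonal matrix obtained through SVD (Saxe et al., 2013).
    rng = rng or np.random.default_rng(0)
    u, _, _ = np.linalg.svd(rng.normal(size=(n, n)))
    return u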
Hyperparameters. For the word representations, we use a small window size of 3 for the convolutional layer. The dimension of the word representation after the convolutional operation is 600. The sizes of the character embeddings and capitalization embeddings are set to 5. The number of cells of the stacked bidirectional LSTM is set to 512. We also tried 400 or 600 cells and found that this number made little difference to performance. All stacked hidden layers have the same number of cells. The output layer has 1286 neurons, which equals the number of tags in the training set plus a RARE symbol.
Training. We train the networks with back-propagation, using stochastic gradient descent (SGD) with a learning rate of 0.02 for all layers. We also tried other optimization methods, such as momentum (Plaut et al., 1986), Adadelta (Zeiler, 2012), and Adam (Kingma and Ba, 2014), but none of them performed as well as SGD. Gradient clipping is not used. We use on-line learning in our experiments, which means the parameters are updated after every training sequence, one sequence at a time. We trained the 7-layer network for roughly 2 to 3 days on one NVIDIA TITAN X GPU using Theano 1 (Team et al., 2016).
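The training procedure amounts to plain per-sequence SGD; a schematic loop is shown below, assuming a hypothetical model object that exposes its loss, gradients, and parameters.

def train_online_sgd(model, train_sequences, epochs=10, lr=0.02):
    # On-line learning: parameters are updated after every single training sequence.
    # 'model' is a hypothetical object exposing loss_and_grads() and a params dict.
    for _ in range(epochs):  # epoch count is arbitrary here; the paper reports 2-3 days of training
        for words, tags in train_sequences:
            loss, grads = model.loss_and_grads(words, tags)   # backprop through the stacked BiLSTM
            for name, grad in grads.items():
                model.params[name] -= lr * grad               # plain SGD, same learning rate for all layers
    return model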
Regularization. Dropout (Srivastava et al., 2014) is the only regularizer in our model to avoid overfitting. Other regularization methods such as weight decay and batch normalization did not work in our experiments. We add a binary dropout mask to the local context windows on the embedding layer, with a drop rate p of 0.25. We also apply dropout to the output of the first hidden layer and the last hidden layer, with a 0.5 drop rate. At test time, weights are scaled with a factor of 1 − p.

Results

Table 1 shows comparisons with other models for supertagging. The comparisons do not include any externally labeled data or POS labels. We use stacked bidirectional LSTMs with gated skip connections for the comparisons, and report the highest 1-best supertagging accuracy on the development set for final testing. Our model achieves state-of-the-art results compared to the existing systems. The character-level information (+3% relative accuracy) and dropout (+8% relative accuracy) are necessary to reach this performance.
Experiments on Skip Connections
We experiment with a 7-layer model on CCGbank to compare different kinds of skip connections introduced in Section 4. Our analysis mainly focuses on the identity function and the gating mechanism. The comparisons (Table 2) are summarized as follows:
No skip connections. When the number of stacked layers is large, performance degrades without skip connections. The accuracy of a 7-layer stacked model without skip connections is 93.94% (Table 2), which is lower than with skip connections.
Various kinds of skip connections. We experiment with the gated identity connections between internal states introduced in Zhang et al. (2016), but the network does not perform well (Table 2, 93.14%). We also implement the method proposed in Zilly et al. (2016), in which we use a single bidirectional RHN layer with a recurrent depth of 3, with a slight modification 2. Skip connections to the cell outputs with the identity function and multiplicative gating achieve the highest accuracy (Table 2, 94.51%) on the development set. We also observe that skip connections to the internal states without a gate achieve slightly better performance (Table 2, 94.33%) than with a gate (94.24%) on the development set. Here we recommend setting the forget gate bias to 0 to get better development accuracy.
Identity mapping. We apply the sigmoid function to the previous outputs to break the identity link, replacing g_t ⊙ h^{l−1}_t in Eq. (15) with g_t ⊙ σ(h^{l−1}_t), where σ(x) = 1/(1 + e^{−x}). The result with the sigmoid mapping is 94.02% (Table 2), which is worse than with the identity function. We can infer that the identity function is more suitable than squashing functions such as sigmoid or tanh for transmitting information.
Exclusive gating. Following the gating mechanism adopted in highway networks, we consider adding a gate g_t to give flexible control over the skip connections. Our gating function is g^l_t = \sigma(W^l_g h^l_{t-1} + U^l_g h^{l-2}_t). Gated identity connections are essential to achieving state-of-the-art results on CCGbank.

Experiments on Number of Layers

Table 3 compares the effect of depth in the stacked models. We observe that performance improves as the number of layers increases, but it is hurt when the number of layers exceeds 9. In our experiments, we found that between 7 and 9 stacked layers is the best choice when using skip connections. Notice that we do not use layer-wise pretraining (Bengio et al., 2007; Simonyan and Zisserman, 2014), which is an important technique for training deep networks. Further improvements might be obtained with this method to build a deeper network with improved performance.

2 Our original implementation of Zilly et al. (2016) with a recurrent depth of 3 fails to converge. The reason might be the explosion of s^L_t under addition. To avoid this, we replace s^L_t with o_t * tanh(s^L_t) in the last recurrent step.

Table 3: Accuracy on CCGbank using gated identity connections to cell outputs, with different numbers of stacked layers.
Part-of-speech Tagging
Part-of-speech tagging is another sequential tagging task, which is to assign POS tags to each word in a sentence. It is very similar to the supertagging task; therefore, the two tasks can be solved with a unified architecture. For POS tagging, we use the same network configuration as for supertagging, except for the word vocabulary size and the tag set size. We conduct experiments on the Wall Street Journal portion of the Penn Treebank, adopting the standard splits (sections 0-18 for training, sections 19-21 for validation, and sections 22-24 for testing).
Table 4: Accuracy for POS tagging on WSJ.

Model                              Test
Søgaard (2011)                     97.5
Ling et al. (2015)                 97.36
Wang et al. (2015)                 97.78
Vaswani et al. (2016)              97.4
7-layers + skip output + gating    97.45
9-layers + skip output + gating    97.45
Although the POS tagging result presented in Table 4 is slightly below the state of the art, we neither tuned any parameters nor changed the network architecture; we simply used the configuration that achieved the best development accuracy on the supertagging task. This demonstrates the generality of the model and avoids the heavy work of model redesign.
Conclusions
This paper investigates various kinds of skip connections in stacked bidirectional LSTM models. We present a deep stacked network (7 or 9 layers) that can be easily trained and achieves improved accuracy on CCG supertagging and POS tagging. Our experiments show that skip connections to the cell outputs with the gated identity function perform best. Our explorations can easily be applied to other sequential processing problems that can be modelled with RNN architectures.
Table 2: Accuracy on CCGbank using 7-layer stacked bidirectional LSTMs, with different types of skip connections. b_f is the bias of the forget gate.

Case                         Variant                               Dev    Test
H-LSTM, Zhang et al. (2016)  -                                     93.14  93.52
RHN, Zilly et al. (2016)     L = 3, with output gates              94.28  94.24
no skip connections          -                                     93.94  94.26
to the gates, Eq. (10)       -                                     93.9   94.22
to the internals             no gate, Eq. (11)                     94.33  94.63
to the internals             with gate                             94.24  94.52
to the outputs               no gate, Eq. (14)                     93.89  93.98
to the outputs               with gate, b_f = 5, Eq. (15)          94.23  94.81
to the outputs               with gate, b_f = 0, Eq. (15)          94.51  94.67
to the outputs               sigmoid mapping: g_t σ(h^{l-1}_t)     94.02  94.18
c^l_t = f^l_t \odot c^l_{t-1} + i^l_t \odot s^l_t, \qquad h^l_t = o^l_t \odot \tanh(c^l_t) \tag{9}

During the forward pass, the LSTM needs to calculate c^l_t and h^l_t, which are the cell's internal state and the cell output state, respectively. To get c^l_t, s^l_t is computed to store the current input. This result is then multiplied by the input gate i^l_t, which decides when to keep or override information in the memory cell c^l_t.
1 http://deeplearning.net/software/theano/
Acknowledgements

The research work has been funded by the Natural Science Foundation of China under Grant No. 61333018 and supported by the Strategic Priority Research Program of the CAS under Grant No. XDB02070007. We thank the anonymous reviewers for their useful comments that greatly improved the manuscript.
Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, et al. 2007. Greedy layer-wise training of deep networks. Advances in Neural Information Processing Systems, 19:153.
Stephen Clark and James R. Curran. 2007. Wide-coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493-552.
Salah El Hihi and Yoshua Bengio. 1995. Hierarchical recurrent neural networks for long-term dependencies. In NIPS, volume 400, page 409. Citeseer.
Erick R. Fonseca, João Luís G. Rosa, and Sandra Maria Aluísio. 2015. Evaluating word embeddings and a revised corpus for part-of-speech tagging in Portuguese. Journal of the Brazilian Computer Society, 21(1):1-14.
Felix A. Gers, Jürgen Schmidhuber, and Fred Cummins. 2000. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451-2471.
Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pages 249-256.
Alex Graves and Jürgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602-610.
Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385.
Michiel Hermans and Benjamin Schrauwen. 2013. Training and analysing deep recurrent neural networks. In Advances in Neural Information Processing Systems, pages 190-198.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. LSTM can solve hard long time lag problems. Advances in Neural Information Processing Systems, pages 473-479.
Julia Hockenmaier and Mark Steedman. 2007. CCGbank: a corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Computational Linguistics, 33(3):355-396.
Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. 2015. Grid long short-term memory. arXiv preprint arXiv:1507.01526.
Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360.
Mike Lewis and Mark Steedman. 2014. Improved CCG parsing with semi-supervised supertagging. Transactions of the Association for Computational Linguistics, 2:327-338.
Mike Lewis, Kenton Lee, and Luke Zettlemoyer. 2016. LSTM CCG parsing. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics.
Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W. Black, and Isabel Trancoso. 2015. Finding function in form: Compositional character models for open vocabulary word representation. arXiv preprint arXiv:1508.02096.
Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP, volume 14, pages 1532-1543.
David C. Plaut et al. 1986. Experiments on learning by back propagation.
Tapani Raiko, Harri Valpola, and Yann LeCun. 2012. Deep learning made easier by linear transformations in perceptrons. In AISTATS, volume 22, pages 924-932.
Andrew M. Saxe, James L. McClelland, and Surya Ganguli. 2013. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120.
Jürgen Schmidhuber. 1992. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242.
Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Anders Søgaard. 2011. Semisupervised condensed nearest neighbor for part-of-speech tagging. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, pages 48-52. Association for Computational Linguistics.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.
Rupesh K. Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015a. Training very deep networks. In Advances in Neural Information Processing Systems, pages 2377-2385.
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. 2015b. Highway networks. arXiv preprint arXiv:1505.00387.
Mark Steedman and Jason Baldridge. 2011. Combinatory categorial grammar. In Non-Transformational Syntax: Formal and Explicit Models of Grammar. Wiley-Blackwell.
Mark Steedman. 2000. The Syntactic Process, volume 24. MIT Press.
Theano Development Team, Rami Alrfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, and Anatoly Belikov. 2016. Theano: A Python framework for fast computation of mathematical expressions.
Ashish Vaswani, Yonatan Bisk, Kenji Sagae, and Ryan Musa. 2016. Supertagging with LSTMs. In Proceedings of the Human Language Technology Conference of the NAACL.
Peilu Wang, Yao Qian, Frank K. Soong, Lei He, and Hai Zhao. 2015. Part-of-speech tagging with bidirectional long short-term memory recurrent neural network. arXiv preprint arXiv:1510.06168.
Huijia Wu, Jiajun Zhang, and Chengqing Zong. 2016. A dynamic window neural network for CCG supertagging. arXiv preprint arXiv:1610.02749.
Wenduan Xu, Michael Auli, and Stephen Clark. 2015. CCG supertagging with a recurrent neural network. Volume 2: Short Papers, page 250.
Wenduan Xu, Michael Auli, and Stephen Clark. 2016. Expected F-measure training for shift-reduce parsing with recurrent neural networks. In Proceedings of NAACL-HLT, pages 210-220.
Kaisheng Yao, Trevor Cohn, Katerina Vylomova, Kevin Duh, and Chris Dyer. 2015. Depth-gated LSTM. In Presented at Jelinek Summer Workshop on August, volume 14, page 1.
Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701.
Yu Zhang, Guoguo Chen, Dong Yu, Kaisheng Yao, Sanjeev Khudanpur, and James Glass. 2016. Highway long short-term memory RNNs for distant speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5755-5759. IEEE.
Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. 2016. Recurrent highway networks. arXiv preprint arXiv:1607.03474.
| [] |
[
"Massively Multilingual Adversarial Speech Recognition",
"Massively Multilingual Adversarial Speech Recognition"
] | [
"Oliver Adams oadams1@jhu.edu \nDepartment of Computer Science\nJohns Hopkins University\nBaltimoreMDUSA\n",
"Matthew Wiesner wiesner@jhu.edu \nDepartment of Computer Science\nJohns Hopkins University\nBaltimoreMDUSA\n",
"Shinji Watanabe shinjiw@jhu.edu \nDepartment of Computer Science\nJohns Hopkins University\nBaltimoreMDUSA\n",
"David Yarowsky yarowsky@jhu.edu \nDepartment of Computer Science\nJohns Hopkins University\nBaltimoreMDUSA\n"
] | [
"Department of Computer Science\nJohns Hopkins University\nBaltimoreMDUSA",
"Department of Computer Science\nJohns Hopkins University\nBaltimoreMDUSA",
"Department of Computer Science\nJohns Hopkins University\nBaltimoreMDUSA",
"Department of Computer Science\nJohns Hopkins University\nBaltimoreMDUSA"
] | [] | We report on adaptation of multilingual endto-end speech recognition models trained on as many as 100 languages. Our findings shed light on the relative importance of similarity between the target and pretraining languages along the dimensions of phonetics, phonology, language family, geographical location, and orthography. In this context, experiments demonstrate the effectiveness of two additional pretraining objectives in encouraging language-independent encoder representations: a context-independent phoneme objective paired with a language-adversarial classification objective.2 festvox.org/cmu_wilderness/index.html | 10.18653/v1/n19-1009 | [
"https://arxiv.org/pdf/1904.02210v1.pdf"
] | 102,353,552 | 1904.02210 | 1c0754e9acf84315b5f1b82f8984e59264e1d810 |
Massively Multilingual Adversarial Speech Recognition
Oliver Adams oadams1@jhu.edu
Department of Computer Science
Johns Hopkins University
BaltimoreMDUSA
Matthew Wiesner wiesner@jhu.edu
Department of Computer Science
Johns Hopkins University
BaltimoreMDUSA
Shinji Watanabe shinjiw@jhu.edu
Department of Computer Science
Johns Hopkins University
BaltimoreMDUSA
David Yarowsky yarowsky@jhu.edu
Department of Computer Science
Johns Hopkins University
BaltimoreMDUSA
Massively Multilingual Adversarial Speech Recognition
We report on adaptation of multilingual endto-end speech recognition models trained on as many as 100 languages. Our findings shed light on the relative importance of similarity between the target and pretraining languages along the dimensions of phonetics, phonology, language family, geographical location, and orthography. In this context, experiments demonstrate the effectiveness of two additional pretraining objectives in encouraging language-independent encoder representations: a context-independent phoneme objective paired with a language-adversarial classification objective.2 festvox.org/cmu_wilderness/index.html
Introduction
The main difficulty in creating automatic speech recognition (ASR) systems for a large number of the world's 7,000 languages is a lack of training data. Such data comes in the form of speech paired with transcriptions, a pronunciation lexicon, and text for language model training. A common technique in data-constrained settings is to learn language-independent representations of speech via multilingual training. Popular approaches include the use of multilingual bottleneck features (Vesely et al., 2012) as well as multilingual model training before fine-tuning to a target language (Scanzio et al., 2008;Vu et al., 2012).
Prior work in multilingual and cross-lingual speech recognition has been restricted to a small handful of the world's most-spoken languages, relying on multilingual corpora such as Global-Phone (Schultz, 2002), the IARPA Babel corpora (Gales et al., 2014), or the VoxForge 1 corpora. Most work typically only reports on models trained on a subset of these languages.
In this paper we explore pretraining multilingual ASR models using speech from as many as 1 voxforge.org 100 languages from the CMU Wilderness Multilingual Speech Dataset (Black, 2019). 2 To the best of our knowledge, this is the greatest number of languages that has been used in multilingual ASR model training to date. We perform experiments to guide the choice of languages used when pretraining the model and assess the relative importance of similarity between the pretraining languages and target language in terms of geographic location, phonology, phonetic inventory, language family and orthography.
We examine these variables in the context of two experimental setups: one where models are adapted to target language and target speakers, and one where models are adapted to target language but non-target speakers. The first task is relevant to language documentation contexts, which often involves transcribing speech of specific speakers for which there already exists some transcribed speech as training data (Michaud et al., 2018). The second case is relevant to incident response as modelled by LORELEI (Strassel and Tracey, 2016), where there may only be a single target-language consultant available for which transcribed speech can be elicited, but the goal is to have an ASR model that generalizes to multiple speakers.
Multilingual ASR training on such a scale presents challenges because of this language diversity.
In order to guide the model to learn language-independent representations that are more amenable to adaptation, we experiment with two auxiliary training tasks. The first is context-independent phoneme sequence prediction to help bridge orthographic inconsistencies between languages. The second is a domainadversarial classification objective (Ganin et al., 2016) over languages to encourage invariance of the model with respect to language-specific phenomena. The hierarchical combination of grapheme and phoneme objectives has only been used in monolingual end-to-end frameworks (Krishna et al., 2018;Rao and Sak, 2017). Languageadversarial training in ASR (Yi et al., 2018) has not been done at this scale before, nor in an endto-end framework.
Our experiments are designed to answer the following questions:
1. Is there benefit in scaling multilingual model training to a large number of languages?
2. In what circumstances, if any, does the addition of a phoneme and/or languageadversarial objective improve multilingual models?
3. How should we choose languages with which to pretrain a multilingual model?
4. Do the answers to the above questions change when adapting to target versus non-target speakers in the target language?
We find that using the auxiliary objectives in pretraining facilitates model transfer to unseen languages, especially when the pretraining languages are very dissimilar (Section 6). When the target speakers are seen in adaptation (Section 7), similarity of the pretraining languages and the target language is more important than quantity of pretraining languages. Choosing as pretraining languages geographically proximal languages tends to help more than phonetically and phonologically similar but otherwise distant languages. However, when adapting to a handful of non-target speakers of the target language (Section 8), the domain mismatch caused by the unseen speaker, language, or recording environment degrades performance. Exposing the model to as many pretraining languages as possible becomes vital to minimize this mismatch. Results on this task demonstrate that a massively multilingual seed model substantially outperforms other seed models trained on languages similar to the target. We will provide an ESPnet recipe to train and test our models.
Related Work
This paper builds on work on multilingual ASR, end-to-end ASR, and adversarial learning.
Multilingual transfer in ASR often relies on using bottle-neck features (Vesely et al., 2012;Vu et al., 2012;Karafiát et al., 2018) and adapting an acoustic model trained on one language to effectively recognize the sounds of other languages (Schultz and Waibel, 2001;Le and Besacier, 2005;Stolcke et al., 2006;Tóth et al., 2008;Plahl et al., 2011;Thomas et al., 2012;Imseng et al., 2014;Do et al., 2014;Heigold et al., 2013;Scharenborg et al., 2017). However, while most work uses less than 10 languages for model training, we include up to 100 languages in training.
End-to-end ASR has recently become popular, with approaches such as attention-based encoderdecoder models (Chorowski et al., 2015;Chan et al., 2015), the connectionist temporal classification (CTC) objective of Graves et al. (2006Graves et al. ( , 2013, or a combination of both (Kim et al., 2016;Hori et al., 2017). These approaches have also been deployed in multilingual settings (Toshniwal et al., 2017;Chiu et al., 2018;Müller et al., 2017;Dalmia et al., 2018;Watanabe et al., 2017a). Our baseline approach to multilingual knowledge transfer is most similar to Inaguma et al. (2018), and involves training a hybrid CTC-attention seed model. Hierarchical and multi-task approaches including combining grapheme and phoneme prediction in monolingual contexts (Rao and Sak, 2017;Krishna et al., 2018) at different levels of the network, or using sub-word units of varying granularity , have been shown to improve ASR performance. In this paper we extend the approach of hierarchical placement of additional objectives in order to enforce language independent, transferable models.
Domain-adversarial training is one such method for encouraging the model to learn language independent representations. A key contribution of this paper is the use of a domainadversarial classification objective (Ganin et al., 2016) over many languages in order to encourage the model to learn representations that are invariant to language. Domain-adversarial training incorporates an auxiliary domain classification task, but negates gradients for encoder weights before the parameter update in order to guide the encoder to produce hidden representations that fool the classifier: i.e. they minimize information about the language while still facilitating the primary task of speech recognition.
Domain-adversarial training has been used in speech recognition to learn features invariant to noise conditions (Shinohara, 2016), accents (Sun, 2018), and sex (Tripathi et al., 2018). Most closely related to our work is that of Yi et al. (2018), who use a language-adversarial objective when preparing multilingual bottleneck features from four languages for a hidden Markov model (HMM) ASR pipeline. In contrast, our work uses an adversarial objective across many languages, pairing it with a context-independent phoneme objective in an endto-end framework.
Data
We scraped the data that forms the CMU Wilderness dataset, using a freely available script. 3 This dataset consists of dramatized readings of the Bible in hundreds of languages. Each reading is ascribed a rating based on alignment quality which fits into one of these classes: very good, good, okay, and not okay.
The script used to preprocess the data uses a universal pronunciation module in Festival (Taylor et al., 1998) 4 to produce pronunciation lexicons using an approach based on that of UniTran (Yoon et al., 2007), which we use to create phonemic transcriptions.
Characteristics of the Speech
The dataset consists of readings of the Bible, with readings typically of just a few speakers, mostly male. These are often dramatized, with sound effects and background music. For many purposes this could be considered a limitation of the data. Although the characteristics of the speech are unique, it allows us to investigate multilingual models over many languages without the confounds of an overly noisy environment. It is not unreasonable to expect our findings to generalize to other speech recognition domains.
Evaluation Languages
While the dataset includes only a single reading of the Bible for most languages, there are a number with two or more. We evaluate on languages for which we can find two or more readings. This is so that we can compare adaptation to a target language but not the speakers of the target reading (we refer to this task as language adaptation, as explored in Section 8) with adaptation to the target language as well as the target reading (we refer to this task as reading adaptation). We additionally restricted the evaluation languages to those that have at least one good or very good reading in terms of alignment quality. Table 1 presents the evaluation languages and readings grouped by family or geographic location, along with their durations.
Auxiliary Training Objectives
In addition to scaling ASR training to 100 languages, a key contribution of our work is the use of a context-independent phoneme objective paired with a language-adversarial classification objective in a end-to-end grapheme-based neural network, as illustrated in Figure 1.
Baseline Model
Our experiments are conducted within the framework of a hybrid CTC-attention end-to-end neural model using ESPnet (Watanabe et al., 2017b), which uses an encoder-decoder architecture implemented in PyTorch (Paszke et al., 2017). The encoder we use consists of VGG-like convolution layers (Simonyan and Zisserman, 2014;Sercu et al., 2016) followed by a multilayer bidirectional long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997). The decoder uses location-based attention (Chorowski et al., 2015) and an LSTM. In addition to the attention, the decoder also incorporates CTC probabilities over graphemes to encourage monotonicity in decoding.
Phoneme Objective
The end-to-end neural model performs direct grapheme prediction without recourse to a pronunciation lexicon as traditional hybrid HMM-DNN models do. Since different orthographies may be mutually disjoint or only weakly related to the phonetic content of the input speech, we use a context-independent phoneme CTC objective to encourage learning of representations independent of such orthographic idiosyncrasies. We performed limited preliminary experiments to determine how best to use the phoneme objective, which corroborated recent work in hierarchical training objectives that supports inserting the phoneme objective in the layers below the final layer (Krishna et al., 2018). We also found that using the phoneme objective during adaptation was harmful and therefore in all reported experiments we use it only during multilingual pretraining.
Language-Adversarial Pretraining
For language-adversarial training we used a loglinear classifier over all languages seen in pretraining. An utterance-level mean of the penultimate encoder layer states is fed into the classifier. For each batch in training we update the network using the interpolated grapheme and phoneme objectives before a separate update step using the adversarial objective.
We follow the learning rate scheduling of Ganin et al. (2016), where the weight of the adversarial objective relative to the speech recognition tasks follows \lambda(p) = \frac{2}{1+\exp(-10p)} - 1 over the course of training, where p ∈ [0, 1] is a measure of training progress. We drop the adversarial objective during target language adaptation.
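A minimal PyTorch-style sketch of the language-adversarial branch: a gradient-reversal function scaled by λ(p), followed by a log-linear language classifier over the utterance-level mean of the penultimate encoder layer. This is our illustration of the idea, not the ESPnet implementation, and all names are ours.

import math
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the encoder.
        return -ctx.lam * grad_output, None

def adversarial_weight(progress):
    # lambda(p) = 2 / (1 + exp(-10 p)) - 1, with p in [0, 1] measuring training progress.
    return 2.0 / (1.0 + math.exp(-10.0 * progress)) - 1.0

def language_adv_loss(enc_states, lang_ids, classifier, progress):
    # enc_states: (batch, time, dim) penultimate encoder states; classifier: torch.nn.Linear(dim, n_langs).
    utt_mean = enc_states.mean(dim=1)                       # utterance-level mean
    lam = adversarial_weight(progress)
    logits = classifier(GradReverse.apply(utt_mean, lam))   # log-linear language classifier
    return F.cross_entropy(logits, lang_ids)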
Experimental Setup
Language Versus Reading Adaptation
Figure 1: The end-to-end architecture used during pretraining. x is the input speech features, y_1, y_2, . . . , y_n is a character sequence the model is trained to output (e.g. "knife"), φ_1, φ_2, . . . , φ_m is a phoneme sequence the model is trained to output (e.g. /naIf/), and L_x is the language identity of the input speech x.

We chose as target adaptation languages those languages for which we have multiple readings of the Bible. This allows us to assess adaptation of the pretrained multilingual model in two scenarios: language adaptation and reading adaptation. In reading adaptation, the model is adapted to data from each reading of the target language, including the reading from which we select held-out evaluation utterances. In language adaptation it is adapted only
to readings that are not represented in the evaluation set. This last case, of adapting to just one or several speakers of a new language (in order to ultimately have a system that generalizes beyond those speakers in the language) is not common in speech recognition experimentation. Results and findings for language adaptation will be presented in Section 8.
Training Settings
We established training, validation and test sets for each reading using a random 80/10/10 split. When pretraining or adapting the multilingual systems, we used the combined training sets of the constituent readings.
We used 80-dimensional log Mel filterbank features with 3-dimensional pitch features. We tuned hyperparameters for these models using one Aymara reading. 5 We found that a 4-layer encoder and 1-layer decoder, with 768 units for the encoder hidden size and projections, the decoder hidden size, and the attention hidden size, yielded results equal to the best deeper models. These settings were then used for training the models used in our experiments.
For the training objective, we linearly interpolated the attentional decoder cross-entropy loss with the grapheme CTC and phoneme CTC objectives. Equal weight was given to all three, since we found that to be effective in preliminary experiments. Note, however, that the effective weight of the adversarial objective changes over the course of training because of the learning rate scheduling mentioned in §4.3. We trained for 15 epochs in all cases except where otherwise noted.
Note that during adaptation we initialize the model using both the multilingual encoder and decoder. We found this to work best in preliminary experimentation on a Spanish reading.
Preliminary Investigation of the Auxiliary Objectives
In this section we evaluate the use of the auxiliary phoneme and language-adversarial objectives described in Section 4 on two divergent groups of languages that are distinct along a number of dimensions, including orthography, language family and phonology, in order to assess the auxiliary objectives' capacity to bridge the divide between these languages during pretraining. This serves as an initial exploration before further experiments in Section 7 and Section 8, where we choose from a broader set of pretraining languages.
Pretraining languages We pretrained models on two groups of languages separately and together. The first consists of six languages from the Quechuan language family, including subvarieties of Quechua I and II (qub, quf, qvs, qvw, qwh and qvh). We henceforth refer to this group as QUE. The second consists of six languages that use the Cyrillic script and we refer to this group as CYR. These languages include Nogai (nog), Bashkir (bak), Gagauz (gag), Khakas (kjh), Crimean Tatar (crh), and Russian (rus). With the exception of Russian, these languages are all Turkic. The character sets do not overlap between QUE and CYR and this was a deliberate choice in this preliminary experiment to maximize the differences between the two groups.
Evaluation languages To test the pretrained models in varied contexts, we evaluate our models on three languages: Central Aymara (ayr), South Bolivian Quechua (SB Quechua; quh), and Indonesian (ind). These languages vary in a number of dimensions: SB Quechua is very closely related to QUE, while Indonesian is distant; Aymara is phonologically very similar to Quechuan languages, but is considered to be from a different family; Aymara had a high monolingual baseline error rate, while the others are lower; and Indonesian has three readings while the others have two. However, all evaluation languages use the Latin script. Note that in this section we assess performance in the reading adaptation case, while Section 8 presents results on the held-out reading case.
Figure 2 (right): the phoneme and language-adversarial objectives are added in, causing phoneme clusters between languages to gather closer together, and language to become less relevant in cluster placement.

Experiments Table 2 compares the performance of monolingual target-language models to models adapted to the target language after being pretrained on QUE, CYR and their combination, QUE+CYR. CYR pretraining underperforms pretraining with QUE for all evaluation languages, likely due to the orthographic mismatch with all of the evaluation languages. The model pretrained on QUE+CYR also underperforms QUE. Introducing the auxiliary phoneme and language-adversarial objectives helps to overcome this performance loss, making the QUE+CYR-pretrained model the best for adaptation to Aymara and Indonesian. QUE remained the best pretraining set for adaptation to SB Quechua, which is unsurprising given how well represented SB Quechua is by the languages included in the Quechuan language group. This suggests that when a substantial
amount of data in very closely related languages is available (in this case, close to 100 hours of QUE data), then there is little to be gained from highly unrelated languages. When pretraining on QUE and CYR separately, the auxiliary objectives underperformed baseline multilingual pretraining on average. The variation in languages within these groups is far less than the variation between groups. Given that the phoneme and adversarial objectives are intended to overcome variation between pretraining languages, this result indicates that there must be a sufficient level of diversity in the pretraining languages before the auxiliary objectives are of benefit when adapting to certain target languages.
Results from pretraining on QUE+CYR showed either objective to help on average, and that the effects are complementary. Because of this, we opted to include them together in subsequent experimentation. We evaluated this best performing model on the larger set of other evaluation languages. Results in Table 3 show that in all cases multilingual pretraining of QUE+CYR with the auxiliary objectives outperformed its counterpart without the objectives (which frequently undeperformed the monolingual model), and in all but one case this led to an improvement over the monolingual baseline. 6 To gain insight into how the auxiliary objectives change the representation of speech learnt by the models, we applied 2D t-SNE dimensionality reduction (Van Der Maaten and Hinton, 2008). Figure 2 plots the representations of two phonemes in three languages learnt by the encoder 7 in the case without and with the auxiliary objectives. In the multilingual pretraining baseline, six clusters are represented for each language-phoneme combination. These appear stratified by language, with different phoneme clusters within languages close to one another. With the auxiliary objectives, phoneme clusters between languages move closer to one another, while language identity becomes less relevant in determining which phoneme clusters neighbour one another. In the latter plot, the Nogai phonemes become separated by a Russian /A/. This is particularly salient since the Nogai speaker was female, while the Russian speaker had a deep male voice.
Reading Adaptation
Table 3: Word error rate (%) comparison of adaptation of models pretrained on: Quechuan and Cyrillic-script languages (QUE+CYR), languages phonologically and phonetically similar to the target (PHON/INV), geographically proximate languages (GEO), and a massively multilingual set of languages (100-LANG). In each case we compared the average relative WER change when adding auxiliary phoneme and language-adversarial objectives (+phn+adv). Dashed entries had phonology and phonetic inventories that were not well attested in URIEL, so were not assessed.

In the previous section we explored the use of two dissimilar groups of languages in a multilingual setup. Multilingual pretraining of languages from a different language family and script benefitted from an explicit phoneme objective and adversarial objective when there was sufficient diversity in the pretraining languages. However, a change in orthography was conflated with a change in language family, geographic location, and phonological/phonetic characteristics. In this section, we investigate which factors are most important in choosing languages for multilingual pretraining and how useful it is to scale up model pretraining to many languages. This exploration is conducted in the reading adaptation scenario; language adaptation with unseen target speakers is addressed in Section 8. Beyond answering these questions, this investigation reveals more information about the utility of the proposed auxiliary objectives in different scenarios.
Phonology & Geography We test across a number of evaluation languages (c.f. Table 1) by determining, for each evaluation language, groups of pretraining languages that are similar to the evaluation languages in different ways. In order to determine language similarity in a principled way we used URIEL and lang2vec (Littell et al., 2017) to produce feature vectors for each language based on information from several linguistic resources before calculating their cosine similarity. For each language we used two feature vectors. The first is a concatenation of the lang2vec phonology average and inventory average vectors, characterizing phonological properties and phonetic inventory. The second represents geographic location. We denote these two groups PHON/INV and GEO respectively. 8 Geographic proximity may serve as a proxy for other similarities not captured in PHON/INV, including language family, orthographic similarity, and the likelihood of exchanged loan words.
We filtered for languages in the dataset with good or very good alignments before ranking them by cosine similarity with the evaluation languages in terms of phonological and phonetic similarity as well as geographical proximity. To create each of the pretraining sets, we took between 7 and 14 of the top languages, matching approximately the total duration of the phonetically/phonologically similar groups with the geographically proximate language groups. 9 For most languages, there is no overlap between the GEO and PHON/INV sets.
Massively multilingual model As a further point of comparison, we pretrain a model on around 100 languages (denoted 100-LANG), for approximately 1650 training hours in total. 10
Auxiliary Objectives Findings
The results in Table 3 extend on our findings in Section 6, continuing to support the benefit of the use of the auxiliary objectives while shedding more light on the type of language variability the objectives help to overcome. GEO and 100-LANG guage family vectors since most of the Quechuan languages were not captured as being highly similar to SB Quechua. 9 An exhaustive list of the CMU Wilderness language codes for each pretraining group can be found in Appendix A, along with durations of each pretraining set. benefitted comparably from the objectives on average, while PHON/INV did less so. QUE+CYR benefitted the most. This suggests that the objectives may help more when pretraining languages are orthographically, phonetically and phonologically diverse.
Unlike the other languages, the Swedish PHON/INV vectors were not well attested. As a result the Swedish PHON/INV group has languages with a similar phonetic inventory that were also unattested phonologically. This model underperformed the monolingual model by a large margin, suggesting that similarity of phonetic inventory alone may not be so useful alone without similarity of phonological features. Models pretrained on this set also benefitted the most from the auxiliary objectives. It may be the case that the auxiliary objectives push together representations of allophones within languages, and pronunciation variations of the same phonemes between languages. When Swedish is discounted, the average relative improvement when adding auxiliary objectives for PHON/INV becomes negligable.
The PHON/INV configurations are hurt by the auxiliary objectives for SB Quechua and Aymara and Indonesian. The PHON/INV sets for the first two of these languages emphasized Quechuan languages, and this corroborates the indication in Section 6 that the auxiliary objectives may not help so much when pretraining languages are similar. On the other hand, the Indonesian PHON/INV included Afro-Asiatic and Niger-Congo languages, as well an Indo-European language and Huave, a language isolate from Mexico, yet it was not improved by auxiliary objectives.
Choice of Pretraining Languages
The average relative word error rate (WER) change for GEO against PHON/INV was -2.2% without auxiliary objectives, and -4.4% with them, 11 suggesting that features correlated with geography are useful for guiding pretraining language selection. Counter-examples were Aymara, SB Quechua and Malagasy, which performed worse when pretrained on GEO. In the case of SB Quechua, only one Quechuan language was represented in GEO (Inga), while PHON/INV had three (qub, qvh, quf). Madagascar is far removed from where most Austronesian languages are spoken, so Malagasy's GEO set were almost all Niger-Congo 11 Discounting Swedish, this becomes +0.2% and -3.1%. Adapting to the full dataset, the auxiliary objectives underperformed both the monolingual and baselines, but yields an advantage when the model is adapted to less target language data. languages, while the PHON/INV had a diverse array of Austronesian, Indo European, Afro-Asiatic, Sino-Tibetan and Mayan languages. However, on average, these results suggest that geographical proximity is a decent guide to pretraining language selection. Another advantage is that it requires no explicit phonological features, making it applicable to a much larger number of languages.
The average relative WER change of 100-LANG against MONO was +1.3%, indicating that massively multilingual pretraining by itself not useful if the target speakers are seen in training. Using the auxiliary objectives overcame the difference, resulting in a -1.6% average relative WER change. However, pretraining with GEO+phn+adv yielded an average relative delta of -7.4% over the monolingual model. Though more languages help, they are not necessarily better than geographically proximal languages (however, results are very different when not adapting to target speakers: see Section 8).
In two cases pretraining with 100-LANG was hindered by the auxiliary objective. In one of these cases, Swedish, both 100-LANG variations substantially underperformed the monolingual baseline. One possible reason is that there is enough target language and speaker data that the multilingual pretraining and auxiliary objectives offer no benefit. We scaled training/adaptation data for Swedish from under 1 hour. Figure 3 indicates that in this case the auxiliary objectives do lead to better initialization, with gains being lost only when around 5 hours of target language and reading data are seen.
Language Adaptation
Previous sections have addressed the reading adaptation scenario, where the ASR model is adapted to speech from the target reading (i.e., where target speakers have been heard in adaptation). In this section we evaluate a language adaptation scenario, adapting to readings in the target language, but not the target reading. The question of how well a multilingual model can be adapted to a language on the basis of recordings from a small number of target-language speakers is relevant to incident response situations such as those modelled by LORELEI (Strassel and Tracey, 2016), where a single language consultant is available from whom recorded speech can be collected. We performed experiments analogous to those of the previous sections where the evaluation reading was not seen in training or adaptation. This is a challenging task, as the model must generalize to multiple speakers of a language on the basis of seeing only a few in training. Most of the findings corroborate what was found in the previous sections. Here we highlight the differences.
Massively multilingual pretraining led to substantially better performance than the other methods, unlike in the reading adaptation task. For each evaluation language, the 100-LANG model outperformed the next best method, with one exception: Indonesian. In that case the GEO set performed best, as the languages were not only geographically proximate but also consisted entirely of other Austronesian languages. The takeaway (cf. Table 4) is that you should always use more pretraining languages unless you know your target speakers, as in the reading adaptation scenario.
Auxiliary objectives remained useful on the whole. However, while the difference in WER achieved when adding the auxiliary objectives was similar to those reported in Section 7 for PHON/INV and 100-LANG, GEO and QUE+CYR no longer achieved improvements. QUE+CYR notably achieved only a -0.2% average relative WER change when adding the auxiliary objectives, while achieving -7.8% in the reading adaptation case. While the auxiliary objectives remained useful on the whole, their effect was dwarfed by the value of adding more languages.

Table 4: Adaptation to the non-target reading in the target language. All language sets use the auxiliary training objectives, which again exhibited a relative gain over the corresponding model without them. The relative deltas of 100-LANG are with respect to the next closest model on a language-by-language basis; averaged over languages, this delta is -6.0%.
Phonology versus Geography

GEO sets with or without auxiliary objectives lost their edge over PHON/INV, with high variance in scores. The amount of training data becomes the dominating variable affecting WER.
Conclusions
We have explored the utility of pretraining multilingual models on a variety of language sets, scaling to as many as 100 languages. Our experiments have demonstrated the value of auxiliary phoneme and language-adversarial pretraining objectives in a multilingual end-to-end ASR framework, particularly when the pretraining languages are diverse. Our results suggest how to pick pretraining languages when target speakers are seen in the adaptation data: find geographically proximal languages. When adapting to just several non-target speakers, exposure to more speech in pretraining is the most important factor for model generality, even if it comes from a wide range of dissimilar languages.
Figure 2: t-SNE representation of encoder states corresponding to /A/ and /i/ across Quechua (Huamalies Dos de Mayo; qvh), Russian (rus), and Nogai (nog). Left: the model without the phoneme and adversarial objective.
Figure 3: Scaling training/adaptation data for Swedish. Adapting to the full dataset, the auxiliary objectives underperformed both the monolingual and baseline models, but they yield an advantage when the model is adapted to less target language data.
Table 2: Word error rate (%) comparison of multilingual models adapted to target languages, with and without auxiliary training objectives (relative change in parentheses). Additionally including Cyrillic-script languages in pretraining (CYR) doesn't consistently improve over a model pretrained on Quechuan languages (QUE) unless additional phoneme and language-adversarial objectives (+phn and +adv) are used in combination (+phn+adv). The auxiliary objectives help when pretraining languages are varied, but hinder when they are very similar. The final four columns suggest that the objectives are complementary. Average relative word error rate change for each pretraining set when adding in the auxiliary objectives (versus no additional objectives) is indicated by Avg. rel. ∆.
https://github.com/festvox/datasets-CMU_Wilderness
4 http://www.cstr.ed.ac.uk/projects/festival/
CMU Wilderness reading ID: AYMSBU.
However, this doesn't hold in the language adaptation scenario, where the auxiliary objectives help QUE+CYR only slightly; see Section 8.
We established the correspondence between encoder states and phonemes by using forced alignment with Kaldi (Povey et al., 2011), taking the encoder state at the mid-point of the duration of each phoneme.
We didn't create PHON/INV sets for Ixil and Garap because their phonological features and phonetic inventories were not well attested, and we didn't use the lang2vec language family vectors since most of the Quechuan languages were not captured as being highly similar to SB Quechua.
An exhaustive list of the CMU Wilderness language codes for each pretraining group can be found in Appendix A, along with durations of each pretraining set.
These models were pretrained for 6 epochs.
Acknowledgments

We would like to thank Tim Baldwin for an off-hand comment that planted the language-adversarial idea in the first author's head, and to Trevor Cohn for some related discussion. Thanks also go to Alexis Michaud and the reviewers for comments.

A List of readings in each language set

Below is a collection of lists of the CMU Wilderness reading codes that comprise different groupings. This includes the target language readings; the Quechuan group; the Cyrillic-script group; the phonologically similar and geographically similar sets for each target language; and the massively multilingual set.
Alan W Black. 2019. CMU Wilderness Multilingual Speech Dataset. In ICASSP.
William Chan, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. 2015. Listen, attend and spell. In ICASSP.
Chung-Cheng Chiu, Tara N Sainath, Yonghui Wu, Rohit Prabhavalkar, Patrick Nguyen, Zhifeng Chen, Anjuli Kannan, Ron J Weiss, Kanishka Rao, Katya Gonina, et al. 2018. State-of-the-art speech recognition with sequence-to-sequence models. In ICASSP.
Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. Advances in Neural Information Processing Systems 28, pages 577-585.
Siddharth Dalmia, Ramon Sanabria, Florian Metze, and Alan W Black. 2018. Sequence-based multilingual low resource speech recognition. In ICASSP.
Van Hai Do, Xiong Xiao, Eng Siong Chng, and Haizhou Li. 2014. Cross-lingual phone mapping for large vocabulary speech recognition of under-resourced languages. IEICE Transactions on Information and Systems, E97-D(2).
Mark J F Gales, Kate M Knill, Anton Ragni, and Shakti P Rath. 2014. Speech recognition and keyword spotting for low-resource languages: BABEL project research at CUED. In Spoken Language Technologies for Under-Resourced Languages.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(1).
A. Graves, A.-R. Mohamed, and G. Hinton. 2013. Speech recognition with deep recurrent neural networks. In ICASSP.
Alex Graves, Santiago Fernandez, Faustino Gomez, and Jurgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In ICML.
Georg Heigold, Vincent Vanhoucke, Alan Senior, Patrick Nguyen, Marc'Aurelio Ranzato, Matthieu Devin, and Jeffrey Dean. 2013. Multilingual acoustic models using distributed deep neural networks. In ICASSP.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8).
Takaaki Hori, Shinji Watanabe, Yu Zhang, and William Chan. 2017. Advances in joint CTC-attention based end-to-end speech recognition with a deep CNN encoder and RNN-LM. In INTERSPEECH.
David Imseng, Petr Motlicek, Hervé Bourlard, and Philip N Garner. 2014. Using out-of-language data to improve an under-resourced speech recognizer. Speech Communication, 56.
Hirofumi Inaguma, Jaejin Cho, Murali Karthick Baskar, Tatsuya Kawahara, and Shinji Watanabe. 2018. Transfer learning of language-independent end-to-end ASR with language model fusion. arXiv:1811.02134.
Martin Karafiát, Murali Karthick Baskar, Shinji Watanabe, Takaaki Hori, Matthew Wiesner, and Jan "Honza" Černocký. 2018. Analysis of multilingual sequence-to-sequence speech recognition systems. arXiv:1811.03451.
Suyoun Kim, Takaaki Hori, and Shinji Watanabe. 2016. Joint CTC-attention based end-to-end speech recognition using multi-task learning. In ICASSP.
Kalpesh Krishna, Shubham Toshniwal, and Karen Livescu. 2018. Hierarchical multitask learning for CTC-based speech recognition. arXiv:1807.06234.
Viet Bac Le and Laurent Besacier. 2005. First steps in fast acoustic modeling for a new target language: application to Vietnamese. In ICASSP.
Patrick Littell, David Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In EACL.
Alexis Michaud, Oliver Adams, Trevor Anthony Cohn, Graham Neubig, and Séverine Guillaume. 2018. Integrating automatic transcription into the language documentation workflow: Experiments with Na data and the Persephone toolkit. Language Documentation & Conservation, 12.
Markus Müller, Sebastian Stüker, and Alex Waibel. 2017. Phonemic and graphemic multilingual CTC based speech recognition. arXiv:1711.04564.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. NeurIPS.
Christian Plahl, Ralf Schlüter, and Hermann Ney. 2011. Cross-lingual portability of Chinese and English neural network features for French and German LVCSR. In Automatic Speech Recognition and Understanding (ASRU), IEEE Workshop on.
Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, and others. 2011. The Kaldi speech recognition toolkit. In Automatic Speech Recognition and Understanding (ASRU), IEEE Workshop on.
Kanishka Rao and Haim Sak. 2017. Multi-accent speech recognition with hierarchical grapheme based models. In ICASSP.
Ramon Sanabria and Florian Metze. 2018. Hierarchical multi task learning with CTC. In IEEE Spoken Language Technology Workshop (SLT).
Stefano Scanzio, Pietro Laface, Luciano Fissore, Roberto Gemello, and Franco Mana. 2008. On the use of a multilingual neural network front-end. In INTERSPEECH.
Odette Scharenborg, Francesco Ciannella, Shruti Palaskar, Alan Black, Florian Metze, Lucas Ondel, and Mark Hasegawa-Johnson. 2017. Building an ASR system for a low-resource language through the adaptation of a high-resource language ASR system: preliminary results. In International Conference on Natural Language, Signal and Speech Processing (ICNLSSP).
Tanja Schultz. 2002. GlobalPhone: a multilingual speech and text database developed at Karlsruhe University. In Seventh International Conference on Spoken Language Processing.
Tanja Schultz and Alex Waibel. 2001. Experiments on cross-language acoustic modeling. In EUROSPEECH'01.
Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11).
Tom Sercu, Christian Puhrsch, Brian Kingsbury, and Yann LeCun. 2016. Very deep multilingual convolutional neural networks for LVCSR. In ICASSP.
Yusuke Shinohara. 2016. Adversarial multi-task learning of deep neural networks for robust speech recognition. In INTERSPEECH.
Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556.
A. Stolcke, F. Grezl, Mei-Yuh Hwang, Xin Lei, N. Morgan, and D. Vergyri. 2006. Cross-domain and cross-language portability of acoustic features estimated by multilayer perceptrons. In ICASSP.
Stephanie Strassel and Jennifer Tracey. 2016. LORELEI language packs: data, tools, and resources for technology development in low resource languages. In LREC.
Sining Sun. 2018. Domain adversarial training for accented speech recognition. In ICASSP.
Paul Taylor, Alan W Black, and Richard Caley. 1998. The architecture of the Festival speech synthesis system. In ESCA/COCOSDA Workshop (ETRW) on Speech Synthesis.
Samuel Thomas, Sriram Ganapathy, and Hynek Hermansky. 2012. Multilingual MLP features for low-resource LVCSR systems. In ICASSP.
Shubham Toshniwal, Tara N. Sainath, Ron J. Weiss, Bo Li, Pedro Moreno, Eugene Weinstein, and Kanishka Rao. 2017. Multilingual speech recognition with a single end-to-end model. In ICASSP.
László Tóth, Joe Frankel, Gábor Gosztolya, and Simon King. 2008. Cross-lingual portability of MLP-based tandem features: a case study for English and Hungarian. In INTERSPEECH.
Aditay Tripathi, Aanchan Mohan, Saket Anand, and Maneesh Singh. 2018. Adversarial learning of raw speech features for domain invariant speech recognition. In ICASSP.
Laurens Van Der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Technical report.
Karel Vesely, Martin Karafiát, Frantisek Grezl, Marcel Janda, and Ekaterina Egorova. 2012. The language-independent bottleneck features. In Spoken Language Technology Workshop (SLT), 2012 IEEE.
Ngoc Thang Vu, Florian Metze, and Tanja Schultz. 2012. Multilingual bottle-neck features and its application for under-resourced languages. In The Third International Workshop on Spoken Language Technologies for Under-resourced Languages.
Shinji Watanabe, Takaaki Hori, and John R Hershey. 2017a. Language independent end-to-end architecture for joint language identification and speech recognition. In Automatic Speech Recognition and Understanding Workshop (ASRU), IEEE Workshop on.
Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R. Hershey, and Tomoki Hayashi. 2017b. Hybrid CTC/attention architecture for end-to-end speech recognition. IEEE Journal on Selected Topics in Signal Processing.
Jiangyan Yi, Jianhua Tao, Zhengqi Wen, and Ye Bai. 2018. Adversarial multilingual training for low-resource speech recognition. In ICASSP.
Su-Youn Yoon, Kyoung-Young Kim, and Richard Sproat. 2007. Multilingual transliteration using feature based phonetic method. In ACL.
Target language readings: MLGEIV, MLGRCV, MLGRPV, IX1WBT, IXIWBT, IXLWBT, INZNTV, INZSHL, INZTSI, QUHRBV, QUHSBB, QEJLLB, QUBPBS, QUFLLB, QVSTBL, QVWTBL, QWHLLB, SPNBDA, SPNWTC, KIABSC, KIAWBT, KEKIBS, KEKSBG, SWESFB, SWESFV, AYMSBU, AYMBSB.
Evaluation readings: AYMSBU, MLGRPV, IXIWBT, INZSHL, QUHRBV, SPNBDA, KIAWBT, KEKIBS, SWESFV.
QUE (97.6 training hours): QEJLLB, QUBPBS, QUFLLB, QVSTBL, QVWTBL, QWHLLB.
CYR (59.6 training hours): NOGIBT, BAKIBT, GAGIB1, KJHIBT, RUSS76, CRHIBT.
AYR-PHON/INV (145.3 training hours): QUBPBS, TOBBSA, QUFLLB, QVSTBL, INBWBT, QEJLLB, JICWBT, QU1LSM, QUTIBS.
AYR-GEO (146.2 training hours): IGNSBB, TNATBL, GNWNTM, ESENTM, MCBTBL, GYRSBB, CBSBSP.
QUH-PHON/INV (177.9 training hours): TOBBSA, DUGBTL, QUBPBS, TZHSBM, HUSLLB, NYFBTL, NCUWBT, QEJLLB, QUFLLB, HAGGIL, NZIBSG, MNBTBL.
QUH-GEO (178.5 training hours): GNWNTM, IGNSBB, TOBBSA, ENXBSP, GYRSBB, CAXSBB, CEGNTP, TNATBL, ESENTM, TERTBL.
KEK-PHON/INV (142.1 training hours): QU1LSM, QUTIBS, TZTWBT, TUFWYI, QWHLLB, PAGPBS, UDUSIM, YUASBM.
KEK-GEO (137.0 training hours): MOPWBT, POHBSG, CA1WBT, CKIWBT, TZTWBT, QU1LSM, QUTIBS, BZJBSW.
MLG-PHON/INV (198.2 training hours): RONBSR, TGLPBS, KVNWBT, HUVTBL, KBRSIM, TPMWBT, BTXLAI, KACUBS, WMWWYI, IGNSBB, HAEBSE, IBATIV, HILHPV, TZBSBM.
MLG-GEO (205.38 training hours): WMWWYI, VMWBSM, MFEBSM, SEHBSM, TOHSBM, CCESBM, KDCPBT, CWEPBT, KKIBST, NYYBST, KSBBST, KDNBSZ, DUGBTL, GOGBST.
IND-PHON/INV (193.1 training hours): IBATIV, TGLPBS, HAEBSE, KERABT, KACUBS, NYFBTL, RONBSR, CWTATB, HUVTBL, BTXLAI, IGNSBB, JAVNRF, DUGBTL, MNKBSG.
IND-GEO (191.5 training hours): SUNIBS, NIJLAI, JAVNRF, PSELAI, IBATIV, PTULAI, MVPLAI, PPKLAI, BEPLAI, NPYLAI, LEWLAI, MWVLAI.
SWE-PHON/INV (122.4 training hours): KDJBSU, NZIBSG, ANVWBT, DGABSG, SHKBSS, SLDTBL, KUSTBL, MUYWBT, NCUWBT, LIABSL, CKOGIL.
SWE-GEO (122.4 training hours): RMCWFW, EN1NIV, RMORAM, RONBSR, GAGIB1, GAGIBT, CRHIBT, KPVIBT, LTNNVV, ALSBSA, UDMIBT, XALIBT, BAKIBT.
SPN-PHON/INV (123.7 training hours): KVNWBT, HAEBSE, HUVTBL, GUGRPV, HUSLLB, GUMTBL, NYFBTL, KWIWBT.
SPN-GEO (129.5 training hours): PORARA, LTNNVV, EN1NIV, RMORAM, ALSBSA, RMCWFW, RONBSR, GAGIB1, GAGIBT, CRHIBT, TAQWBT, FUQWBT, MYKWBT.
100-LANG (1646.8 training hours): OBOWBT, ACUTBL, SEYWBT, HAUCLV, BZHPNG, AMKWBT, GAGIB1, GNWNTM, URBWBT, RUGWBT, PAUUBS, SEHBSM, SNNWBT, KQETBL, TGOTBL, NOGIBT, XTMTBL, OJ1CBS, TNATBL, AIAWYI, PABTBL, MEJTBL, TWBOMF, HUSLLB, ESENTM, BAKIBT, HNNOMF, IFAWBT, ENXBSP, ALJOMF, PXMBSM, JAISBG, PIRWBT, DOMBEC, NINWYI, BEPLAI, JAMBSW, TERTBL, LAWNTM, URATBL, AGNWPS, TPIPNG, TTCWBT, HUUTBL, NPYLAI, KJHIBT, AZZTBL, COKWBT, KWIWBT, SABWBT, PADTBL, GUMTBL, CRHIBT, QXRBSE, RMORAM, NHYTBL, TPPTBL, TUFWYI, ZLMAVB, PRFWBT, TWULAI, GAGIBT, FARWBT, OM1TBL, RUSS76, PTULAI, MIFWBT, MIYWYI, MRWNVS, KNETBL, PBCBSS, MYYWBT, ACHBSU, ACNBSM, ADETBL, AHKTBS, AK1BSG, ALPWBT, ALSBSA, ALTIBT, ANVWBT, ATGWYI, AVNWBT, AVUWBT, AYMBSB, AYMSBU, AZEBSA, BEXWBT, BQJATB, BTXLAI, BZJBSW, CA1WBT, CARBSS, CAXSBB, CBSBSP, CMRWBT, CNLTBL, CNMRGB, CRNWBT.
| [
"https://github.com/festvox/"
] |
[
"Automated Classification of Text Sentiment",
"Automated Classification of Text Sentiment"
] | [
"Emmanuel Dufourq edufourq@gmail.com ",
"Bruce A Bassett bruce.a.bassett@gmail.com ",
"Emmanuel Dufourq ",
"Bruce A Bassett ",
"\nAfrican Institute for Mathematical Sciences Maths & Applied Maths\nAfrican Institute for Mathematical Sciences South African Astronomical Observatory Maths & Applied Maths\nUniversity of Cape Town Cape Town\nSouth Africa\n",
"\nACM Reference format\nUniversity of Cape Town\nCape TownSouth Africa\n"
] | [
"African Institute for Mathematical Sciences Maths & Applied Maths\nAfrican Institute for Mathematical Sciences South African Astronomical Observatory Maths & Applied Maths\nUniversity of Cape Town Cape Town\nSouth Africa",
"ACM Reference format\nUniversity of Cape Town\nCape TownSouth Africa"
] | [
"Thaba Nchu, South Africa"
] | The ability to identify sentiment in text, referred to as sentiment analysis, is one which is natural to adult humans. This task is, however, not one which a computer can perform by default. Identifying sentiments in an automated, algorithmic manner will be a useful capability for business and research in their search to understand what consumers think about their products or services and to understand human sociology. Here we propose two new Genetic Algorithms (GAs) for the task of automated text sentiment analysis. The GAs learn whether words occurring in a text corpus are either sentiment or amplifier words, and their corresponding magnitude. Sentiment words, such as 'horrible', add linearly to the final sentiment. Amplifier words in contrast, which are typically adjectives/adverbs like 'very', multiply the sentiment of the following word. This increases, decreases or negates the sentiment of the following word. The sentiment of the full text is then the sum of these terms. This approach grows both a sentiment and amplifier dictionary which can be reused for other purposes and fed into other machine learning algorithms. We report the results of multiple experiments conducted on large Amazon data sets. The results reveal that our proposed approach was able to outperform several public and/or commercial sentiment analysis algorithms. | 10.1145/3129416.3129420 | [
"https://arxiv.org/pdf/1804.01963v1.pdf"
] | 4,611,752 | 1804.01963 | f5857f4ee3afdf090420eda1ce398b466f507353 |
Automated Classification of Text Sentiment
2017. September 26-28, 2017. September 26-28, 2017
Emmanuel Dufourq edufourq@gmail.com
Bruce A Bassett bruce.a.bassett@gmail.com
Emmanuel Dufourq
Bruce A Bassett
African Institute for Mathematical Sciences Maths & Applied Maths
African Institute for Mathematical Sciences South African Astronomical Observatory Maths & Applied Maths
University of Cape Town Cape Town
South Africa
ACM Reference format
University of Cape Town
Cape TownSouth Africa
Automated Classification of Text Sentiment
Thaba Nchu, South Africa
SAICSIT '17, September 26-28, 2017, Thaba Nchu, South Africa. DOI: 10.1145/3129416.3129420. ACM ISBN 978-1-4503-5250-5/17/09...$15.00.
CCS CONCEPTS: Computing methodologies → Natural language processing; Bio-inspired approaches.
KEYWORDS: Sentiment analysis, genetic algorithm, machine learning
The ability to identify sentiment in text, referred to as sentiment analysis, is one which is natural to adult humans. This task is, however, not one which a computer can perform by default. Identifying sentiments in an automated, algorithmic manner will be a useful capability for business and research in their search to understand what consumers think about their products or services and to understand human sociology. Here we propose two new Genetic Algorithms (GAs) for the task of automated text sentiment analysis. The GAs learn whether words occurring in a text corpus are either sentiment or amplifier words, and their corresponding magnitude. Sentiment words, such as 'horrible', add linearly to the final sentiment. Amplifier words in contrast, which are typically adjectives/adverbs like 'very', multiply the sentiment of the following word. This increases, decreases or negates the sentiment of the following word. The sentiment of the full text is then the sum of these terms. This approach grows both a sentiment and amplifier dictionary which can be reused for other purposes and fed into other machine learning algorithms. We report the results of multiple experiments conducted on large Amazon data sets. The results reveal that our proposed approach was able to outperform several public and/or commercial sentiment analysis algorithms.
INTRODUCTION
The amount of data collected from users around the world and stored for posterity has skyrocketed over the past decade as websites such as Twitter, Amazon and Facebook have facilitated the publication and aggregation of micro opinion pieces that allow individuals to record their sentiments towards things, people and events. This data is clearly of value to researchers, organizations and companies to understand sentiment both as individuals and on average, and as well as to identify trends. The automated detection of emotions and attitudes towards a particular subject, event or entity is what we will call sentiment analysis [16,23]. Sentiment analysis has been applied to many problem domains; for instance, determining sentiments of consumers towards products, or mining social media to gain an understanding of the public's opinion on matters such as corruption [16,23,26].
For adult humans, interpreting the underlying emotions in text is usually performed unconsciously and with apparent ease. We are able to recognize emotions in emails, sentiments in our social media feed and appreciate the subtle nuances of conflicting views in novels. Nevertheless, even for humans text can be notoriously easy to misinterpret. For machines, on the other hand, sentiment analysis is highly non-trivial. From the year 2000 onwards, a number of researchers have begun contributing towards the field of sentiment analysis [23]. This area of research is highly active, increasingly so, due to the vast amount of digital information available, and the amount of sentiments expressed online. With the rapid increase in computational power available in the recent years and the extreme amount of data available online, it is clear that developing novel sentiment analysis methods will be beneficial to organizations in order to enable them to understand what the public feels about their products and services.
In this study, two genetic algorithms (GAs) are proposed for text sentiment analysis. The proposed approach optimizes the sentiments of the words in order to correctly classify as much data as possible. This research proposes a new way of representing words, as either a sentiment or an amplifier word, whereby amplifier words intensify the sentiments in sentences. The words are combined to form mathematical expressions in order to determine whether or not a given sentence is positive or negative. The following section describes the GA which was used to create the models that perform sentiment analysis.
GENETIC ALGORITHM
A GA [9] is an evolutionary algorithm [8] inspired by "survival of the fittest" in nature that can be used to solve optimization problems. A GA evolves a population of chromosomes which are made up of several genes. The size of the population is a user-defined parameter. Each chromosome represents a candidate solution to the optimization problem. Each chromosome is evaluated in order to determine how successful it is at solving the optimization problem. The evaluation is obtained by computing the fitness of each chromosome. For a maximization problem, a chromosome with a higher fitness is considered as a better one, whereas a chromosome with a smaller fitness is considered as a weaker one.
This study implements GAs to optimize the correct classification (positive or negative) of short pieces of text given words of unknown sentiment in the text. While some sentiment algorithms make use of large dictionaries of words with associated sentiment values [20], the GAs we propose can learn the type and associated sentiment values of words; although the GA can make use of sentiment dictionaries if desired. This is appropriate if there is little training data.
Algorithm 1 illustrates the pseudocode for a GA. An initial population of chromosomes is randomly created in step 2, and each chromosome is evaluated in step 3 to determine if a solution to the optimization problem exists in the initial population. In step 5, the algorithm enters into a generational loop until the maximum number of generations is met, or until a solution to the optimization problem is found. The maximum number of generations is a user-defined parameter.

Algorithm 1: Pseudocode for a GA.
2 Create an initial population of chromosomes.
3 Evaluate the initial population.
5 while the maximum number of generations has not been reached and no solution has been found do
7 Select the parents.
8 Perform the genetic operators.
9 Replace the current population with the new offspring created in step 8.
10 Evaluate the current population.
11 return the best chromosome.
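To make the flow of Algorithm 1 concrete, the following is a minimal, generic sketch of the generational loop in Java (the language the authors later report using). The Chromosome and Problem interfaces, the method names, and the omission of early stopping when a solution is found are our own simplifications, not the paper's implementation.

import java.util.ArrayList;
import java.util.List;

public class GeneticAlgorithm {

    interface Chromosome {
        double fitness();                 // higher is better
    }

    interface Problem {
        Chromosome randomChromosome();                             // step 2: random initialisation
        Chromosome select(List<Chromosome> population);            // step 7: parent selection
        List<Chromosome> operators(Chromosome a, Chromosome b);    // step 8: crossover / mutation
    }

    // Steps 2-11 of Algorithm 1: evolve a population and return the fittest chromosome found.
    static Chromosome run(Problem p, int popSize, int maxGenerations) {
        List<Chromosome> population = new ArrayList<>();
        for (int i = 0; i < popSize; i++) population.add(p.randomChromosome());  // step 2
        Chromosome best = fittest(population);                                   // step 3

        for (int g = 0; g < maxGenerations; g++) {                               // step 5
            List<Chromosome> offspring = new ArrayList<>();
            while (offspring.size() < popSize) {
                Chromosome parent1 = p.select(population);                       // step 7
                Chromosome parent2 = p.select(population);
                offspring.addAll(p.operators(parent1, parent2));                 // step 8
            }
            population = offspring;                                              // step 9
            Chromosome generationBest = fittest(population);                     // step 10
            if (generationBest.fitness() > best.fitness()) best = generationBest;
        }
        return best;                                                             // step 11
    }

    static Chromosome fittest(List<Chromosome> population) {
        Chromosome best = population.get(0);
        for (Chromosome c : population) if (c.fitness() > best.fitness()) best = c;
        return best;
    }
}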
RELATED WORK
GAs have been used before in sentiment analysis studies, though not primarily for actual sentiment determination but rather for feature selection and reduction, e.g. [1]. There the chromosomes had length equal to the total number of features, and the genes were encoded with a 0 or a 1 depending on whether or not that particular feature was to be used. The GA optimized which features to use from the original set, and an SVM classifier was then applied to that feature set in order to train and predict the reviews. Genes were encoded in a similar manner for feature selection, with the ultimate goal of reducing the number of features, in the study of Kalaivani and Shunmuganathan [13]. GAs were also used to optimize features in several other studies, such as that of Paramesha and Ravishankar [25], which used a GA in order to allocate weights to features. Govindarajan [10] proposed an ensemble approach using Naive Bayes and a GA. Smith [27] proposed the use of a GA to reduce the number of features, as did Acampora and Cosma [2].
Carvalho et al. [5] present a novel GA approach whereby a fixed chromosome is split in two parts, a positive and a negative part. A set of 25 positive and 25 negative words were seeded into the algorithm. Their approach attempts to find which of those words should be added into the respective parts of the GA chromosomes in order to maximize the accuracy of classifying Twitter tweets. A chromosome is then evaluated using a distance measure based on the words in the tweets in relation to the words in the chromosome. Thus, for example, if a particular chromosome is evaluated on some tweet, and the words in the tweet are considered to be nearer (based on the distance measure) to the positive words in the chromosome than to the negative, then the tweet is classified as positive.
Das and Bandyopadhyay [7] make use of a GA for subjectivity detection. Even though this area of research does not deal with sentiments, the research is aligned. Ten features were chosen and a number of predetermined values were assigned to each feature. Two examples of features used were parts-of-speech and SentiWordNet values. The former takes up to 45 possible parts-of-speech values, and the latter 2 values, positive or negative. The aim behind the research was to optimize the best set of features.
By contrast, the rationale behind the present study is not to propose a new GA feature selection method; instead, the focus is to propose a GA that determines the sentiment of reviews without making use of a feature set. Furthermore, our approach treats each individual piece of text with a sentiment as a mathematical formula made up of unknown variables corresponding to each word in the text. Thus, the goal is to use a GA to simultaneously solve for the unknown variables as a step towards correctly predicting the total sentiment of a piece of text.
CLASSIFICATION-VALUE PAIR
In our study, each word is assigned both a 'classification' and a 'value' that we call a classification-value pair in the form 'classification:value'. Classifications take on one of two types, namely either sentiment or amplifier. Intuitively this captures the difference between words that carry sentiment directly (e.g. 'horrible', 'sad', 'wonderful') and adjectives/adverbs that modify the sentiment of the following word (e.g. 'very', 'not', 'little'). In addition to this classification every word is given exactly one value associated with that classification, taken from this list:
• Sentiment ∈ {-1.0, 0.0, 1.0}
• Amplifier ∈ {0.5, 1.0, 1.5}
For this classification-value pair, a sentiment value of -1, 0 and 1 represents a negative, neutral and positive sentiment respectively. The three values for the amplifier represent different intensification values, i.e. a value of 1.5 is a larger amplification than a value of 0.5. These values were selected by conducting various preliminary runs.
A word is referred to as an unknown word if its classificationvalue is not known. Examples of three classification-value pairs are: sentiment:1.0, amplifier:0.5, and sentiment:-1.0.
The goal of this study is to optimize and determine the classificationvalue pairs for certain unknown words within a data set, given that, a number of words already have known classification-value pairs. The words which already have a known classification-value pair are stored in a dictionary. Words in a dictionary do not have to be optimized and their classification-value pairs are never altered.
In this study we use two dictionaries, one for sentiment words and one for amplifier words. Known sentiment words were added into the sentiment dictionary, and similarly, known amplifier words were added into the amplifier dictionary. This was done to provide seeds to guide the GA to converge to the correct solutions. Furthermore, the proposed algorithm can extend these dictionaries in order to create a sentiment lexicon. Details regarding which dictionaries were used are provided in section 7.
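As an illustration of the classification-value pair and the two seeded dictionaries, the following Java sketch shows one possible in-memory representation. The class names, field names, and the example dictionary entries ("excellent" and the negation seeds "not" and "never" from section 7) are our own illustrative choices, not code from the paper.

import java.util.HashMap;
import java.util.Map;

public class ClassificationValue {

    enum WordType { SENTIMENT, AMPLIFIER }

    // A classification-value pair, e.g. "sentiment:1.0" or "amplifier:0.5".
    record Gene(WordType type, double value) {
        @Override
        public String toString() {
            return type.name().toLowerCase() + ":" + value;
        }
    }

    public static void main(String[] args) {
        // Words with known pairs live in two dictionaries and are never altered by the GA.
        Map<String, Double> sentimentDictionary = new HashMap<>();
        sentimentDictionary.put("excellent", 1.0);
        sentimentDictionary.put("horrible", -1.0);

        Map<String, Double> amplifierDictionary = new HashMap<>();
        amplifierDictionary.put("not", -1.0);    // negation seeds used in the experiments
        amplifierDictionary.put("never", -1.0);

        // Unknown words receive a candidate pair that the GA must optimize.
        Gene unknown = new Gene(WordType.AMPLIFIER, 0.5);
        System.out.println("candidate pair for an unknown word: " + unknown);
    }
}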
PROPOSED METHODS FOR OPTIMIZING CLASSIFICATION-VALUE PAIRS
This section describes the use of machine learning in order to optimize the classification-value pairs for the unknown words in the sentences of a data set. We propose Genetic Algorithm for Sentiment Analysis (GASA). Each aspect of the GA is explained in terms of how it has been adapted for GASA in the following subsections.
GASA chromosome representation
Each gene within a chromosome is made up of the classification-value pair for an unknown word (not in the sentiment or amplifier dictionary). The length of the chromosome is equal to the number of unknown words in the training corpus. The classification for each unknown word corresponds to a gene in the chromosome, and thus the unknown words word_1, word_2, word_3, ..., word_n are mapped to gene_1, gene_2, gene_3, ..., gene_n, where n represents the number of unknown words in a training corpus. This mapping is never changed. In order to illustrate the chromosome representation, suppose there are three unknown words: word_1, word_2, word_3. Figure 1 illustrates an example of a candidate chromosome of length 3, together with the classification-value pair that each of its three genes assigns to the corresponding word.
GASA initial population generation
Prior to creating the initial population, the unknown words have to be input into the GA. The size of the initial population is set to the user-defined population size: if the population size is n, then n chromosomes are created during the initial population generation. Each chromosome has a fixed length, which is set to the number of unknown training words. The pseudocode for creating a chromosome is presented in Algorithm 2. The genes which make up the chromosome are created by randomly selecting either a sentiment or an amplifier classification and randomly assigning a value as described in section 4.

Algorithm 2: Pseudocode for creating a chromosome.
4 Randomly select a classification type.
5 Randomly select a corresponding value for the classification type previously obtained in step 4.
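A minimal sketch of this initialisation step is given below. The allowed value sets follow section 4, while the array-based Gene representation and the method names are our own illustrative choices rather than the authors' implementation.

import java.util.Random;

public class GasaInitialisation {

    enum WordType { SENTIMENT, AMPLIFIER }

    record Gene(WordType type, double value) { }

    static final double[] SENTIMENT_VALUES = {-1.0, 0.0, 1.0};
    static final double[] AMPLIFIER_VALUES = {0.5, 1.0, 1.5};

    // Algorithm 2: one gene (classification-value pair) per unknown training word.
    static Gene[] randomChromosome(int numUnknownWords, Random rng) {
        Gene[] chromosome = new Gene[numUnknownWords];
        for (int i = 0; i < numUnknownWords; i++) {
            // Step 4: randomly select a classification type.
            WordType type = rng.nextBoolean() ? WordType.SENTIMENT : WordType.AMPLIFIER;
            // Step 5: randomly select a value allowed for that type.
            double[] allowed = (type == WordType.SENTIMENT) ? SENTIMENT_VALUES : AMPLIFIER_VALUES;
            chromosome[i] = new Gene(type, allowed[rng.nextInt(allowed.length)]);
        }
        return chromosome;
    }

    // The initial population is simply populationSize such random chromosomes.
    static Gene[][] initialPopulation(int populationSize, int numUnknownWords, Random rng) {
        Gene[][] population = new Gene[populationSize][];
        for (int i = 0; i < populationSize; i++) {
            population[i] = randomChromosome(numUnknownWords, rng);
        }
        return population;
    }
}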
GASA chromosome evaluation
A chromosome has to be evaluated in order to determine how good it is at solving the optimization problem (namely how well it predicts the overall sentiment of a piece of text). Every chromosome is evaluated on each instance in the data set. In this study, an instance corresponds to text. Assume that chromosome c is being evaluated. Chromosome c is applied to every instance in the data set, and each word within the instances is examined in order to obtain its classification-value pair. Assume that chromosome c is evaluating instance i, whereby the text for instance i is made up of the following words: w_1, w_2, w_3, ..., w_n, and n denotes the length of instance i.

If a word w_i from instance i is in the sentiment dictionary, then w_i is classified as a sentiment word, and its corresponding value is retrieved from the sentiment dictionary. Similarly, if w_i is in the amplifier dictionary, then w_i is classified as an amplifier word, and its corresponding value is obtained from the amplifier dictionary. If, however, w_i is unknown, then its classification-value pair is retrieved from chromosome c.
Once the classification-value pair for every word in an instance of data has been obtained, these classification-value pairs are converted into a mathematical expression in order to obtain the sentiment for the instance. The mathematical expression is evaluated sequentially from left to right. Algorithm 3 presents the pseudocode to evaluate expressions. Amplifier words boost the sentiment words, whereas the sentiment words accumulate additively. If the final word is an amplifier, then its value is simply added onto the result. A positive output denotes a positive sentiment, and a negative output denotes a negative sentiment.

The fitness of a chromosome is determined as the total number of instances for which the sentiment output by the chromosome is equal to the correct sentiment from the data set. Assume that some data set has sentences s_1, s_2, and s_3, and these have correct sentiments of positive, negative, positive respectively. If some chromosome evaluates the sentences to negative, negative, negative, then the fitness of that chromosome is one, since it only correctly classified the second sentence.
Algorithm 3: Pseudocode for arithmetically evaluating a sentence.
input: sentence, the sentence to be evaluated
output: the sentiment for the evaluated sentence
1 begin
2 sentiment_count ← 0
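Only the opening lines of Algorithm 3 are preserved above, so the Java sketch below reconstructs the evaluation from the prose description: sentiment values accumulate, an amplifier scales the sentiment word that follows it, and a trailing amplifier is added directly to the result. The handling of consecutive amplifiers (multiplying their values together) is our assumption, as is the fitness helper at the end.

import java.util.List;

public class SentenceEvaluation {

    enum WordType { SENTIMENT, AMPLIFIER }

    record Gene(WordType type, double value) { }

    // Left-to-right arithmetic evaluation of a sentence given each word's
    // classification-value pair. A result > 0 denotes positive sentiment.
    static double evaluate(List<Gene> words) {
        double sentimentCount = 0.0;      // running total of the expression
        double pendingAmplifier = 1.0;    // multiplier waiting to be applied
        boolean amplifierPending = false;

        for (int i = 0; i < words.size(); i++) {
            Gene g = words.get(i);
            if (g.type() == WordType.AMPLIFIER) {
                if (i == words.size() - 1) {
                    sentimentCount += g.value();          // a final amplifier is added directly
                } else {
                    pendingAmplifier = amplifierPending ? pendingAmplifier * g.value() : g.value();
                    amplifierPending = true;              // it will boost the next sentiment word
                }
            } else {
                // Sentiment word: accumulate, scaled by any pending amplifier.
                sentimentCount += amplifierPending ? pendingAmplifier * g.value() : g.value();
                amplifierPending = false;
                pendingAmplifier = 1.0;
            }
        }
        return sentimentCount;
    }

    // Fitness: the number of instances whose predicted sentiment matches the label.
    static int fitness(List<List<Gene>> instances, List<Boolean> positiveLabels) {
        int correct = 0;
        for (int i = 0; i < instances.size(); i++) {
            boolean predictedPositive = evaluate(instances.get(i)) > 0;
            if (predictedPositive == positiveLabels.get(i)) correct++;
        }
        return correct;
    }
}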
GASA parent selection
Parent selection methods are used to obtain parents from the current population of chromosomes. These parents are used by the genetic operators in order to create offspring. A single parent is obtained when the parent selection method is executed. Once a chromosome has been chosen to be a parent, the selection method can select that particular chromosome again. Three common parent selection methods are fitness-proportionate, rank and tournament selection [4]. For this study, tournament selection was used given that it was shown to be a successful method by Zhong et al. [33]. Algorithm 4 presents the pseudocode for tournament selection. This selection method has one user-defined parameter, namely, the tournament size. Let k be the tournament size. Tournament selection randomly selects k chromosomes from the current GA population, and compares the fitness of each of the k chromosomes. The chromosome with the highest fitness is returned as the parent chromosome. If a tie occurs, then a random chromosome is selected to break the tie.

Algorithm 4: Pseudocode for tournament selection.
input: size, the size of the tournament
output: the best chromosome, which will be used as a parent
1 begin
2 current_best ← null
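A sketch of tournament selection consistent with this description (including random tie-breaking) might look as follows; the ScoredChromosome record is a placeholder for a chromosome with a precomputed fitness, and is not the paper's data structure.

import java.util.List;
import java.util.Random;

public class TournamentSelection {

    record ScoredChromosome(int[] genes, int fitness) { }

    // Pick `size` chromosomes at random (with replacement) and return the fittest;
    // ties are broken by keeping each tied candidate with equal probability.
    static ScoredChromosome select(List<ScoredChromosome> population, int size, Random rng) {
        ScoredChromosome best = null;
        int ties = 1;
        for (int i = 0; i < size; i++) {
            ScoredChromosome candidate = population.get(rng.nextInt(population.size()));
            if (best == null || candidate.fitness() > best.fitness()) {
                best = candidate;
                ties = 1;
            } else if (candidate.fitness() == best.fitness()) {
                ties++;
                if (rng.nextInt(ties) == 0) best = candidate; // random tie-break
            }
        }
        return best;
    }
}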
GASA genetic operators
Genetic operators are applied to parents in order to exchange genetic material between the parent chromosomes, and to consequently create novel offspring. The two most common genetic operators are mutation and crossover. Their implementation details for this study are described below.
5.5.1 GASA mutation. The mutation genetic operator makes use of a single parent chromosome. The classification-value for a single gene in the parent is modified to a new one. A user-defined parameter is associated with the mutation operator, namely the mutation application rate. Figure 2 illustrates the application of the mutation operator on a parent chromosome, and the resulting offspring is illustrated. The second gene in the parent was changed from a classification of "amplifier" with a value of 0.5 to a classification of "sentiment" with a value of 1.0.

5.5.2 GASA crossover. The crossover genetic operator exchanges genetic material between two parent chromosomes, parent_1 and parent_2, and consequently creates two offspring, child_1 and child_2. There are several variations of the crossover genetic operator, such as uniform, one-point and two-point crossover.
The crossover method we implement randomly selects a position p in the range [0, n], where n denotes the length of the chromosome; the same position p must be selected within the two parents. Two offspring are created, and all the genes except those at position p are copied across to the corresponding offspring without modification. The genes at position p are swapped, i.e., the gene in position p from parent_1 is inserted into position p in child_2, and similarly, the gene in position p from parent_2 is inserted into position p in child_1. Figure 3 illustrates the application of the proposed crossover operator on two parent chromosomes; the resulting offspring are also illustrated. In this case, the value of p was 1, implying that the first gene was swapped amongst the parent chromosomes.
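The two operators can be sketched as follows. The Gene record and the value sets mirror the earlier sketches, and the decision to resample a completely new classification-value pair during mutation is our reading of the description above; none of this is the authors' code.

import java.util.Random;

public class GasaOperators {

    enum WordType { SENTIMENT, AMPLIFIER }

    record Gene(WordType type, double value) { }

    static final double[] SENTIMENT_VALUES = {-1.0, 0.0, 1.0};
    static final double[] AMPLIFIER_VALUES = {0.5, 1.0, 1.5};

    // Mutation: replace the classification-value pair of one randomly chosen gene.
    static Gene[] mutate(Gene[] parent, Random rng) {
        Gene[] child = parent.clone();
        int position = rng.nextInt(child.length);
        WordType type = rng.nextBoolean() ? WordType.SENTIMENT : WordType.AMPLIFIER;
        double[] allowed = (type == WordType.SENTIMENT) ? SENTIMENT_VALUES : AMPLIFIER_VALUES;
        child[position] = new Gene(type, allowed[rng.nextInt(allowed.length)]);
        return child;
    }

    // Crossover: copy both parents and swap the genes at a single random position p.
    static Gene[][] crossover(Gene[] parent1, Gene[] parent2, Random rng) {
        Gene[] child1 = parent1.clone();
        Gene[] child2 = parent2.clone();
        int p = rng.nextInt(parent1.length);
        child1[p] = parent2[p];   // gene at p from parent_2 goes into child_1
        child2[p] = parent1[p];   // gene at p from parent_1 goes into child_2
        return new Gene[][] {child1, child2};
    }
}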
EXPERIMENTAL SETUP
This section describes the experimental setup which was used in order to evaluate the performance of GASA. GASA was programmed in Java and the experiments were conducted at the University of Geneva on the Baobab cluster.
Data sets
Based on the literature surveyed, there is no consistency in terms of the number of data sets used in previous studies. Furthermore, the total number of reviews also varies from one study to another. For example, Che et al. [6] used a data set containing only 878 reviews, whereas the data set used in the study of Wang et al. [32] had 108891 reviews; the number of reviews differs largely between these two. Pang et al. [24] and Govindarajan [10] used 2000 reviews, whereas Acampora and Cosma [2] used 95084. Paramesha and Ravishankar [25] used 2243. Carvalho et al. [5] used two data sets which had 359 and 1908 reviews. There is no study which guides researchers to a set of recommended benchmark data sets for sentiment analysis.
For this study, eight data sets were constructed from several Amazon data sets. The Amazon data sets were obtained from Leskovec and Krevl [14] and McAuley et al. [18]. Each instance in the Amazon data sets is made up of a short summary and a review which were provided by a user. An example of an instance is as follows:
• summary: Pittsburgh - Home of the OLDIES
• review: I have all of the doo wop DVD's and this one is as good or better than the 1st ones. Remember once these performers are gone, we'll never get to see them again. Rhino did an excellent job and if you like or love doo wop and Rock n Roll you'll LOVE this DVD !

Four Amazon data sets were randomly selected, namely: Cell Phones and Accessories (Cellphone), Office Products (Office Prod), Grocery and Gourmet Food (Foods) and Video Games. From these four large data sets, eight data sets (four summary and four review data sets) were created for this study. The summary data sets were created by randomly selecting 1000 positive and 1000 negative summaries from a problem domain. Similarly, the review data sets were created by randomly selecting 1000 positive and 1000 negative reviews from a problem domain. For example, the created Cellphone review data set had reviews selected from the Amazon Cellphone data set only. Similarly, the created Foods summary data set had summaries selected from the Amazon Foods data set only.
These data sets were created since they represent different problem domains and contain a similar number of instances as compared to that presented in [22,24] and allow for a large number of experiments to be performed. The eight data sets used in this study are listed in table 1. Stop words and other irrelevant words were not removed from the data. For these to have no contribution to the overall sentiment, GASA must classify them as either 'sentiment' or 'amplifier' with a value of 0.
Experimental parameters
The parameters used for the GASA experiments are presented in table 2. These parameters were obtained by preliminary runs. The following section presents the results obtained by GASA on several experiments.
RESULTS AND DISCUSSION
This section is split into three subsections. The purpose of the algorithm discussed in the first subsection was to determine whether a given word was a sentiment or an amplifier word. The second subsection describes experiments whereby the goal was to determine the value of the sentiment word. And finally, the third subsection presents the results when GASA was compared to other sentiment analysis algorithms. The results from the first two experiments demonstrate GASA's ability to generate a sentiment lexicon, whereas the third experiment illustrates how GASA performs as a sentiment analysis algorithm. The amplifier dictionary was seeded with two words: "not" and "never"; each had a value of -1, representing negation. The sentiment dictionary was obtained from [11], which we refer to as "HuLiu-6786". This dictionary contains 6786 known sentiment words labelled as either positive or negative.
Predicting 'sentiment' or 'amplifier'
Several experiments were conducted whereby the classification problem was converted into a 2-class problem, namely sentiment and amplifier classes. Each word in the HuLiu-6786 dictionary was considered as a sentiment word. Ten-fold cross-validation was used on the HuLiu-6786 dictionary whereby during each experiment, 9 folds from the dictionary were used for training, and the remaining fold from the dictionary was used for testing. GASA had to predict whether each word in the test fold was a sentiment or an amplifier. The training and testing data was made up of dictionary words and not reviews or summary data.
GASA's fitness function did not take into consideration (during the evolutionary process) whether or not a training dictionary word was correctly classified as a sentiment or not. Thus, during the evolutionary process for these experiments, GASA did not directly optimize the chromosomes in order to correctly distinguish between the two classes. Instead, GASA's goal was to correctly classify the overall sentiment of as many instances (reviews or summary data) as possible.
For these experiments, two data sets were created from the Cellphone, Office Products, Foods and Video Games data sets. Namely, all of the summary data combined and all of the review data combined. Thus, the two combined data sets had 8000 instances each.
The results for these experiments are presented in tables 3 and 4. Each row in the table represents a particular word frequency condition; this is followed by the corresponding average number of dictionary words in the training data which met the word frequency condition and the average test accuracy across the 10-folds. Note that this test accuracy is in terms of the 2-class problem of distinguishing between a sentiment or amplifier word as described above.
The word frequency condition is read as follows: a value of '> 10' means that the experiment only took into consideration the training dictionary words which occurred at least 10 times in the review/summary data. In table 3, there were an average of 446 training dictionary words with a frequency of at least 10. Similarly, a value of '> 250' means that the experiment only took into consideration the training dictionary words which occurred at least 250 times in the review/summary data. There were an average of 23 words in the training data with a frequency of at least 250. The condition '> 0' implies that a dictionary word had to occur at least once in the review/summary data.
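In code, the frequency condition amounts to counting how often each training-dictionary word occurs in the review/summary instances and keeping only sufficiently frequent words; the sketch below follows the paper's reading of '> 10' as 'occurred at least 10 times', and the tokeniser is a simplifying assumption.

```python
from collections import Counter

def words_meeting_condition(dictionary_words, instances, min_freq):
    # instances: list of review/summary strings; a simple whitespace tokeniser is assumed here
    counts = Counter(token for text in instances for token in text.lower().split())
    return [w for w in dictionary_words if counts[w] >= min_freq]

# e.g. the '> 250' condition on the combined review data:
# frequent_words = words_meeting_condition(training_dictionary_words, combined_reviews, 250)
```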
The purpose of the word frequency condition was to determine the effect of the number of times a word was present in the training data on GASA's ability to correctly classify the words in the test set which also had such a frequency. When all of the words are taken into consideration, i.e. a frequency value '> 0', GASA achieved an accuracy of 51.39% and 55.11% on the review and summary data respectively. The accuracy improved when the word frequency condition was increased. In terms of the review data, the accuracy went from 51.39% to 72.00% when the word frequency condition was increased from 'greater than 0' to 'greater than 250'.
For the combined summary data, when the words had a frequency of at least 20 the accuracy was 75.61% as opposed to an accuracy of 55.11% for a frequency condition greater than 0. The combined review data set had more words than the combined summary data set because the summaries are short text. For this reason, the conditions were stopped at 20 for the combined summary data. Words from the dictionary which occur with a small frequency are more challenging for GASA to correctly classify as a sentiment or amplifier since they occur infrequently in the data. Nonetheless, the findings reveal that GASA is able to extend a sentiment and amplifier lexicon provided that the words occur with a large frequency in the training data.
Predicting the value of the sentiment
A set of experiments was conducted in order to determine how effective GASA would be at classifying the sentiment value of a set of words instead of sentences. In order to achieve this, the HuLiu-6786 dictionary was used, and a certain percentage of the words in the dictionary were considered as unknown. The problem was converted into a 2-class classification problem, namely positive and negative sentiment values. Thus, in terms of the classification-value pair, only the "value" aspect was taken into consideration.
The HuLiu-6786 dictionary contained more sentiment words than were present in the data sets, and thus only the dictionary words found in the training data sets were considered -this set was named S.
Ten-fold cross-validation was used in the following manner: 10 folds were randomly created from S, 9 folds were seeded into GASA and the algorithm was executed as defined in section 5. At the end of the GA generational loop the algorithm had to predict the sentiment value of the words in the test fold as either "positive" or "negative". The predictions were then compared against the correct values in order to determine the accuracy. Similarly to the experiment described in subsection 7.1, GASA did not directly optimize the chromosomes in order to correctly distinguish between the two classes. GASA's objective was to correctly classify the overall sentiment of as many instances (reviews or summary data) as possible. For these experiments, the same data sets which were described in subsection 7.1 were used. The results for these experiments are presented in tables 5 and 6. Subsection 7.1 describes how to interpret the tables. Dictionary words in the combined summary data set which had a frequency value of at least 20 resulted in an accuracy of 95.12%. Sentiment words had to appear a greater number of times in the combined review data set in order to achieve a higher accuracy; more precisely, words which had a frequency of at least 250 times resulted in an accuracy of 82.61%. These results reveal that GASA is able to extend a sentiment lexicon provided that the words occur frequently.
Comparison of GASA with commercial Sentiment Tools
How good is GASA? To check, we compared GASA seeded with the HuLiu-6786 dictionary to other sentiment analysis methods including AlchemyAPI [12], MeaningCloud [19], NLTK [21], Lexalytics [15], LingPipe [3], Stanford sentiment analysis [17,28], SentiStrength [30,31] and Dandelion API [29]. AlchemyAPI, MeaningCloud, Lexalytics and Dandelion are commercial APIs. LingPipe and SentiStrength have both commercial and non-commercial licences. The results of the comparison are presented in table 7.
In terms of the summary data, the top 3 ranking methods in order of performance were LingPipe, AlchemyAPI and GASA with an average test accuracy of 77.75%, 72.88% and 69.92% respectively. When comparing GASA to the four commercial APIs, AlchemyAPI achieved the best accuracy, while GASA outperformed Dandelion, Lexalytics and MeaningCloud.
In terms of the review data sets, the top three performing methods were LingPipe, AlchemyAPI and SentiStrength. GASA was outperformed by two commercial APIs, namely AlchemyAPI and Dandelion. GASA achieved higher test accuracy than the other two commercial APIs, namely Lexalytics and MeaningCloud.
Appendices A and B illustrate several examples of the review data used, together with the sentiment predicted for each sample review by GASA and other sentiment analysis methods. The reviews were randomly selected in order to illustrate cases where GASA correctly and incorrectly classified the sentiment.
EXTENDING GASA (CA-GASA)
When determining the classification for an unknown word w, the GASA algorithm does not take into consideration the words before and after w, i.e. it is context independent. This ignores the fact that many words have different meanings -with different sentiments. How can we begin to allow for multiple, context-dependent sentiments? We propose to allocate a context-dependent classification to an unknown word w; an approach we call Context Aware GASA (CA-GASA). In order to achieve this, several modifications to GASA are required. The primary modification lies within the representation of the chromosomes.
Each gene contains two principal parts, the context classification and the context-free classification. When a word in an instance of data is evaluated, the classification-value pair is obtained from either the context classification or the context-free classification. Two lists of words are used in order to make this decision, namely list_next and list_previous. When a CA-GASA chromosome is evaluated on an instance of data i on a word w, the context classification-value pair is allocated if the word w is surrounded by the words in list_next and list_previous. If this is not the case, then the context-free classification-value pair is allocated. This process is further discussed below. Figure 4 illustrates an example of a CA-GASA chromosome.
Table 7: Test accuracy (%) illustrating a comparison between other commercial and non-commercial sentiment analysis methods and GASA. The 70/30 holdout method was used, and all of the data sets had a size of 2000. Types "S" and "R" denote summary and review data sets respectively. Watson refers to Alchemy API, MC to MeaningCloud, LEX to Lexalytics, LP to LingPipe, SS to SentiStrength, DL to Dandelion and SD to Stanford sentiment analysis.
In order to enable multiple classification-value pairs to be associated with a word, a new gene encoding is used. For a word w, each gene has the following properties:
• The word w.
• The context rule, which is defined as follows:
- The maximum possible size of the next context words, denoted as next_size.
- The maximum possible size of the previous context words, denoted as previous_size.
- The list of the next context words, denoted as list_next.
- The list of the previous context words, denoted as list_previous.
- The number of words to look ahead and compare with list_next, denoted as number_ahead.
- The number of words to look behind and compare with list_previous, denoted as number_behind.
- The context classification-value pair.
• The context-free classification-value pair.
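One possible encoding of such a gene is sketched below; the field names mirror the description above, but the concrete data structure used by the authors is not shown in the text, so this is an illustration rather than their implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# A classification-value pair, e.g. ("sentiment", -1.0) or ("amplifier", 0.5)
ClassValue = Tuple[str, float]

@dataclass
class CAGASAGene:
    word: str
    next_size: int                                    # max size of list_next
    previous_size: int                                # max size of list_previous
    list_next: List[str] = field(default_factory=list)
    list_previous: List[str] = field(default_factory=list)
    number_ahead: int = 1                             # words to look ahead in the instance
    number_behind: int = 1                            # words to look behind in the instance
    context_pair: ClassValue = ("sentiment", 0.0)     # used when the context rule fires
    context_free_pair: ClassValue = ("sentiment", 0.0)
```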
CA-GASA chromosome evaluation
Assume that a CA-GASA chromosome c is being applied to an instance of data. A word w in the sentence is evaluated as follows. Starting from the word w, look at the next number_ahead words from w and add them to the list list_x. Once again, starting from the word w, look at the previous number_behind words from w and add them to the list list_y.
Let the size of list_x be denoted as size_x, and let the size of list_y be denoted as size_y. Let the number of words in the intersection between list_x and list_next be denoted by a, and let the number of words in the intersection between list_y and list_previous be denoted by b.
If (a + b) / (size_x + size_y) ≥ 0.5, then the word w is classified by the context classification-value. Otherwise, the word w is classified by the context-free classification-value.
The CA-GASA chromosome in figure 4 has one next context word, "ship", and one previous context word, "book". Assume the sentence "the ship sunk" is being evaluated, and the classification-value for the word "sunk" is being determined. In this case list_x is empty, and list_y = {ship, the}. Consequently, size_x = 0 and size_y = 2. The intersection between list_x and list_next is empty, and the intersection between list_y and list_previous is {ship}, and thus, a = 0 and b = 1. The word "sunk" is classified by the context classification-value since (0 + 1) / (0 + 2) ≥ 0.5, i.e. "sunk" is classified as a sentiment word with a value of -1.0 (since the context classification-value pair in the figure is a sentiment with a value of -1.0).
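The decision rule and the worked example can be expressed directly in code; the sketch below is an illustrative re-implementation, and the context lists passed in the example call are chosen so that "ship" matches a previous-context word, reproducing the decision described above.

```python
def classify_with_context(tokens, position, list_next, list_previous,
                          number_ahead, number_behind,
                          context_pair, context_free_pair):
    list_x = tokens[position + 1 : position + 1 + number_ahead]   # words ahead of w
    list_y = tokens[max(0, position - number_behind) : position]  # words behind w
    a = len(set(list_x) & set(list_next))
    b = len(set(list_y) & set(list_previous))
    total = len(list_x) + len(list_y)
    if total > 0 and (a + b) / total >= 0.5:
        return context_pair
    return context_free_pair

# Worked example: classifying "sunk" in "the ship sunk"
result = classify_with_context(
    ["the", "ship", "sunk"], 2,
    list_next=[], list_previous=["ship"],
    number_ahead=1, number_behind=2,
    context_pair=("sentiment", -1.0),
    context_free_pair=("sentiment", 0.0))
print(result)   # ('sentiment', -1.0): a = 0, b = 1, (0 + 1) / (0 + 2) >= 0.5
```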
CA-GASA Results
From table 8, when HuLiu-6786 was seeded into the proposed methods, it is observed that GASA outperformed CA-GASA on 2 summary data sets, and tied in the other data sets, whereas CA-GASA outperformed GASA on 3 of the review data sets. One drawback of CA-GASA is that the search space is significantly larger than GASA and as a result the training time is much longer. Given this drawback, CA-GASA was not tested against the other sentiment analysis algorithms.
In order to address the large training time, it would be of interest to determine if an approach could be proposed in order to find which words in some data set have more than one meaning, and to create the context classification for those words only. This would reduce the complexity of CA-GASA whilst retaining the ability to perform word disambiguation. Unknown words which only express one sentiment regardless of the context could be represented using GASA, and words which have more than one sentiment could use the CA-GASA representation.
CONCLUSION
Being able to determine the sentiment of text is useful for businesses and other entities seeking to understand people's opinions on their products and services. This study proposed a GA approach to classify the sentiment of sentences by optimizing unknown words as either sentiment or amplifier words. It also proposed a way to represent the sentiments and amplifiers through simple mathematical expressions in order to evaluate the final sentiment of sentences. The experimental results revealed that GASA was able to outperform certain commercial APIs. One advantage of GASA is that the algorithm can grow a sentiment dictionary which can be reused or further improved upon. The experiments suggested that if a particular word appeared a large number of times in the training data set, then the proposed method is likely to correctly classify its sentiment.
We also proposed CA-GASA; the rationale behind this modification was to provide the ability to allocate a sentiment based on the context in which a word appears. This method requires additional work to reduce its complex search space. It would also be of interest to investigate whether an ensemble of GASA chromosomes could outperform the accuracy of a single one.
A CORRECTLY CLASSIFIED BY GASA
This section presents a random sample of the reviews which were correctly classified by GASA. The reviews were obtained from the Video Game data set. The reviews were lemmatized using Stanford CoreNLP [17]. The sentence "we thought this movie was quite entertaining" is lemmatized as follows: "we think this movie be quite entertaining". The results are compared to the following four sentiment analysis entities: NLTK, Alchemy API, SentiStrength and MeaningCloud. The reviews are as follows:
Text: "have purchase the original when release several year ago, and thoroughly enjoy it, I be excite to see a new version out for 2006. it live up to the challenge I expect from this game. it be difficult to find truly challenging puzzle game, and you will not be disappoint with this. my only disappoint come after solve it as the original provide a video of the programmer, although this version do offer the opportunity to replay and select from several final outcome."
Result: Correctly classified by GASA as positive.
Text: "have not finish it yet, but it sure be a lot of fun! yes, there be/will be even better game. and yes, maybe GTA V be better (it have at least three time more budget so no surprise there). but Watch Dogs be truly a great game! some have unrealistically high expectation or be real hater and should just stick to buy gta every five year instead of buy each game and then give they bad review over and over... before write another obvious review: yes, WE all KNOW that "gta5" be NOT "watch dog" "call of duty" be NOT "candy crush" and "Far Cry 3" be NOT "Tetris"" Result: Correctly classified by GASA as positive.
Text: "I recommend this game. very little communication loss or problem. if problem occur they be very good as about replace what you lose. the game be fun to play."
Result: Correctly classified by GASA as positive.
Text: "I buy this game because I remember how Madden use to be. I have hear the gameplay be break (it be, horribly), but I think, hey, if I get use to the stupid game glitch at least it will look good. no. wow be I wrong. everything about this game be atrocious . the only thing that look crisp in this game be the score, which incidentally be the only reason I know it be display in hd. every time a player score a touchdown he literally run through the stand and disappear into the mesh. I could go on, but just think that I pay more than $ 5 for this game be make I extremely angry right no."
Result: Correctly classified by GASA as negative.
Text: "I play game on a desktop pc in my office, and on a laptop in the living room. I also frequently rebuild my machine and upgrade hardware. I be interested in the online community that will follow this game, however I will not support such strict measure in copy protection. I have cancel my pre-order and may purchase later once this limited activation process be remove."
Result: Correctly classified by GASA as negative.
B INCORRECTLY CLASSIFIED BY GASA
This section presents a random sample of the reviews which were incorrectly classified by GASA. Refer to appendix A for details regarding the data, lemmatization and the algorithms used for comparison. The reviews are as follows:
Text: "I like final Fantasy 13 but it just do not draw I in that well and the story be slightly confusing. with that be say I absolutely love final Fantasy 13-2! the story be weird because of time travel, but I think it be awesome. they seem to have fix everything about the first one. I also like the fact that this game have multiple ending and new game plus! buy it!"
Result: The correct classification for the review is positive. GASA classified it as negative. Three of the algorithms correctly classified the text as positive, and one algorithm classified it as negative.
Text: "outstanding shooter game. it be set in world war two and you better not run out in the open. good graphic and story."
Result: The correct classification for the review is positive. GASA classified it as negative. Only one algorithm correctly classified the text. Two algorithms classified it as neutral and the remaining algorithm classified it as negative.
Text:"they sit a little bit loose on the controller's peg, so there be some extra play. not shabby, not stellar. put some more life into a controller that you think you be go to lose."
Result: The correct classification for the review is positive. GASA classified it as negative. All of the algorithms incorrectly classified the text as negative.
Text: "this headset rock, that be all you need to know. unless you need more, here be some pro's and con's. pro's 1. affordable price2. long cable3. lightweight design4. adjustable microphone5. Chat Boost / Independent Game and Chat Sound6. stereo ExpanderCon's 1. slightly complicated set -up2. cheap ear Cushions The hiss sound people complain about be negligible, it disappear after just a few minute of use and be otherwise drown out by game. amazing quality and durability at a affordable price, highly recommend. UPDATE : 2/18/11 the microphone serve I very well for only 17 day before the Right Earphone cease to function. a day later, the right earphone completely fall off. I get off recommend the product initially, but the fact that the headset can stop work so quickly be terrible. I be return these for a refund and be go to try another headset. I no longer recommend these headphone."
Result: The correct classification for the review is negative. GASA classified it as positive. Three of the algorithms correctly classified the text as negative whereas the other classified it as neutral.
Text:"Brunswick pro Bowling be not worth much. game be slow, unnatural, and poor scripting. I can not believe someone can not make a better program then the Wii bowling which be do pretty good. do not waste you money. I want sport game that actually feel and act like real sport. how hard can that be?"
Result: The correct classification for the review is negative. GASA classified it as positive. Three of the algorithms correctly classified the text as negative whereas the other classified it as positive.
Figure 1: Example of a GASA chromosome. Word 1 in gene position 1 is classified as a sentiment word with a value of 1.0; word 2 in gene position 2 is classified as an amplifier word with a value of 0.5; word 3 in gene position 3 is classified as a sentiment word with a value of 0.
Algorithm 2: Creating a chromosome. Input: size, the number of unknown words. The length of the chromosome is initialised to size, and each gene in the chromosome is then assigned.
Figure 2: Example of GASA mutation. The second gene was selected for mutation and was changed from an amplifier with a value of 0.5 to a sentiment with a value of 1.0. The other genes remain unchanged.
Figure 3: Example of GASA crossover. The first gene was swapped between the parents, i.e. the amplifier in the first gene from parent 1 was swapped with the sentiment in the first gene from parent 2. The result of the crossover is observed in the children chromosomes. All of the other genes remain unchanged.
Figure 4: Example of a CA-GASA chromosome.
Table 1: Data sets used in this study. Four review and four summary data sets were created. Each data set had 2000 instances.
Data set | Number of positive/negative instances
Cellphone | 1000/1000 reviews
Office Prod | 1000/1000 reviews
Foods | 1000/1000 reviews
Video Games | 1000/1000 reviews
Cellphone | 1000/1000 summaries
Office Prod | 1000/1000 summaries
Foods | 1000/1000 summaries
Video Games | 1000/1000 summaries
Table 2: GASA parameters. These were obtained from preliminary runs.
GASA Parameter | Value
Population Size | 200
Parent selection method | Tournament
Tournament size | 7
Maximum number of generations | 500
Crossover rate | 60%
Mutation rate | 40%
Table 3: Test accuracy (%) results on the two class problem (sentiment and amplifier) on all of the review data combined into a single data set. Ten-fold cross-validation was used.
Word Frequency | Total number of dictionary words | Accuracy (%)
> 0 | 1917 | 51.39
> 10 | 446 | 54.80
> 15 | 334 | 55.47
> 20 | 260 | 56.66
> 100 | 56 | 64.52
> 250 | 23 | 72.00
Table 4: Test accuracy (%) results on the two class problem (sentiment and amplifier) on all of the summary data combined into a single data set. Ten-fold cross-validation was used.
Word Frequency | Total number of dictionary words | Accuracy (%)
> 0 | 626 | 55.11
> 10 | 76 | 68.42
> 15 | 56 | 67.86
> 20 | 41 | 75.61
Table 5: Test accuracy (%) results on the two class problem (positive and negative sentiment) on all of the review data combined into a single data set. Ten-fold cross-validation was used.
Word Frequency | Total number of dictionary words | Accuracy (%)
> 0 | 1917 | 36.62
> 10 | 446 | 44.39
> 15 | 334 | 46.71
> 20 | 260 | 49.23
> 100 | 56 | 69.64
> 250 | 23 | 82.61
Table 6: Test accuracy (%) results on the two class problem (positive and negative sentiment) on all of the summary data combined into a single data set. Ten-fold cross-validation was used.
Word Frequency | Total number of dictionary words | Accuracy (%)
> 0 | 626 | 53.04
> 10 | 76 | 86.84
> 15 | 56 | 92.86
> 20 | 41 | 95.12
Table 8: Test accuracy (%) on the summary and review data showing a comparison between GASA and CA-GASA. Ten-fold cross-validation was used, and all of the data sets had a size of 2000. Both GASA and CA-GASA were seeded with the HuLiu-6786 sentiment dictionary.
Data set | GASA (Summary) | CA-GASA (Summary) | GASA (Review) | CA-GASA (Review)
Cellphone | 73.55 | 71.75 | 69.10 | 70.05
Office Prod | 71.75 | 70.70 | 71.00 | 68.20
Foods | 65.75 | 65.75 | 67.15 | 67.35
Video Games | 69.40 | 69.40 | 65.45 | 67.00
ACKNOWLEDGMENTSThe financial assistance of the National Research Foundation (NRF) towards this research is hereby acknowledged. Opinions expressed and conclusions arrived at, are those of the author and are not necessarily to be attributed to the NRF. The computations were performed at the University of Geneva on the Baobab cluster. The authors would like to thank Alchemy API, Dandelion API, Lexalytics, MeaningCloud and SentiStrength for granting access to their APIs. The authors would like to thank Etienne Vos for his feedback and comments.
REFERENCES
[1] Ahmed Abbasi, Hsinchun Chen, and Arab Salem. 2008. Sentiment Analysis in Multiple Languages: Feature Selection for Opinion Classification in Web Forums. ACM Transactions on Information Systems 26, 3, Article 12 (June 2008), 34 pages.
[2] G. Acampora and G. Cosma. 2014. A hybrid computational intelligence approach for efficiently evaluating customer sentiments in E-commerce reviews. In Intelligent Agents (IA), 2014 IEEE Symposium on. 73-80.
[3] Alias-i. 2008. LingPipe 4.1.0. http://alias-i.com/lingpipe.
[4] Tobias Blickle and Lothar Thiele. 1996. A Comparison of Selection Schemes Used in Evolutionary Algorithms. Evolutionary Computation 4, 4 (Dec. 1996), 361-394.
[5] Jonnathan Carvalho, Adriana Prado, and Alexandre Plastino. 2014. A Statistical and Evolutionary Approach to Sentiment Analysis. In Proceedings of the 2014 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT) - Volume 02 (WI-IAT '14). IEEE Computer Society, Washington, DC, USA, 110-117.
[6] Wanxiang Che, Yanyan Zhao, Honglei Guo, Zhong Su, and Ting Liu. 2015. Sentence Compression for Aspect-Based Sentiment Analysis. Audio, Speech, and Language Processing, IEEE/ACM Transactions on 23, 12 (Dec 2015), 2111-2124.
[7] A. Das and S. Bandyopadhyay. 2010. Subjectivity Detection using Genetic Algorithm. In 1st Workshop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA10).
[8] Agoston E. Eiben and J. E. Smith. 2003. Introduction to Evolutionary Computing. Springer-Verlag.
[9] David E. Goldberg. 1989. Genetic Algorithms in Search, Optimization and Machine Learning (1st ed.). Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA.
[10] M. Govindarajan. 2013. Sentiment Analysis of Movie Reviews using Hybrid Method of Naive Bayes and Genetic Algorithm. International Journal of Advanced Computer Research 3, 4 (2013).
[11] Minqing Hu and Bing Liu. 2004. Mining and Summarizing Customer Reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '04). ACM, New York, NY, USA, 168-177.
[12] International Business Machines Watson. 2005. AlchemyAPI Service.
[13] P. Kalaivani and K. L. Shunmuganathan. 2015. Feature Reduction Based on Genetic Algorithm and Hybrid Model for Opinion Mining. Scientific Programming 2015, Article 12 (Jan. 2015), 1 pages.
[14] Jure Leskovec and Andrej Krevl. 2014. SNAP Datasets: Stanford Large Network Dataset Collection. http://snap.stanford.edu/data. (June 2014).
[15] Lexalytics. 2003. https://www.lexalytics.com/.
[16] Bing Liu. 2015. Sentiment Analysis: Mining Opinions, Sentiments, and Emotions. Cambridge University Press.
[17] Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Association for Computational Linguistics (ACL) System Demonstrations. 55-60.
[18] Julian McAuley, Rahul Pandey, and Jure Leskovec. 2015. Inferring Networks of Substitutable and Complementary Products. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '15). ACM, New York, NY, USA, 785-794.
[19] MeaningCloud. 2015. MeaningCloud is a brand by MeaningCloud LLC, a s|ngular company. https://www.meaningcloud.com/.
[20] Cataldo Musto, Giovanni Semeraro, and Marco Polignano. 2014. A Comparison of Lexicon-based Approaches for Sentiment Analysis of Microblog Posts. In Proceedings of the 8th International Workshop on Information Filtering and Retrieval co-located with XIII AI*IA Symposium on Artificial Intelligence (AI*IA 2014). 59-68.
[21] NLTK. 2010. http://text-processing.com/.
[22] Bo Pang and Lillian Lee. 2004. A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics.
[23] Bo Pang and Lillian Lee. 2008. Opinion Mining and Sentiment Analysis. Foundations and Trends in Information Retrieval 2, 1-2 (Jan. 2008), 1-135.
[24] Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs Up?: Sentiment Classification Using Machine Learning Techniques. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing - Volume 10 (EMNLP '02). Association for Computational Linguistics, Stroudsburg, PA, USA, 79-86.
[25] K. Paramesha and K. C. Ravishankar. 2013. Optimization of Cross Domain Sentiment Analysis Using Sentiwordnet. International Journal in Foundations of Computer Science & Technology 3, 5 (2013).
[26] Kumar Ravi and Vadlamani Ravi. 2015. A Survey on Opinion Mining and Sentiment Analysis. Knowledge-Based Systems 89, C (Nov. 2015), 14-46.
[27] P.W.H. Smith. 2010. Using genetic algorithms in word-vector optimisation. In Computational Intelligence (UKCI), 2010 UK Workshop on. 1-5.
[28] Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Citeseer, 1631-1642.
[29] SpazioDati S.r.l. 2013. Dandelion API. http://dandelion.eu/.
[30] Mike Thelwall, Kevan Buckley, and Georgios Paltoglou. 2012. Sentiment Strength Detection for the Social Web. Journal of the Association for Information Science and Technology 63, 1 (Jan. 2012), 163-173.
[31] Mike Thelwall, Kevan Buckley, Georgios Paltoglou, Di Cai, and Arvid Kappas. 2010. Sentiment in Short Strength Detection Informal Text. Journal of the Association for Information Science and Technology 61, 12 (Dec. 2010), 2544-2558.
[32] Hongning Wang, Yue Lu, and Chengxiang Zhai. 2010. Latent Aspect Rating Analysis on Review Text Data: A Rating Regression Approach. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '10). ACM, New York, NY, USA, 783-792.
[33] Jinghui Zhong, Xiaomin Hu, Jun Zhang, and Min Gu. 2005. Comparison of Performance between Different Selection Strategies on Simple Genetic Algorithms. In International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'06), Vol. 2. 1115-1121.
| [] |
[
"Automated speech tools for helping communities process restricted-access corpora for language revival efforts",
"Automated speech tools for helping communities process restricted-access corpora for language revival efforts"
] | [
"Nay San nay.san@stanford.edu \nDepartment of Linguistics\nStanford University\n\n\nARC Centre of Excellence for the Dynamics of Language\nAustralian National University\n\n",
"Martijn Bartelds \nDepartment of Computational Linguistics\nUniversity of Groningen\n\n",
"Tolúlo . pé .Ò gúnrèMí \nDepartment of Computer Science\nStanford University\n\n",
"Alison Mount \nARC Centre of Excellence for the Dynamics of Language\nAustralian National University\n\n",
"Ruben Thompson \nARC Centre of Excellence for the Dynamics of Language\nAustralian National University\n\n",
"Michael Higgins \nARC Centre of Excellence for the Dynamics of Language\nAustralian National University\n\n",
"Roy Barker \nARC Centre of Excellence for the Dynamics of Language\nAustralian National University\n\n",
"Jane Simpson \nARC Centre of Excellence for the Dynamics of Language\nAustralian National University\n\n",
"Dan Jurafsky \nDepartment of Linguistics\nStanford University\n\n\nDepartment of Computer Science\nStanford University\n\n"
] | [
"Department of Linguistics\nStanford University\n",
"ARC Centre of Excellence for the Dynamics of Language\nAustralian National University\n",
"Department of Computational Linguistics\nUniversity of Groningen\n",
"Department of Computer Science\nStanford University\n",
"ARC Centre of Excellence for the Dynamics of Language\nAustralian National University\n",
"ARC Centre of Excellence for the Dynamics of Language\nAustralian National University\n",
"ARC Centre of Excellence for the Dynamics of Language\nAustralian National University\n",
"ARC Centre of Excellence for the Dynamics of Language\nAustralian National University\n",
"ARC Centre of Excellence for the Dynamics of Language\nAustralian National University\n",
"Department of Linguistics\nStanford University\n",
"Department of Computer Science\nStanford University\n"
] | [
"Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages"
] | Many archival recordings of speech from endangered languages remain unannotated and inaccessible to community members and language learning programs. One bottleneck is the time-intensive nature of annotation. An even narrower bottleneck occurs for recordings with access constraints, such as language that must be vetted or filtered by authorised community members before annotation can begin. We propose a privacy-preserving workflow to widen both bottlenecks for recordings where speech in the endangered language is intermixed with a more widely-used language such as English for meta-linguistic commentary and questions (e.g. What is the word for 'tree'?). We integrate voice activity detection (VAD), spoken language identification (SLI), and automatic speech recognition (ASR) to transcribe the metalinguistic content, which an authorised person can quickly scan to triage recordings that can be annotated by people with lower levels of access. We report workin-progress processing 136 hours archival audio containing a mix of English and Muruwari. Our collaborative work with the Muruwari custodian of the archival materials show that this workflow reduces metalanguage transcription time by 20% even with minimal amounts of annotated training data: 10 utterances per language for SLI and for ASR at most 39 minutes, and possibly as little as 39 seconds. | 10.18653/v1/2022.computel-1.6 | [
"https://www.aclanthology.org/2022.computel-1.6.pdf"
] | 248,218,756 | 2204.07272 | 5aa7ebf744faf416e902efb1c9cc098d9ad59081 |
Automated speech tools for helping communities process restricted-access corpora for language revival efforts
May 26-27, 2022
Nay San nay.san@stanford.edu
Department of Linguistics
Stanford University
ARC Centre of Excellence for the Dynamics of Language
Australian National University
Martijn Bartelds
Department of Computational Linguistics
University of Groningen
Tolúlo . pé .Ò gúnrèMí
Department of Computer Science
Stanford University
Alison Mount
ARC Centre of Excellence for the Dynamics of Language
Australian National University
Ruben Thompson
ARC Centre of Excellence for the Dynamics of Language
Australian National University
Michael Higgins
ARC Centre of Excellence for the Dynamics of Language
Australian National University
Roy Barker
ARC Centre of Excellence for the Dynamics of Language
Australian National University
Jane Simpson
ARC Centre of Excellence for the Dynamics of Language
Australian National University
Dan Jurafsky
Department of Linguistics
Stanford University
Department of Computer Science
Stanford University
Automated speech tools for helping communities process restricted-access corpora for language revival efforts
Proceedings of the Fifth Workshop on the Use of Computational Methods in the Study of Endangered Languages
the Fifth Workshop on the Use of Computational Methods in the Study of Endangered LanguagesMay 26-27, 2022
Many archival recordings of speech from endangered languages remain unannotated and inaccessible to community members and language learning programs. One bottleneck is the time-intensive nature of annotation. An even narrower bottleneck occurs for recordings with access constraints, such as language that must be vetted or filtered by authorised community members before annotation can begin. We propose a privacy-preserving workflow to widen both bottlenecks for recordings where speech in the endangered language is intermixed with a more widely-used language such as English for meta-linguistic commentary and questions (e.g. What is the word for 'tree'?). We integrate voice activity detection (VAD), spoken language identification (SLI), and automatic speech recognition (ASR) to transcribe the metalinguistic content, which an authorised person can quickly scan to triage recordings that can be annotated by people with lower levels of access. We report workin-progress processing 136 hours archival audio containing a mix of English and Muruwari. Our collaborative work with the Muruwari custodian of the archival materials show that this workflow reduces metalanguage transcription time by 20% even with minimal amounts of annotated training data: 10 utterances per language for SLI and for ASR at most 39 minutes, and possibly as little as 39 seconds.
Introduction
In speech recorded for language documentation work, it is common to find not only the target language that is being documented but also a language of wider communication, such as English. This is especially so in early-stage fieldwork when the elicitation may centre around basic words and phrases from a standard word list (e.g. the Swadesh List: Swadesh, 1955). In these mixed-language recordings, utterances in the language of wider communication are largely metalinguistic questions and commentary (e.g. What is the word for 'tree'?, This word means 'soft'), which appear inter-mixed with the utterances of interest in the target language. In this paper, we propose a workflow to help process hundreds of hours of unannotated speech of this genre.
We describe a use case where the language of wider communication is English (ISO 639-3: eng), and the documented language is Muruwari (ISO 639-3: zmu), an Aboriginal language traditionally spoken in north western New South Wales, Australia. As illustrated in Figure 1, we leverage voice activity detection (VAD) to detect speech regions, then spoken language identification (SLI) to distinguish between Muruwari and English regions, and then automatic speech recognition (ASR) to transcribe the English. The uncorrected transcriptions offer a rough but workable estimate of the contents in a given recording.
Figure 1: Deriving transcriptions of English in mixed-language speech using voice activity detection (VAD) and spoken language identification (SLI) to identify speech regions and the language spoken (zmu: Muruwari or eng: English) and automatic speech recognition (ASR) to transcribe English speech.
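A minimal sketch of such a pipeline is shown below. It is not the authors' released implementation (their scripts are at https://github.com/CoEDL/vad-sli-asr); the recording path and the sli_classifier helper are placeholders, and the SLI step is assumed to behave as described later in §5.

```python
import torch
from transformers import pipeline

# Step 1 - VAD: Silero VAD via torch.hub
vad_model, vad_utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, *_ = vad_utils

wav = read_audio("recording.wav", sampling_rate=16000)        # hypothetical file
regions = get_speech_timestamps(wav, vad_model, sampling_rate=16000)

# Step 3 - ASR model: Robust wav2vec 2.0 fine-tuned on Switchboard, via HuggingFace
asr = pipeline("automatic-speech-recognition",
               model="facebook/wav2vec2-large-robust-ft-swbd-300h")

for region in regions:
    clip = wav[region["start"]:region["end"]].numpy()          # 16 kHz mono samples
    # Step 2 - SLI: assumed helper returning "eng" or "zmu" (e.g. the classifier from §5)
    if sli_classifier(clip) == "eng":
        print(region["start"], region["end"], asr(clip)["text"])
```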
We use this workflow to help process 136 hours of predominantly single-speaker recordings made in the 1970s by the last first language (L1) speaker of Muruwari, James 'Jimmie' Barker (1900-1972). The generated transcriptions can be used by the data custodian and Muruwari elder, Roy Barker (author RB; grandson of Jimmie Barker), to triage the recordings and make initial decisions on which recordings can be listened to by people with lower levels of access who can then correct the transcriptions. The corrected transcriptions provide approximate locations where certain Muruwari words and phrases are being discussed, providing an index of the corpus from which language learning materials can be produced. In this way, we are able to support ongoing language revival initiatives through a strategic deployment of machine and human efforts in a manner that adheres to the level of privacy required.
For the benefit of other projects, we also conducted SLI and ASR experiments to determine the minimum amounts of annotated data required to implement this workflow. Through our SLI experiments we show that 1) only 10 example utterances per language are needed to achieve reliable single-speaker SLI performance, and 2) speech representations for SLI such as those from SpeechBrain (Ravanelli et al., 2021) can be used as-is as input to a simple logistic regression classifier without needing compute-intensive adaptation methods requiring a graphics processing unit (GPU).
Through our ASR experiments we show that transcriptions of 39 seconds of Jimmie's Australian English were sufficient to increase the accuracy of an ASR system trained for American English (Robust wav2vec 2.0: Hsu et al., 2021). To our surprise, timed transcription tasks revealed that the fine-tuned models offered no meaningful reduction in transcription correction time over an off-the-shelf model. Nevertheless, the machine-assisted workflow integrating the VAD, SLI, and ASR systems offers a 20% reduction in annotation time, requiring 2.36 hours of correction time per 30-minute recording compared to 2.95 hours of work to produce the same annotations manually, with ASR-assisted transcription responsible for the majority of the time savings.
With the exception of the archival audio and transcriptions, which we do not have permission to openly release, all experiment artefacts, model training/deployment scripts, and data preparation instructions developed for this project are publicly available on GitHub (https://github.com/CoEDL/vad-sli-asr). The remainder of this paper is organised as follows. We first provide the project background in §2. Subsequently, in §3, we formulate the research questions we sought to address with our experiments and then describe the data we used for them in §4. The following three sections detail the methods and results of our SLI (§5) and ASR (§6) experiments, and the timed annotation tasks (§7). In §8, we discuss how this workflow assists in the ongoing documentation of the Muruwari language. Finally, in §9, we summarise and conclude this work, making clear its limitations and outlining directions for future research.
Project background
Muruwari is an Aboriginal language traditionally spoken in north western New South Wales, Australia and belongs to the Pama-Nyungan family of Australian languages (Oates, 1988). Oates (1988), which comprises the largest extant single work on Muruwari, describes it as a relative isolate compared to the neighbouring Pama-Nyungan languages, Yuwaaliyaay, Yuwaalaraay, Barranbinya, Ngiyampaa (Ngemba), Guwamu and Badjiri. James 'Jimmie' Barker (1900-1972), the last first language (L1) speaker of Muruwari, produced in the early 1970s a total of 136 hours of reel-to-reel tape recordings consisting of a mix of Muruwari and meta-linguistic commentary on the Muruwari language in English. The now digitised recordings are held at the Australian Institute of Aboriginal and Torres Strait Islander Studies, and access to these materials depends on permission from the custodian and Muruwari elder, Roy Barker (author RB; grandson of Jimmie Barker).
To date, RB has manually auditioned approximately 40 of the 136 hours over the course of 4 years to determine regions of speech appropriate for general access and those requiring restricted access (e.g. for only the Muruwari community, or only the Barker family). At this rate of roughly 10 hours per year, the remaining 96 hours may require nearly a decade of manual review by RB.
Parallel to the review of the remaining recordings, a subset of the recordings that have already been cleared by RB is being used to search for excerpts that may be useful for learning materials and those that can inform the development of a standardised orthography for Muruwari. To assist these ongoing initiatives, we investigated how SLI and ASR can be leveraged to allow for the review process and excerpt searches to be done more strategically and efficiently.
There has been growing interest in leveraging speech processing tools to assist in language documentation workflows, including the formulation of shared tasks (e.g. Levow et al., 2021;Salesky et al., 2021). 2 Aimed at making unannotated fieldwork recordings more accessible, Levow et al. (2017) proposed a family of shared tasks, dubbed the "Grandma's Hatbox", which include SLI and ASR. In our work, we additionally leverage VAD to make the system fully automatable and, to derive a rough index of the corpus, we transcribe all speech regions detected as English (in the shared task formulation, ASR was intended to transcribe only the metadata preamble in the recordings).
The performance of speech processing systems can be poor when there are mismatches between the speech on which they were trained and that on which they are deployed. Commenting on such poor deployment-time performance of SLI systems, Salesky et al. (2021) concluded that what is necessary for real-world usage are methods for system adaptation with a few examples from the target speakers/domains. Accordingly, we sought to answer the following questions: 1) How many utterances of English and Muruwari are needed to adapt an off-the-shelf SLI system? 2) Is it possible to make use of such a system without computeintensive adaptation methods requiring a graphics processing unit (GPU)?
Regarding this latter question, we were inspired by a recent probing study on various speech representations showing that logistic regression classifiers performed on-par with shallow neural networks for two-way classification of speech, e.g. distinguishing between vowels and non-vowels (Ma et al., 2021). Hence, we examined through our SLI experiments whether using a logistic regression classifier suffices for the two-way classification of the speech data, i.e. distinguishing between English and Muruwari.
Turning now to ASR, the typical use case in language documentation work has been to develop ASR systems to help transcribe the target language (e.g. Adams et al., 2018; Shi et al., 2021; Prud'hommeaux et al., 2021). By contrast, our use of ASR more closely aligns with recent work exploring techniques such as spoken term detection to help locate utterances of interest in untranscribed speech corpora in the target languages (Le Ferrand et al., 2020; San et al., 2021). In this work, however, we take advantage of the mixed-language speech in the corpus, and leverage SLI and ASR to transcribe the English speech as a way to derive a rough index.
We opted to use the Robust wav2vec 2.0 model (Hsu et al., 2021) to reduce the mismatch in audio quality between the training and the deployment data (i.e. noisy archival recordings). This model is pre-trained not only on LibriSpeech (960 hours: Panayotov et al., 2015) and Common Voice English (700 hours: Ardila et al., 2019), but also on noisy telephone-quality speech corpora (Fisher, 2k hours: Cieri et al., 2004 and Switchboard, 300 hours: Godfrey et al., 1992), and also fine-tuned on 300 hours of transcribed speech from Switchboard. With our ASR experiments, we sought to answer the following questions: 1) What amount of transcribed speech is sufficient to reliably achieve better than off-the-shelf performance? 2) Using the same amount of transcribed speech, to what extent can ASR system performance be further increased when supplemented with a language model trained on external texts?
Data: the Jimmie Barker recordings
To gather training and evaluation data for the two speech processing tasks, we obtained 6 archival recordings of Jimmie Barker's speech cleared by RB. For each recording, we used the off-the-shelf Robust wav2vec 2.0 (Hsu et al., 2021), 3 to simply transcribe all speech regions detected by the Silero VAD system, 4 and generated an .eaf file for ELAN. 5 Using ELAN, three annotators (2 recordings per annotator) then erased the spurious text for the Muruwari utterances (i.e. for SLI, we simply used blank annotations to denote Muruwari regions, given the orthography is still in development) and manually corrected the English transcriptions for ASR (i.e. for SLI, any non-blank region with text was considered English). While the machine-generated annotations for the training and evaluation data were human-corrected, we have yet to establish inter-annotator agreement or conduct error analyses.
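For reference, writing such machine-generated regions out to an ELAN .eaf file can be done with a library such as pympi-ling, as in the sketch below; whether the authors used pympi or another exporter is not stated in this excerpt, so this is one possible implementation only.

```python
import pympi

def write_eaf(regions, out_path, tier="machine"):
    """regions: list of (start_ms, end_ms, text) tuples; text is left empty for Muruwari."""
    eaf = pympi.Elan.Eaf()
    eaf.add_tier(tier)
    for start_ms, end_ms, text in regions:
        eaf.add_annotation(tier, start_ms, end_ms, value=text)
    eaf.to_file(out_path)

# Example: one English region with an ASR hypothesis and one (blank) Muruwari region
write_eaf([(0, 2100, "what is the word for tree"), (2300, 3200, "")], "recording.eaf")
```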
When correcting the English transcriptions, speech was transcribed verbatim with no punctuation except for apostrophes, i.e. including false starts (e.g. we we don't say) and hesitations (e.g. and uh it means steal). To facilitate searches, transcriptions were made in lower-case with the exception of proper nouns (e.g. uh the Ngiyaamba has it uh) and words that were spelled out by Jimmie (e.g. you've got B U at the end of a word). For ASR training, the transcriptions were automatically converted to all upper-case to normalise the text to a 27-character vocabulary (26 upper-case letters + apostrophe) that matches vocabulary with which the wav2vec 2.0 Robust model was originally trained. As we report in Appendix A, not re-using the original vocabulary required significantly more fine-tuning data to achieve the same performance.
Based on the corrected annotations, we extracted the speech regions into individual 16-bit 16 kHz .wav files and all the transcriptions for the English utterances into a single tab-delimited file. A summary of the data used in this paper is given below in Table 1. Overall, the yielded speech content contained more English than Muruwari (78% English by duration or 66% by number of utterances), reflecting the relatively more numerous and longer nature of the meta-linguistic commentary in English compared to the Muruwari words and phrases being commented upon. Notably, only a third of the total running time of the recordings was found to be speech content on average, with frequent inter-and intra-phrase pauses arising from the semi-improvised linguistic self-elicitation being undertaken by Jimmie. A consequence of these pauses is that the VAD system segments Jimmie's speech into sequences of sentence fragments, e.g. This word..., This word means soft..., And also softly. We will return to these data characteristics in our discussion of the timed annotation tasks §7.
Finally, we note that having had few prior experimentally-informed estimates of the minimum amounts of data required, we chose to label for our initial implementation of this workflow this specific set of 6 recordings in accordance with other project priorities. While our deployed models are those trained on all the data, we opted to run detailed analyses on how much of the labelled data was actually necessary for adapting the SLI and ASR models to help establish estimates regarding the minimum amounts of labelled data needed to apply this workflow in other settings, and timed the annotation tasks using models trained on these minimum amounts of data.
Spoken Language Identification
We are interested in finding the minimum amount of training utterances required to obtain a performant system for same-speaker SLI. As training a system with very few utterances can lead to a large variance in its performance on unseen utterances, we were particularly interested in determining the training set size at which the variance was functionally equivalent to training on all available data.
Method
For our SLI experiments, we first extracted speech representations from each of the 4864 English and Muruwari utterances using the SpeechBrain toolkit (Ravanelli et al., 2021), which includes a state-of-the-art SLI model trained on 107 languages of the VoxLingua107 dataset (Valk and Alumäe, 2021). We then performed 5000 iterations of training and evaluating logistic regression classifiers. At each iteration, the dataset was shuffled and 20% of the data (972 utterances) was held out as the test set. The remaining 80% of data (3892 utterances) was designated as the 'All' training set, from which we sampled 5 additional subsets (1, 5, 10, 25, and 50 utterances per language). We trained separate logistic regression classifiers using each of the 6 datasets (5 subsets + All), and then measured SLI performance of each classifier on the same test set using the F1 score. Finally, we also calculated the differences between the F1 scores for the classifier trained on all the training data and each of those trained on the smaller datasets (All vs. 1, All vs. 5, All vs. 10, All vs. 25, All vs. 50).
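A single train/evaluate iteration of this setup could look like the following sketch; the utterance paths and labels are assumed to have been prepared as described above, and details such as the classifier's hyperparameters are placeholders rather than the paper's exact settings.

```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

encoder = EncoderClassifier.from_hparams(source="speechbrain/lang-id-voxlingua107-ecapa")

def embed(wav_path):
    signal, _ = torchaudio.load(wav_path)                 # 16 kHz mono .wav assumed
    return encoder.encode_batch(signal).squeeze().detach().numpy()

# train_paths/train_labels and test_paths/test_labels: labels are "eng" or "zmu"
X_train = [embed(p) for p in train_paths]
clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
predictions = clf.predict([embed(p) for p in test_paths])
print(f1_score(test_labels, predictions, pos_label="eng"))
```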
Results
Figure 2 displays SLI performance for each of the training set sizes (1, 5, 10, 25, and 50 utterances per language, and All). On average, using only 1 utterance of English and Muruwari results in a system that is 28 percentage points worse than using all the data (Table 2a). While using 5 or 10 utterances resulted in similar average differences compared to using all the data (10 vs. 7 percentage points, respectively), the difference is nearly twice as variable when only 5 utterances per language are used (CI width: 20 percentage points). Answering our SLI-related questions, then: 1) using 10 utterances per language yields systems whose average performance is within 10 percentage points of using all the data (3892 utterances); 2) a logistic regression classifier suffices for two-way same-speaker SLI using off-the-shelf speech embeddings for SLI (Ravanelli et al., 2021).
Automatic Speech Recognition
Recall that for ASR, we seek to answer the following questions: 1) What amount of transcribed speech is sufficient to reliably achieve better than off-the-shelf performance for transcribing Jimmie's Australian English? 2) Using the same amount of transcribed speech, to what extent can ASR system performance be further increased when supplemented with a language model trained on external texts? In this section, we report on experiments conducted in order to answer these questions.
Method
In all our fine-tuning experiments, we fine-tuned the Robust wav2vec 2.0 model over 50 epochs, evaluating every 5 epochs (with an early-stopping patience of 3 evaluations). All training runs started from the same off-the-shelf checkpoint and we kept the training hyperparameters constant, all of which can be inspected in the model training script on GitHub. As the independent variable, we varied the amount and sampling of data used to fine-tune the model, and we measured the word error rate (WER) as the dependent variable. 8 In all our experiments, we split the total 81 minutes of transcribed English speech into an 80% training set (65 minutes) and a 20% testing set (16 minutes). The training split of 65 minutes was designated as the 100% training set, from which we sampled smaller subsets consisting of 52 minutes (80% of the training split), 39 minutes (60%), 26 minutes (40%), 13 minutes (20%), 6.5 minutes (10%), 3.25 minutes (5%), and 0.65 minutes (1%).
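For orientation, a schematic of such a fine-tuning configuration using the HuggingFace transformers Trainer is shown below. The actual script is in the project repository; the batch size, learning rate, dataset objects, and the omitted data collator and WER-computing compute_metrics function are assumptions made for illustration.

```python
from transformers import (EarlyStoppingCallback, Trainer, TrainingArguments,
                          Wav2Vec2ForCTC, Wav2Vec2Processor)

checkpoint = "facebook/wav2vec2-large-robust-ft-swbd-300h"   # off-the-shelf Robust wav2vec 2.0
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

args = TrainingArguments(
    output_dir="wav2vec2-jimmie-eng",
    num_train_epochs=50,
    evaluation_strategy="epoch",          # the paper evaluates every 5 epochs
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="wer",
    greater_is_better=False,
    per_device_train_batch_size=8,        # illustrative value
    learning_rate=1e-4,                   # illustrative value
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_split,            # e.g. 65, 52, ..., or 0.65 minutes of speech
    eval_dataset=test_split,              # the fixed 16-minute test split
    tokenizer=processor.feature_extractor,
    compute_metrics=compute_wer,          # defined elsewhere: decodes predictions, returns {"wer": ...}
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```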
We fine-tuned 8 separate models with varying amounts of data and evaluated their performance on the same test set to obtain a first estimate of an amount of data sufficient to achieve better than off-the-shelf performance. We then created 10 new 80/20 training/testing splits for cross-validation in order to establish the variability in WER when only using that minimal amount of data.
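The cross-validation harness might look like the following sketch, where fine_tune_asr and transcribe are placeholders standing in for the training and inference code rather than functions from the released repository, and WER is computed with the third-party jiwer package.

```python
import numpy as np
from jiwer import wer
from sklearn.model_selection import ShuffleSplit

wers = []
splitter = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
for fold, (train_idx, test_idx) in enumerate(splitter.split(utterances)):
    rng = np.random.default_rng(fold)
    tiny = rng.choice(train_idx, size=max(1, len(train_idx) // 100), replace=False)
    model = fine_tune_asr([utterances[i] for i in tiny])      # ~1% of the split, ~39 s of speech
    hyps = [transcribe(model, utterances[i]) for i in test_idx]
    refs = [transcripts[i] for i in test_idx]
    wers.append(wer(refs, hyps))

print(f"mean WER: {np.mean(wers):.1%} (SD: {np.std(wers):.1%})")
```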
We were also interested in whether supplementing the ASR system with a language model further reduced the WER. Our initial labelling work revealed that many errors made by the off-the-shelf system were particularly related to domain- and region-specific English words (e.g. spear, kangaroo). With permission from the maintainers of the Warlpiri-to-English dictionary, we extracted 8359 English translations from example sentences to obtain in-domain/-region sentences in English, e.g. The two brothers speared the kangaroo.
We used this data to train a word-level bigram model using KenLM (Heafield, 2011). While we opted to extract sentences from the Warlpiri-to-English dictionary given that it is the largest of its kind for an Australian language, this corpus of sentences still only amounts to 75,425 words (4,377 unique forms), and as such we opted for a bigram model over a more conventional 3- or 4-gram model. With the only change being the inclusion of the language model, we then fine-tuned 10 additional models using the same training and testing splits.

8 Ranging from 0% (best) to 100% (worst), word error rate (WER) is a measure of the accuracy of an ASR system, taking into account substitutions (wrongly predicted words), additions (erroneous extra words) and deletions (missing words).
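A sketch of how such a bigram model could be plugged into CTC decoding is shown below. It assumes the ARPA file has already been built from the extracted dictionary sentences with KenLM (e.g. lmplz -o 2 < dict_sentences.txt > english_bigram.arpa) and uses the third-party pyctcdecode package, which is not necessarily the decoder used in the paper.

```python
import numpy as np
from pyctcdecode import build_ctcdecoder

# CTC label set of the fine-tuned wav2vec 2.0 model (processor as in the sketch above),
# ordered by token index.
vocab_dict = processor.tokenizer.get_vocab()
vocab = [tok for tok, _ in sorted(vocab_dict.items(), key=lambda kv: kv[1])]

decoder = build_ctcdecoder(vocab, kenlm_model_path="english_bigram.arpa")

# input_values: preprocessed audio tensor, assumed prepared elsewhere with the processor
logits = model(input_values).logits[0].detach().numpy()   # (time, vocab) frame scores
transcript = decoder.decode(logits)
```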
Results

Table 3 displays the word error rates (WERs) achieved by the Robust wav2vec 2.0 model when fine-tuned with various amounts of transcribed speech. The baseline WER achieved by the off-the-shelf model with no additional fine-tuning is 36.3%. Training with all 65 minutes of data yielded a topline WER of 10.1%. Remarkably, training with less than 1 minute of speech was sufficient to decrease the WER to 19.1%. As a first estimate, the amount of training data that sufficiently improves on the off-the-shelf model appears to be 0.65 minutes of transcribed speech.
To verify that fine-tuning with only 1% of our training data does consistently yield a better than off-the-shelf WER, we conducted cross-validation experiments using 10 additional 80/20 training/testing splits, each time using only 1% of the data from the training split (0.65 minutes or 39 seconds on average). Figure 3 displays the results of our cross-validation experiments. First, evaluating the off-the-shelf model on the 10 test sets, we found the baseline mean WER to be 35.6% (standard deviation, SD: 1.48%; range: 33.8-37.9%). The mean WER of the models fine-tuned with only 1% of data and without a language model was found to be 18.2% (SD: 0.99%; range: 16.7-19.5%). These results demonstrate that fine-tuning with less than 1 minute of speech consistently yields better than off-the-shelf performance.
When a bigram language model was used for decoding, we found that the mean WER increased to 20.0% (SD: 1.48%; range: 17.8-21.9%) for the fine-tuned models. These results are inconsistent with our earlier experiments (reported in Appendix A), where we fine-tuned the same off-the-shelf model with 39 minutes of data. In those experiments, decoding with the same bigram model did lead to WER improvements, suggesting that more careful calibration and weighting of the language model may be required in near-zero-shot adaptation scenarios.
To answer our ASR-related questions, then: 1) 39 seconds of speech, on average, is sufficient to achieve better than off-the-shelf performance for transcribing Jimmie's Australian English speech; 2) the effect of a language model on ASR performance is inconclusive (cf. Appendix A).
Timed annotation tasks
In addition to helping provide estimates of the contents of recordings for review by an authorised person, another purpose of this workflow is to help reduce the time required to annotate speech in such a way that excerpts from cleared recordings can be easily extracted for use in relevant initiatives, e.g. creating language learning materials.
The initial process of annotating speech for this purpose involves two tasks: segmentation and transcription, which we illustrate in Figure 4 using two clips of Jimmie's speech. In segmentation, the annotator identifies regions of speech and non-speech and also which of the speech regions is English or Muruwari. For a sequence of English sentence fragments such as those in Clip a), the utterances can simply be merged into one. For mixed-language regions such as those in Clip b), separate utterances should be created to allow the Muruwari speech to be easily extracted for use in language learning materials. To create transcriptions for indexing, the annotator transcribes the English segments, given regions segmented and identified as English. We conducted a set of timed annotation tasks to evaluate to what extent the machine-assisted workflow reduces the time taken to perform these two tasks.
As detailed in Table 4, we gathered for our timed annotation tasks three different recordings of approximately 30 minutes in length that were not part of the training and evaluation recordings in the previous experiments. For each timed task, annotators were asked to perform only segmentation or only transcription. For segmentation, they either manually created all time boundaries or corrected machine-derived ones from the VAD and SLI systems. For transcription, they either manually typed in the transcriptions for English speech or corrected machine-derived ones from an ASR system. We tested ASR systems developed earlier in our research (reported in Appendix A): one fine-tuned on 39 minutes of Jimmie's Australian English speech, which reached a WER/CER of 19/7, and a version of the same system augmented with a bigram language model, which reached a WER/CER of 14/6. The three recordings, four annotators, and six annotation tasks were counterbalanced such that each annotator listened to each recording for a given task exactly once.

Table 4: Time taken to annotate recordings by four annotators (A1-A4) with and without machine assistance. In the segmentation task, annotators corrected the segmentations by the voice activity detection (VAD) and spoken language identification systems (SLI: trained on 10 utterances per language), or they manually annotated speech regions. In the transcription task, annotators were given intervals of English speech without any accompanying text (manual transcription), or text generated by one of three ASR systems (A, B, C) differing in accuracy. System A was an off-the-shelf Robust wav2vec 2.0 model (Hsu et al., 2021) with no fine-tuning (word error rate/character error rate: 36/22). Systems B (19/7) and C (14/6) were Robust wav2vec 2.0 models fine-tuned on 39 minutes of transcribed English speech, with System C supplemented with a bigram language model trained on external texts.
The segmentation task took 85.5 minutes of work for a 30-minute recording without machine assistance and 82.5 minutes when assisted. That is, correcting time boundaries, inserting missing intervals or removing erroneous ones, and merging/splitting machine-derived segmentations takes nearly the same amount of time as placing these boundaries manually. The waveforms in Figure 4 illustrate how the acoustics of alternating Muruwari and English separated by brief pauses look indistinguishable from English sentence fragments separated by similar amounts of pausing, leading to sub-optimal segmentations from a standard, sequential VAD-then-SLI pipeline. The mixed-language nature of this speech may require jointly optimising the VAD and SLI steps.
The transcription task took 91.5 minutes of work for a 30-minute recording without machine assistance and on average 59.3 minutes when assisted (a 35% reduction). We found no meaningful difference between the correction times for transcriptions generated by ASR systems with different levels of accuracy. For transcriptions produced by an off-the-shelf system (WER/CER: 36/22), the correction time was 63 minutes. For systems fine-tuned with 39 minutes of transcribed speech, WER/CER: 19/7 and 14/6, the correction times were 55.5 and 59.5 minutes, respectively.
The closeness in transcription correction times may relate to how an English ASR system whose WER is 30% or less produces good enough transcriptions for editing, according to a crowdsourced study (Gaur et al., 2016). Here, our transcribers' tolerance for the relatively less accurate off-the-shelf system (WER 36%) may be attributable to their familiarity with the speech domain and speaker (Sperber et al., 2017), having collectively spent nearly 40 hours correcting transcriptions of Jimmie's English by the time we conducted the timed tasks. These results suggest that, where correction is permissible by L1-speaking transcribers of the metalanguage, the time savings over manual transcription could still be gained using an off-the-shelf system that achieves a WER of 30-36% or less for the metalanguage in the recordings.
Nevertheless, we find that the machine-assisted workflow does offer time savings over a fully manual workflow (in line with previous work, e.g. Sperber et al., 2016, 2017). Specifically, we find that the machine-assisted workflow offers a 20% reduction in the overall time needed to identify regions in the target language and metalanguage and also transcribe the latter, requiring 2.36 hours (82.5 + 59.3 mins) of correction time for a 30-minute recording, compared to a fully manual workflow which requires 2.95 hours (85.5 + 91.5 mins). Unlike the manual workflow, the fully-automatable workflow can derive first-pass transcriptions to help an authorised person triage recordings.
Towards a Muruwari orthography
As mentioned above, the Muruwari orthography is still in development. In this section, we provide a brief overview of how transcriptions of the English metalanguage are being used to aid in the development of the Muruwari orthography.
A key source of information on Muruwari phonemes and words of interest to the current Muruwari community is a pair of 1969 recordings in which Jimmie Barker discusses an early Muruwari wordlist (Mathews, 1902). This wordlist was created by the linguist R. H. Mathews and consists of Muruwari words in his romanisation along with English translations. Using this wordlist, the documentation team is able to shortlist Muruwari words whose romanisation is suggestive of containing sounds of interest (e.g. dental consonants), and then quickly locate in these recordings Jimmie's pronunciation of the words and the associated commentary using the time-aligned English transcripts generated for the two recordings. Here, the English transcripts provide significantly more streamlined access to untranscribed Muruwari utterances than browsing the recordings in real time. Once verified as containing the sounds of interest, the documentation team is able to extract snippets of these words to be included in the community consultation process.
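The kind of lookup this enables can be illustrated with a toy search over the tab-delimited transcripts; the file layout matches the hypothetical export sketch earlier in this section and is not the documentation team's actual tooling.

```python
import csv

def find_mentions(tsv_path, keyword):
    """Return (wav_path, start_s, end_s, text) rows whose English transcript mentions keyword."""
    hits = []
    with open(tsv_path) as f:
        for wav_path, start, end, text in csv.reader(f, delimiter="\t"):
            if keyword.lower() in text.lower():
                hits.append((wav_path, float(start), float(end), text))
    return hits

# e.g. commentary likely to surround a Muruwari word glossed as 'soft':
# find_mentions("31-1919A_eng.tsv", "soft")
```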
Conclusion
Many hours of unannotated speech from endangered languages remain in language archives and inaccessible to community members and language learning programs. The time-intensive nature of annotating speech creates one bottleneck, with an additional one occurring for speech in restricted access corpora that authorised community members must vet before annotation can begin. For a particular genre of recordings where speech in the endangered language is intermixed with a metalanguage in a more widely-used language such as English, we proposed a privacy-preserving workflow using automated speech processing systems to help alleviate these bottlenecks.
The workflow leverages voice activity detection (VAD) to identify regions of speech in a recording, then spoken language identification (SLI) to isolate speech regions in the metalanguage, and automatic speech recognition (ASR) to transcribe them. The uncorrected transcriptions provide an estimate of the contents of a recording, allowing an authorised person to make initial decisions on whether it can be listened to by those with lower levels of access in order to correct the transcriptions, which, collectively, help index the corpus. This workflow can be implemented using a limited amount of labelled data: 10 utterances per language for SLI and 39 seconds of transcribed speech in the metalanguage for ASR. The workflow reduces overall annotation time by 20% compared to a fully manual workflow, and similar time savings may be achievable with an off-the-shelf ASR system that has a word error rate of 36% or less for the metalanguage in the target recordings.
Given our use case, the present demonstration of the workflow was limited to the scenario of processing single-speaker monologues with a mix of Muruwari and English, the latter of which made it possible to use a state-of-the-art model trained for English ASR (Robust wav2vec 2.0: Hsu et al., 2021) and to have the transcriptions corrected by first-language speakers of English. Our work also revealed that VAD and SLI systems require further optimisation for mixed-language speech.
We hope our demonstration encourages further experimentation with model adaptation with limited data for related use cases. For dialogues between a linguist and a language consultant, for example, speaker diarisation could be added via few-shot classification using speech representations for speaker recognition (e.g. SpeechBrain SR embeddings: Ravanelli et al., 2021). With user-friendly interfaces like Elpis (Foley et al., 2018), for which wav2vec 2.0 integration is underway (Foley, pers. comm.), we hope to see more streamlined access to pre-trained models for language documentation workflows and, consequently, more streamlined access to the recorded speech for community members and language learning programs.

9 https://huggingface.co/blog/fine-tune-wav2vec2-english
10 https://huggingface.co/facebook/wav2vec2-large-robust-ft-swbd-300h
Figure 1: Deriving transcriptions of English in mixed-language speech using voice activity detection (VAD) and spoken language identification (SLI) to identify speech regions and the language spoken (zmu: Muruwari or eng: English) and automatic speech recognition (ASR) to transcribe English speech.
Figure 2: Two-way spoken language identification performance (English vs. Muruwari) using logistic regression classifiers trained on SpeechBrain SLI embeddings (Ravanelli et al., 2021) with varying dataset sizes (1, 5, 10, 25, 50 utterances per language, and All available data: 3892 utterances). Points represent mean F1 and error bars the 95% bootstrap confidence intervals over 5000 iterations.
Figure 3: Variability in word error rates of training and testing Robust wav2vec 2.0 models over 10 iterations using different samples in the training and testing datasets, holding constant the size of the training set (1% of the training set = 0.65 minutes or 39 seconds, on average) and testing set (16 minutes). The off-the-shelf model without fine-tuning was also evaluated on the same 10 testing sets.
Figure 4: Desired annotations for two excerpts of speech from the Jimmie Barker recordings. Clip a) shows a sequence of sentence fragments in English, to be annotated as a single utterance. Clip b) shows alternating Muruwari (zmu) and English speech, to be annotated as 6 utterances.
Table 1: Duration and number of utterances (utts.) of English and Muruwari speech yielded from labelling 6 archival recordings.

Recording ID (Running time, mins) | eng speech (mins) | zmu speech (mins)
33-2162B (65) | 23.2 | 2.06
31-1919A (65) | 16.3 | 6.28
25-1581B (65) | 15.5 | 4.75
25-1581A (65) | 12.1 | 4.34
28-1706B (64) | 7.00 | 2.06
25-1582A (35) | 6.92 | 2.68
Total: 5.98 hours (4864 utts.) | 81.0 mins (3243 utts.) | 22.2 mins (1621 utts.)
Table 2: Mean difference in F1 and 95% bootstrap confidence intervals (lower and upper bounds, and width) for the difference in means on the spoken language identification task, using logistic regression classifiers trained on varying dataset sizes (1, 5, 10, 25, 50 utterances per language, and All available training data: 3892 utterances).
Table 3: Word error rates (WERs) achieved from fine-tuning the same wav2vec 2.0 model (large-robust-ft-swbd-300h) over 50 epochs using various subsets of data from 65 minutes of Australian English archival audio data.
Table 5: Word error rates (WERs) achieved from fine-tuning the same wav2vec 2.0 model (large-robust-ft-swbd-300h) over 50 epochs using various subsets of data from 65 minutes of Australian English archival audio data.
Aimed to help drive system development, shared tasks are competitions in which teams of researchers submit competing systems to solve a pre-defined challenge.
https://huggingface.co/facebook/wav2vec2-large-robust-ft-swbd-300h
4 https://github.com/snakers4/silero-vad
5 https://archive.mpi.nl/tla/elan
While the model was trained to identify English (dialects unspecified), we found that the included, off-the-shelf classifier could not consistently identify Jimmie's Australian English utterances, which were most frequently classified as Welsh (497/3243: 15.3%) or English (321/3243: 9.8%).
Ranging between 0 (worst) and 1 (best), the F1 score is a measure of a classification system's accuracy, taking both false positives and false negatives into account.
A Fine-tuning with a re-initialised vocabulary

In this section, we describe an earlier set of ASR fine-tuning experiments which were analogous to those reported in §6, except for the manner in which the vocabulary (i.e., character set) was configured. Following recommended fine-tuning practice, 9 we initialised a linear layer whose output size corresponds to the set of characters to be predicted (e.g. 'A', 'B', ...) and is derived from the target training dataset. However, this guidance presupposes that the pre-trained model being fine-tuned is one with no prior fine-tuning for ASR on the same language. Given the size of our available training data (total 65 minutes), we chose to continue to train the Robust wav2vec 2.0 model, 10 already fine-tuned for English ASR on 300 hours of Switchboard (Godfrey et al., 1992). The results of fine-tuning this model using various-sized subsets of our training data are reported below in Table 5. Notably, fine-tuning with only 13 minutes of data resulted in significantly worse than off-the-shelf performance (98% WER vs. 37% off the shelf). By deriving labels for the linear layer from our training dataset, the label mappings were scrambled (e.g. from Output 4 = 'E' to Output 4 = 'C'), yielding gibberish predictions during initial fine-tuning. Through this fine-tuning process, 39 minutes of training data were required for the model to (re-)learn the appropriate parameters for English ASR.

By contrast, in our experiments reported above in §6, we adapted our datasets to match the vocabulary of the tokeniser included with the off-the-shelf model. By doing so, we were able to achieve better than off-the-shelf ASR performance using only 39 seconds of training data.

Yet, unlike those experiments reported above, the addition of a language model to models fine-tuned with a re-initialised vocabulary yielded better performance. As shown in Figure 5, the mean WER of the models fine-tuned with 39 minutes of data and without a language model was found to be 19.5% (SD: 2.98%; range: 15-23%). When a bigram language model was included, we found that the mean WER decreased to 14% (SD: 2.30%; range: 11-18%). These findings suggest that while the addition of a language model can be beneficial, more experimentation is needed to inform best practices for calibrating and/or weighting the language model in near-zero-shot learning scenarios.

Figure 5: Variability in word error rates of training and testing Robust wav2vec 2.0 models over 10 iterations using different samples in the training and testing datasets, holding constant the size of the training set (39 minutes) and testing set (16 minutes). The off-the-shelf model without fine-tuning was also evaluated on the same 10 testing sets.
References

Oliver Adams, Trevor Cohn, Graham Neubig, Hilaria Cruz, Steven Bird, and Alexis Michaud. 2018. Evaluating phonemic transcription of low-resource tonal languages for language documentation. In LREC 2018 (Language Resources and Evaluation Conference), pages 3356-3365.

Rosana Ardila, Megan Branson, Kelly Davis, Michael Henretty, Michael Kohler, Josh Meyer, Reuben Morais, Lindsay Saunders, Francis M. Tyers, and Gregor Weber. 2019. CommonVoice: A massively-multilingual speech corpus. arXiv preprint arXiv:1912.06670.

Christopher Cieri, David Miller, and Kevin Walker. 2004. The Fisher corpus: A resource for the next generations of speech-to-text. In LREC, volume 4, pages 69-71.

Ben Foley, J. Arnold, R. Coto-Solano, G. Durantin, T. M. Ellison, D. van Esch, S. Heath, F. Kratochvíl, Z. Maxwell-Smith, David Nash, O. Olsson, M. Richards, Nay San, H. Stoakes, N. Thieberger, and J. Wiles. 2018. Building Speech Recognition Systems for Language Documentation: The CoEDL Endangered Language Pipeline and Inference System (Elpis). In The 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU), pages 200-204.

Yashesh Gaur, Walter S. Lasecki, Florian Metze, and Jeffrey P. Bigham. 2016. The effects of automatic speech recognition quality on human transcription latency. In Proceedings of the 13th International Web for All Conference, pages 1-8.

John J. Godfrey, Edward C. Holliman, and Jane McDaniel. 1992. SWITCHBOARD: Telephone speech corpus for research and development. In Acoustics, Speech, and Signal Processing, IEEE International Conference on, volume 1, pages 517-520. IEEE Computer Society.

Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187-197.

Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, et al. 2021. Robust wav2vec 2.0: Analyzing domain shift in self-supervised pre-training. arXiv preprint arXiv:2104.01027.

Eric Le Ferrand, Steven Bird, and Laurent Besacier. 2020. Enabling interactive transcription in an indigenous community. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3422-3428, Barcelona, Spain (Online). International Committee on Computational Linguistics.

Eric Le Ferrand, Steven Bird, and Laurent Besacier. 2021. Phone based keyword spotting for transcribing very low resource languages. In Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association, pages 79-86.

Gina-Anne Levow, Emily P. Ahn, and Emily M. Bender. 2021. Developing a shared task for speech processing on endangered languages. In Proceedings of the Workshop on Computational Methods for Endangered Languages, volume 1, pages 96-106.

Gina-Anne Levow, Emily M. Bender, Patrick Littell, Kristen Howell, Shobhana Chelliah, Joshua Crowgey, Dan Garrette, Jeff Good, Sharon Hargus, David Inman, et al. 2017. Streamlined challenges: Aligning research interests with shared tasks. In Proceedings of the 2nd Workshop on the Use of Computational Methods in the Study of Endangered Languages, pages 39-47.

Danni Ma, Neville Ryant, and Mark Liberman. 2021. Probing acoustic representations for phonetic properties. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 311-315. IEEE.

R. H. Mathews. 1902. Amendments in Murawarri: The Murawarri and Other Australian Languages. Royal Geographical Society of Australasia, 18:52-68.

Lynette Oates. 1988. The Muruwari Language. Dept. of Linguistics, Research School of Pacific Studies, The Australian National University.

Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. LibriSpeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206-5210. IEEE.

Emily Prud'hommeaux, Robbie Jimerson, Richard Hatcher, and Karin Michelson. 2021. Automatic speech recognition for supporting endangered language documentation. Language Documentation & Conservation, 15:491-513.

Mirco Ravanelli, Titouan Parcollet, Peter Plantinga, Aku Rouhe, Samuele Cornell, Loren Lugosch, Cem Subakan, Nauman Dawalatabad, Abdelwahab Heba, Jianyuan Zhong, et al. 2021. SpeechBrain: A general-purpose speech toolkit. arXiv preprint arXiv:2106.04624.

Elizabeth Salesky, Badr M. Abdullah, Sabrina J. Mielke, Elena Klyachko, Oleg Serikov, Edoardo Ponti, Ritesh Kumar, Ryan Cotterell, and Ekaterina Vylomova. 2021. SIGTYP 2021 shared task: Robust spoken language identification. arXiv preprint arXiv:2106.03895.

Nay San, Martijn Bartelds, Mitchell Browne, Lily Clifford, Fiona Gibson, John Mansfield, David Nash, Jane Simpson, Myfany Turpin, Maria Vollmer, et al. 2021. Leveraging pre-trained representations to improve access to untranscribed speech from endangered languages. arXiv preprint arXiv:2103.14583.

Jiatong Shi, Jonathan D. Amith, Rey Castillo García, Esteban Guadalupe Sierra, Kevin Duh, and Shinji Watanabe. 2021. Leveraging end-to-end ASR for endangered language documentation: An empirical study on Yoloxóchitl Mixtec. arXiv preprint arXiv:2101.10877.

Matthias Sperber, Graham Neubig, Satoshi Nakamura, and Alex Waibel. 2016. Optimizing computer-assisted transcription quality with iterative user interfaces. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1986-1992, Portorož, Slovenia. European Language Resources Association (ELRA).

Matthias Sperber, Graham Neubig, Jan Niehues, Satoshi Nakamura, and Alex Waibel. 2017. Transcribing Against Time. Speech Communication, 93C:20-30.

Morris Swadesh. 1955. Towards greater accuracy in lexicostatistic dating. International Journal of American Linguistics, 21(2):121-137.

Jörgen Valk and Tanel Alumäe. 2021. VoxLingua107: A dataset for spoken language recognition. In 2021 IEEE Spoken Language Technology Workshop (SLT), pages 652-658. IEEE.
| [
"https://github.com/CoEDL/vad-sli-asr",
"https://github.com/snakers4/silero-vad"
] |
[] | [
"Wanyue Zhai ",
"Jonathan Rusert ",
"Zubair Shafiq ",
"Padmini Srinivasan ",
"\nUniversity of California\nDavis\n",
"\nUniversity of Iowa\nUniversity of California\nDavis\n",
"\nUniversity of Iowa\n\n"
] | [
"University of California\nDavis",
"University of Iowa\nUniversity of California\nDavis",
"University of Iowa\n"
] | [] | Recent advances in natural language processing have enabled powerful privacy-invasive authorship attribution. To counter authorship attribution, researchers have proposed a variety of rule-based and learning-based text obfuscation approaches. However, existing authorship obfuscation approaches do not consider the adversarial threat model. Specifically, they are not evaluated against adversarially trained authorship attributors that are aware of potential obfuscation. To fill this gap, we investigate the problem of adversarial authorship attribution for deobfuscation. We show that adversarially trained authorship attributors are able to degrade the effectiveness of existing obfuscators from 20-30% to 5-10%. We also evaluate the effectiveness of adversarial training when the attributor makes incorrect assumptions about whether and which obfuscator was used. While there is a a clear degradation in attribution accuracy, it is noteworthy that this degradation is still at or above the attribution accuracy of the attributor that is not adversarially trained at all. Our results underline the need for stronger obfuscation approaches that are resistant to deobfuscation. | 10.48550/arxiv.2203.11849 | [
"https://arxiv.org/pdf/2203.11849v1.pdf"
] | 247,597,137 | 2203.11849 | 4b0970042edf3aff38c7d93890c4eb847016f232 |
Wanyue Zhai
Jonathan Rusert
Zubair Shafiq
Padmini Srinivasan
University of California
Davis
University of Iowa
University of California
Davis
University of Iowa
A Girl Has A Name, And It's ...* Adversarial Authorship Attribution for Deobfuscation†

* This paper is third in the series. See (Mahmood et al., 2019) and (Mahmood et al., 2020) for the first two papers.
† Our code and data are available at: https://github.com/reginazhai/Authorship-Deobfuscation
1 https://www.eff.org/deeplinks/2013/06/internet-and-surveillance-UN-makes-the-connection
Recent advances in natural language processing have enabled powerful privacy-invasive authorship attribution. To counter authorship attribution, researchers have proposed a variety of rule-based and learning-based text obfuscation approaches. However, existing authorship obfuscation approaches do not consider the adversarial threat model. Specifically, they are not evaluated against adversarially trained authorship attributors that are aware of potential obfuscation. To fill this gap, we investigate the problem of adversarial authorship attribution for deobfuscation. We show that adversarially trained authorship attributors are able to degrade the effectiveness of existing obfuscators from 20-30% to 5-10%. We also evaluate the effectiveness of adversarial training when the attributor makes incorrect assumptions about whether and which obfuscator was used. While there is a clear degradation in attribution accuracy, it is noteworthy that this degradation is still at or above the attribution accuracy of the attributor that is not adversarially trained at all. Our results underline the need for stronger obfuscation approaches that are resistant to deobfuscation.
Introduction
Recent advances in natural language processing have enabled powerful attribution systems 1 that are capable of inferring author identity by analyzing text style alone (Abbasi and Chen, 2008;Narayanan et al., 2012;Overdorf and Greenstadt, 2016;Stolerman et al., 2013;Ruder et al., 2016). There have been several recent attempts to attribute the authorship of anonymously published text using such advanced authorship attribution approaches. 2 This poses a serious threat to privacy-conscious individuals, especially human rights activists and journalists who seek anonymity for safety.
Researchers have started to explore text obfuscation as a countermeasure to evade privacy-invasive authorship attribution. Anonymouth (McDonald et al., 2012;Brennan et al., 2012) was proposed to identify words or phrases that are most revealing of author identity so that these could be manually changed by users seeking anonymity. Since it can be challenging for users to manually make such changes, follow up work proposed rule-based text obfuscators that can automatically manipulate certain text features (e.g., spelling or synonym) (Mc-Donald et al., 2013;Almishari et al., 2014;Keswani et al., 2016;Karadzhov et al., 2017;Mansoorizadeh et al., 2016;Kacmarcik and Gamon, 2006;Kingma and Welling, 2018). Since then more sophisticated learning-based text obfuscators have been proposed that automatically manipulate text to evade state-of-the-art authorship attribution approaches (Karadzhov et al., 2017;Shetty et al., 2018;Li et al., 2018;Mahmood et al., 2019;Gröndahl and Asokan, 2020).
In the arms race between authorship attribution and authorship obfuscation, it is important that both attribution and obfuscation consider the adversarial threat model (Potthast et al., 2018). While recent work has focused on developing authorship obfuscators that can evade state-of-the-art authorship attribution approaches, there is little work on developing authorship attribution approaches that can work against state-of-the-art authorship obfuscators. Existing authorship attributors are primarily designed for the non-adversarial threat model and only evaluated against non-obfuscated documents. Thus, it is not surprising that they can be readily evaded by state-of-the-art authorship obfuscators (Karadzhov et al., 2017;Shetty et al., 2018;Li et al., 2018;Mahmood et al., 2019;Gröndahl and Asokan, 2020).
To fill this gap, we investigate the problem of authorship deobfuscation, where the goal is to develop adversarial authorship attribution approaches that are able to attribute obfuscated documents. We study the problem of adversarial authorship attribution in the following two settings. First, we develop attributors that filter obfuscated documents using obfuscation/obfuscator detectors and then use an authorship attributor that is adversarially trained on obfuscated documents. Second, we develop adversarially trained authorship attributors that do not make assumptions about whether and which authorship obfuscator is used.
The results show that our authorship deobfuscation approaches are able to significantly reduce the adverse impact of obfuscation, which otherwise causes up to 20-30% degradation in attribution accuracy. We find that an authorship attributor that is purpose-built for obfuscated documents is able to improve attribution accuracy to within 5% of that without obfuscation. We also find that an adversarially trained authorship attributor is able to improve attribution accuracy to within 10% of that without obfuscation. Additionally, we evaluate the effectiveness of adversarial training when the attributor makes incorrect assumptions about whether and which obfuscator is used. We find that these erroneous assumptions degrade accuracy by up to 20%; however, this degradation is the same as or smaller than when the attributor is not adversarially trained, which can degrade accuracy by up to 32%.
Our key contributions include:
• investigating the novel problem of adversarial authorship attribution for deobfuscation;
• proposing approaches for adversarial authorship attribution; and
• evaluating robustness of existing authorship obfuscators against adversarial attribution.
Ethics Statement: We acknowledge that authorship deobfuscation in itself is detrimental to privacy. Our goal is to highlight a major limitation of prior work on authorship obfuscation under the adversarial threat model. We expect our work to foster further research into new authorship obfuscation approaches that are resistant to deobfuscation.
Related Work
Authorship attribution is the task of identifying the correct author of a document given a range of possible authors. It has been a long-standing topic, and researchers have developed a wide range of solutions to the problem. Earlier researchers focus more on analysis based on writing style features. These include the distribution of word counts and basic Bayesian methods (Mosteller and Wallace, 1963), different types of writing-style features (lexical, syntactic, structural, and contentspecific) (Zheng et al., 2006), and authors' choices of synonyms (Clark and Hannon, 2007). Other researchers combined machine learning and deep learning methods with stylometric features. Abbasi and Chen (2008) combine their rich feature set, "Writeprints", with an SVM. Brennan et al. (2012) improve "Writeprints" to reduce the computational load required of the feature set. Finally, more recent research focuses on fine-tuning pretrained models since they do not require predefined features sets. Ruder et al. (2016) tackle authorship attribution with a CNN, while Howard and Ruder (2018) introduce the Universal Language Model Fine-tuning (ULMFiT) which shows strong performance in attribution.
To the best of our knowledge, prior work lacks approaches for adversarial authorship deobfuscation. Prior work has shown that existing authorship attributors do not perform well against obfuscators. Brennan et al. (2012) present a manual obfuscation experiment which causes large accuracy degradation. Since this obfuscation experiment, much has been done in the area of authorship text obfuscation (Rao and Rohatgi, 2000; Brennan et al., 2012; McDonald et al., 2012, 2013; Karadzhov et al., 2017; Mahmood et al., 2019; Gröndahl and Asokan, 2020; Bo et al., 2019). In our research, we focus specifically on the state-of-the-art obfuscators Mutant-X (Mahmood et al., 2019) and DS-PAN. Other obfuscation methods are similarly vulnerable to adversarial training, as reinforced by Gröndahl and Asokan (2020).
Our proposed authorship attributor leverages adversarial training to attribute documents regardless of obfuscation. First described in (Goodfellow et al., 2014), adversarial training uses text produced by an adversary to train a model to be more robust. Adversarial training has seen success in other text domains including strengthening word embeddings (Miyato et al., 2016), better classification in crosslingual texts (Dong et al., 2020), and attacking classifiers (Behjati et al., 2019).
Methodology
In this section, we present our approaches for adversarial authorship attribution for deobfuscation.
Threat Model
We start by describing the threat model for the authorship deobfuscation attack. There is an arms race between an attacker (who desires to identify/attribute the author of a given document) and a defender (an author who desires privacy and therefore uses an obfuscator to protect their identity). Figure 1 illustrates the expected workflow between the defender and the attacker. The defender uses an obfuscator before publishing the documents and the attacker employs obfuscation and/or obfuscator detector as well as an adversarially trained attributor for deobfuscation.
Defender. The goal of the defender is to obfuscate a document so that it cannot be attributed to the author. The obfuscator takes as input an original document and obfuscates it to produce an obfuscated version that is expected to evade authorship attribution.
Attacker. The goal of the attacker is to use an attributor trained on documents from multiple authors to identify the author of a given document. The attacker assumes to know the list of potential authors in the traditional closed-world setting. We examine two scenarios: First, as shown in Figure 1a, the attacker assumes to know that the document is obfuscated and also the obfuscator used by the defender. In this scenario, the attacker is able to access the documents that are produced by the obfuscator and hence train an attributor for obfuscated documents from the obfuscator. Second, as shown in Figure 1b, the attacker assumes to know that the document is obfuscated and that there is a pool of available obfuscators, of which one is used by the defender. Note that the attacker does not know exactly which obfuscator from the pool was used by the defender. Thus, the attacker trains an attributor for documents that are obfuscated by any one of the pool of available obfuscators.
Obfuscation
We use two state-of-the-art text obfuscators . Document Simplification (DS-PAN). This approach obfuscates documents through rule-based sentence simplification . The transformation rules include lexical transformations, substitutions of contractions or expansions, and eliminations of discourse markers and fragments of text in parenthesis. This approach was one of the best performing in the annual PAN competition, a shared CLEF task . It was also one of the few approaches that achieves "passable" and even "correct" judgements on the soundness of obfuscated text (i.e., whether the semantics of the original text are preserved) (Hagen et al., 2017). We refer to this approach as DS-PAN.
Mutant-X. This approach performs obfuscation using a genetic-algorithm-based search framework (Mahmood et al., 2019). It iteratively makes changes to the input text based on attribution probability and semantics, so that obfuscation improves at each step. It is also a fully automated authorship obfuscation approach; it outperformed text obfuscation approaches from PAN and has since been used by other text obfuscation approaches (Gröndahl and Asokan, 2020). There are two versions of Mutant-X: Mutant-X writeprintsRFC, which uses Random Forests along with Writeprints-Static features (Brennan et al., 2012); and Mutant-X embeddingCNN, which uses a Convolutional Neural Network (CNN) classifier with word embeddings. We use the writeprintsRFC version because it achieves a larger drop in attribution accuracy and better semantic preservation compared to embeddingCNN.
Deobfuscation
We describe the design of the authorship attributor and our adversarial training approaches for deobfuscation.

Figure 1: Deobfuscation pipeline using obfuscation and/or obfuscator detectors for adversarial training.

Authorship Attributor. We use writeprintsRFC as the classifier for authorship attribution. More specifically, we use the Writeprints-Static feature set (Brennan et al., 2012), which includes lexical features at different levels, such as the word level (total number of words) and letter level (letter frequency), as well as syntactic features such as the frequency of functional words and part-of-speech tags. It is one of the most widely used stylometric feature sets and has consistently achieved high accuracy on different datasets and author sets while maintaining a low computational cost. We then use these features to train an ensemble random forest classifier with 50 decision trees.

Adversarial Training. The basic idea of adversarial training is to include perturbed/obfuscated inputs in the training set to improve the model's resistance to such adversarially obfuscated inputs (Goodfellow et al., 2014). It has been widely used in various domains including text classification. In our case, obfuscated texts are texts that vary slightly from the original texts, and these serve as adversarial examples. We examine how using these adversarial examples as training data influences the attributor's performance and whether it adds resilience against obfuscation. Based on the two scenarios described in Section 3.1 and shown in Figure 1, we propose two ways of adversarial training. In both cases, original texts from the list of possible authors are selected and prepared for obfuscation. For scenario 1, we train the attributor using documents obfuscated by a known obfuscator. For scenario 2, since the attacker does not assume to know the specific obfuscator used by the defender, we train the attributor using documents obfuscated by the pool of available obfuscators.
Experimental Setup
We describe the dataset, evaluation metrics, and experimental design to assess the effectiveness of our adversarial authorship attribution approaches for deobfuscation.
Dataset. Following previous research (Mahmood et al., 2019), we examine a publicly available dataset for the evaluation of our methodology. The Blog Authorship Corpus (Schler et al., 2006) contains over 600,000 blog posts from blogger.com. These posts span 19,320 unique authors. Previous research (Narayanan et al., 2012) found that authorship attribution gets harder when more authors are included. Based on the author selection in (Mahmood et al., 2019), we select a subset of 15 authors, each with 100 documents (compared to their 5 and 10 authors), for a more precise evaluation.

As shown in Figure 2, we train on various combinations of obfuscated documents. These documents are obfuscated by the obfuscators described in Section 3.2. When an attributor-dependent obfuscator (e.g. Mutant-X (Mahmood et al., 2019)) is used, the attributor will have access to the same training documents used to train the obfuscator. Otherwise, the attributor does not assume to have access to the attributor used by the obfuscator. To control for training size, when more than 1 obfuscator is used, we sample equal amounts of documents from each set of obfuscated documents. For example, if we train against 2 obfuscators, then 600 documents are sampled from each set of obfuscated documents to get a training set of size 1200.
To calibrate the obfuscated texts, we use the METEOR score (Banerjee and Lavie, 2005) to evaluate the soundness of the documents. The score for Mutant-X ranges from 0.3 to 0.7 (mean=0.46), and the score for DS-PAN ranges from 0.24 to 0.57 (mean=0.38), which is comparable to previous studies (Mahmood et al., 2019). An in-depth analysis of the METEOR score is reported in Appendix A.
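Such a soundness check could be computed as in the following hedged sketch, which scores each obfuscated document against its original with NLTK's METEOR implementation; the tokenisation and function names are illustrative, and the authors' exact evaluation script may differ.

```python
import numpy as np
from nltk.tokenize import word_tokenize
from nltk.translate.meteor_score import meteor_score
# Requires the NLTK 'punkt' and 'wordnet' data packages to be downloaded.

def meteor_stats(originals, obfuscated):
    """Mean, min, and max METEOR between each original document and its obfuscated version."""
    scores = [meteor_score([word_tokenize(orig)], word_tokenize(obf))
              for orig, obf in zip(originals, obfuscated)]
    return np.mean(scores), np.min(scores), np.max(scores)
```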
Metric. To quantify attribution performance on the 15-class problem, we calculate accuracy as:

accuracy = (# of correctly attributed documents) / (total # of documents)    (1)

Attack Scenarios. Figure 3 illustrates the flow of our experimental evaluation under different attack scenarios.

0. Baseline: For the baseline model, we assume a world with no obfuscation. The attacker is trained on original documents and is deployed on original documents.
1. Obfuscation-unaware-attacker: The first case we examine is when the defender actively seeks to hide author identity. Thus, the defender gains an advantage by obfuscating documents using either Mutant-X or DS-PAN in order to bypass the attacker. The attacker, however, remains unaware of obfuscation and trains the attributor only on original documents.
2. Obfuscation-aware-attacker with obfuscation detector: Next, we give knowledge of obfuscation to the attacker by introducing an obfuscation detector into the system. Previous research (Mahmood et al., 2020) shows that texts generated by existing obfuscators can be detected as obfuscated with high accuracy. The device for this type of detection is called an obfuscation detector. Hence, in this scenario we ask whether there is any benefit to the attacker if the text is identified as obfuscated before attribution. Since the attacker does not know which obfuscator was used by the defender, the attributor is trained on the combination of documents generated from DS-PAN and from Mutant-X. The defender is the same as in the previous scenario, i.e., it uses one of the obfuscators to generate documents.

3. Obfuscator-aware-attacker with obfuscator detector: We give additional information to the attacker. Previous research has shown that embedding watermarks and fingerprints that reveal the identity of the model into deep neural networks is a valid approach to protect the rights of shared trained models (Uchida et al., 2017; Zhang et al., 2018). Hence, it is reasonable to assume that there will be methods in the future to identify the watermarks of specific deep neural networks. Here, we propose the concept of an obfuscator detector, which can detect the specific obfuscator used. In this case, the attacker attributor is always trained on the documents generated by the same obfuscator as the defender: either documents generated from DS-PAN or from Mutant-X.
2i. Obfuscation-aware-attacker with incorrect obfuscation detector: Here we ask the question: what happens in scenario 2 if the obfuscation detector makes errors? The specific error addressed is that the detector classifies the text as obfuscated whereas it is actually an original. Under this condition, the attacker attributor is still trained on the combination of documents generated from DS-PAN and from Mutant-X. But the defender now presents an original document.
3i. Obfuscator-aware-attacker with incorrect obfuscator detector: When the obfuscator detector classifies incorrectly, it assumes that the defender uses a specific obfuscator when it actually uses a different one. The attacker attributor is trained on the documents generated by one of the obfuscators: either documents generated from DS-PAN or from Mutant-X. However, the defender uses a different obfuscator than the attacker to generate the documents.
Results
In this section, we present the results following the progression of various attack scenarios shown in Figure 3.
Major Scenarios
0. Baseline: The original authorship attributor has an accuracy of 76.7% when trained on the original documents and tested on original documents. To be considered effective, the attribution accuracy should be higher than 6.7%, the accuracy obtained when attributing among the 15 authors at random.
1. Obfuscation-unaware-attacker: Based on the first row of Table 1, the attribution accuracy drops from 76.7% to 50.7%, a decrease of 26%, when tested on DS-PAN obfuscated documents, while the accuracy when testing on Mutant-X obfuscated documents drops from 76.7% to 44.3%. The average drop in accuracy is from 76.7% to 47.5%, i.e., 29.2%. These results confirm, consistent with prior work on the performance of the obfuscators (Karadzhov et al., 2017; Mahmood et al., 2019), that the DS-PAN and Mutant-X obfuscators can successfully decrease the attribution accuracy of the original attributor.
2. Obfuscation-aware-attacker with obfuscation detector: The second row of Table 1 shows that attribution accuracy increases by 13.2% from 50.7% to 63.9% when tested on DS-PAN documents, and increases by 24.7% from 44.3% to 69% when tested on Mutant-X documents. The average accuracy is 66.4%, which increases from the previous 47.5% by about 19%. While the performance is still not comparable to the baseline results, the increase in accuracy from the previous scenario is significant, which suggests that the obfuscation detector would benefit the performance against obfuscated documents.
3. Obfuscator-aware-attacker with obfuscator detector: As shown on the third and forth row of Table 1, when trained only on DS-PAN documents, the accuracy tested on DS-PAN is 68.6%, with an increase of 17.9% from the first scenario; when trained only on Mutant-X documents, the accuracy tested on Mutant-X is 75.7%, with an increase of 31.4%. The average test accuracy is 71.1%, which increases by about 5% compared to the 66.4% in the previous case. From the results, we can see that having an obfuscator detector as well as an obfuscation detector is the most beneficial to improve the attribution accuracy from obfuscated texts.
Error Conditions
Although the obfuscation and obfuscator detectors are quite accurate, they are not perfect. Hence, we test the success of the attacker when the obfuscation detector and obfuscator detector are incorrect.
2i. Obfuscation-aware-attacker with incorrect obfuscation detector: As shown in the first column of the fourth row of Table 2, attribution accuracy decreases by 8.4% from the baseline 76.7% to 68.3%, but remains higher than the average of Attack Scenario 2 (66.4%). The result shows that when the obfuscation detector produces wrong results, performance is affected but remains relatively high. Thus, having an obfuscation detector is generally good for the attacker at little cost.
3i. Obfuscator-aware-attacker with incorrect obfuscator detector: From the second and third rows of Table 2 we see that when the attacker is trained only on DS-PAN documents, the accuracy tested on Mutant-X is 57.3%, a drop of 18.4% compared to training only on Mutant-X documents (75.7%). When the attacker is trained only on Mutant-X documents, the accuracy tested on DS-PAN is 48.5%, a drop of 20.1% compared to training only on DS-PAN documents (68.6%). The average test accuracy is 52.9%, which is lower than training on the same obfuscator but higher than the results of Attack Scenario 1 above (50.7% and 44.3%). When the obfuscator detector gives incorrect results, attribution accuracy does not reach its best performance, but it is still higher than when training only on original documents. Hence, training on obfuscated documents tends to benefit attribution accuracy in all cases.
Combined Condition
Here the attacker simply uses originals and obfuscated documents from all available obfuscators for adversarial training of the attributor.
4. Obfuscator-aware-attacker that does not rely on an obfuscator detector or obfuscation detector:
This result is shown in the last row of Table 2. Attribution accuracy when tested on original documents drops from 76.7% to 66.3%, but increases by 10.5% from 50.7% to 61.2% when tested on DS-PAN, and by 24.5% from 44.3% to 68.8% when tested on Mutant-X. The average accuracy is 65%, about 8% above the average of the former three conditions (57.2%). Even though the attacker does not know whether a document is obfuscated, or by which obfuscator, it still achieves a large boost in attribution accuracy through adversarial training. Therefore, although the previous configurations achieve higher performance, training on a combination of these documents is a valid approach when time and resources are limited.
Discussion
Next, we look more closely into the results of adversarial training to better understand them. Moving from Attack Scenario 1 to 3 in the confusion matrices of Figure 4, we see an increase in color density and percentage on the diagonal, which signifies the general increase in accuracy as the training documents become more specific. Consistent with this, the non-diagonal areas becoming more transparent indicates a reduction in classification errors. At the author level, we observe that almost all of the authors show increases in accuracy on the diagonal cells across the three scenarios, showing that adversarial training is effective even on authors with different styles.
General Author Analysis
Looking more closely at each author, we see that Author 9 is the easiest to classify: performance is always 100%. Author 6, on the other hand, is relatively hard to attribute; the best performance for Author 6 is only 35%, obtained in the most effective Attack Scenario 3. Figure 6 presents another view on performance. It shows the percentage of errors made for each author out of all the errors in the three scenarios combined (note: the sum of all errors in the figure is 100%). Thus, the errors made for Author 1 under Scenario 1 constitute 3.18% of the total errors across the three scenarios. We observe that the color is generally darker in Scenario 1 and gradually lightens in Scenario 2 and then Scenario 3. Again, this indicates the benefit of having more specific training data. Looking more closely within each scenario, we see that the attributor of Attack Scenario 1 tends to misclassify Authors 5 and 8 the most, but the attributors for Scenarios 2 and 3 learn these two authors more effectively, thereby reducing mistakes. For Attack Scenario 3, the most misclassified author is Author 6, accounting for 3.76% of all errors; this percentage is still an improvement over the 4.34% of the previous two scenarios. Motivated by these observations, we next investigate shifts in performance for a specific author.
Individual Author Analysis
We assign labels to the 15 authors in the dataset and select Original Author 15 for more detailed analysis. We choose Author 15 because its accuracy increases among the most, from 45% to 80%. To find out the reasons behind this increase, we perform PCA on all of the DS-PAN documents whose original author is Author 15. We use the Writeprints-Static feature set, which has a total of 555 features. To preserve the most significant features for attribution, we select the 25 most important features from the original writeprintsRFC attributor and reduce them through PCA so that we can visualize the documents in three-dimensional plots.
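The dimensionality reduction just described can be sketched as follows, assuming the 25 selected Writeprints-Static feature values are already extracted into a matrix; the feature extraction itself and the feature-importance ranking are not shown.

```python
# Sketch: reduce the 25 selected Writeprints-Static features of Author 15's
# DS-PAN documents to 3 PCA components and split points by attribution outcome.
import numpy as np
from sklearn.decomposition import PCA

def project_to_3d(features: np.ndarray) -> np.ndarray:
    # features: (num_documents, 25) matrix of the selected feature values
    return PCA(n_components=3).fit_transform(features)

def split_by_attribution(points_3d: np.ndarray, correct: np.ndarray):
    # correct: boolean array; True where the attributor predicted Author 15
    return points_3d[correct], points_3d[~correct]  # (green, red) point sets
```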
In the plots of Figure 5, each dot represents a document: green dots are attributed correctly, while red dots are attributed incorrectly. In Figure 5a, the incorrectly attributed documents are mainly gathered in a cluster, suggesting that the attributor has trouble discriminating documents that are similar to each other. But as we go from left to right, the documents in the cluster are gradually attributed correctly. This trend shows that the attributor is getting better at distinguishing between similar documents. Hence, we can infer that adversarial training improves attribution accuracy by better discriminating documents that resemble each other.
Comparing DS-PAN and Mutant-X
Figure 6: Percentage of misclassified documents for each author across attack scenarios

In Attack Scenarios 2, 3, and 4, the test sets using DS-PAN for obfuscation yield worse attribution accuracy than those using Mutant-X. Our analysis of obfuscated documents showed that DS-PAN makes both a greater number of changes as well as more significant changes as compared to Mutant-X. Thus, we surmise that DS-PAN results in a larger degradation in attribution accuracy because the attacker's training set contains text that is less similar to the original text. However, the changes made by DS-PAN also have a side effect in that they lower the soundness of the obfuscated text, as reflected by lower METEOR scores: the mean METEOR score for DS-PAN is 0.38 as compared to 0.46 for Mutant-X. A more detailed analysis of METEOR scores and semantic similarity between obfuscated and original texts is reported in Appendix A.
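As an illustration of the soundness measure, the snippet below scores one original/obfuscated pair from Table 3 with NLTK's METEOR implementation; the paper does not state which METEOR implementation it used, so this is only an assumed stand-in.

```python
# Sketch: sentence-level METEOR between an original and an obfuscated sentence.
# Requires nltk with the 'wordnet' corpus downloaded; recent NLTK versions
# expect pre-tokenized input.
from nltk.translate.meteor_score import meteor_score

original = "Homework is minimal, but the reading load is daunting.".split()
obfuscated = "Prep is minimum, but the read load is daunt".split()

score = meteor_score([original], obfuscated)  # list of references, then hypothesis
print(f"METEOR: {score:.2f}")
```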
Insights into Adversarial Training
The performance gain of adversarial training comes from a "noisy" training dataset comprising obfuscated documents as well as from knowledge about the obfuscator. To disentangle these two factors, we compare the accuracy improvements of the second and third rows of Table 2 against the Mutant-X obfuscated test documents. The improvement in attribution accuracy is 13% when DS-PAN obfuscated documents are used for training, and a further 18% (31% overall) when Mutant-X obfuscated documents are used for training. This difference (13% vs. 18%) indicates that although having a noisy dataset helps, knowledge of the specific obfuscator is likely more crucial to improving attribution performance. The same trend holds for DS-PAN obfuscated test documents.
Concluding Remarks
In this work, we explored the novel problem of adversarial authorship attribution for deobfuscation. We demonstrate that adversarial training is able to significantly reduce the adverse impact of existing text obfuscators on authorship attribution accuracy. We found that an adversarially trained authorship attributor improves attribution accuracy to within 5-10% of its accuracy without obfuscation. While the adversarially trained attributor achieved its best accuracy when trained on documents obfuscated by the respective obfuscator, it achieves reasonable accuracy even when trained on documents obfuscated by a pool of obfuscators. When the adversarially trained attributor makes erroneous assumptions about the obfuscator used to obfuscate documents, we note a degradation in attribution accuracy. It is noteworthy, however, that this degradation still leaves accuracy similar to or better than that of the baseline attributor that is not adversarially trained.
Our results shed light on the future of the ensuing arms race between obfuscators and attributors. Most notably, we find that the effectiveness of adversarial training is somewhat limited if the obfuscators continue to employ new and improved methods that are not available to attributors for adversarial training. Therefore, it is important to continue the development of new and improved text obfuscation approaches that are resistant to deobfuscation (Bevendorff et al., 2019; Bo et al., 2019; Gröndahl and Asokan, 2020; Hlavcheva et al., 2021). On the other hand, recent work on understanding and improving the transferability of adversarial attacks can inform the development of better adversarial attributors that might work well even for unknown obfuscators (Tramèr et al., 2017; Zheng et al., 2020; He et al., 2021; Mireshghallah and Berg-Kirkpatrick, 2021).
Finally, our experiments were limited to the closed-world setting where the universe of potential authors is assumed to be known by the attributor. Further research is needed to investigate whether (and how much) adversarial algorithms are effective in the open-world setting.
Figure 2: Generalized deobfuscation training process using adversarial training

The 1500 documents are divided into an 80-20% split for training and testing, respectively. Specifically, 80 documents from each author are used in the training set while the remaining 20 documents are used in the test set.
Figure 3: Progression of various attack scenarios
Figure 4 presents the confusion matrices produced from DS-PAN obfuscated documents tested on Attack Scenarios 1, 2, and 3, respectively. Rows represent the Original Authors, while the columns represent the Predicted Authors. The values in the matrices are the percentage of the original documents that are classified as a specific author.
Figure 5: Attribution performance of Author 15 with PCA under different attack scenarios
Table 2: Accuracy of adversarial training on various combinations of test documents
(Figure 5 panels: (a) Attack Scenario 1, (b) Attack Scenario 2. PCA scatter plots in which correctly attributed documents are shown in green and incorrectly attributed documents in red.)
https://www.nbcchicago.com/news/politics/Science-May-Help-Identify-Opinion-Columnist-492649561.html
4. Obfuscator-aware-attacker that does not rely on an obfuscator detector or obfuscation detector: Since the previous scenarios require the proposed obfuscation and obfuscator detectors, they are less efficient. Hence, a simpler, more efficient solution is to train on all the documents at once. In this simplified version, the attacker's attributor is trained on the combination of original documents, documents generated from DS-PAN, and documents generated from Mutant-X. Since this is the combined condition, the defender may or may not use an obfuscator, and will choose between the two possible obfuscators to generate documents.
A Qualitative Analysis

We conduct an analysis to evaluate the quality of the obfuscated text. We first evaluate the semantics of the obfuscated text with respect to the original text using METEOR scores. The results show that the METEOR scores of the obfuscated text are comparable to those reported in prior studies. We also conduct a qualitative analysis of the obfuscated text.

First, we evaluate the quality of obfuscated documents from the two obfuscators. We use the METEOR score to measure the soundness of the obfuscated text in terms of the semantic similarity between the original and the obfuscated text. Figure 7 shows the distribution of the METEOR score for Mutant-X and DS-PAN. The plot shows that the METEOR scores for Mutant-X range from 0.3 to 0.7 (mean = 0.46), and the METEOR scores for DS-PAN range from 0.24 to 0.57 (mean = 0.38). Compared to the METEOR results reported in Mahmood et al. (2019), where the METEOR score for Mutant-X ranges from 0.48 to 0.55 (mean = 0.51) and the METEOR score for other baseline models ranges from 0.32 to 0.46 (mean = 0.38), the two obfuscators used in this work achieve similar results at preserving the semantics of the original texts.

Table 3 contains examples from the two obfuscators showing different types of changes. Synonym replacement is common in both systems; examples are (street <-> sidewalk) and (student <-> pupil). There are also changes in word form: (run <-> running) and (waited <-> wait) preserve the morpheme but change the tense of the word. It is also worth noting that DS-PAN tends to change the form of abbreviations, such as (I'm <-> I am) and (to have <-> to've). In general, the transformations make sense to readers and preserve most of the original meaning, but there are also cases (like the last row) where the transformations change the content and break the grammar.

Index | Original | DS-PAN | Mutant-X
1 | I'm not an expert | I'm not An expert | I am non an expert
2 | What was the first print run? | What was the first print running? | What was the ane print run?
3 | The New York Times ran a Styles section profile two weeks before publication | The New York Times ran a Styles editor profile two weeks before publication | the new_york_times run a styles division profile two calendar_week before publishing
4 | Cornelius walks in off of the street. | Cornelius walks in off of the sidewalk | Cornelius walks in away of the street.
5 | We've discovered librarians are very networked and seem to know about everything before it happens | We've found librarians are extremely networked and seem to believe about everything before it happens. | we suffer detect bibliothec are really network and appear to cognize about everything before it happen
6 | Homework is minimal, but the reading load is daunting. | Homework is minor, but the reading load is daunting. | Prep is minimum, but the read load is daunt
7 | Some traces of the original layout remain | Some traces of the manifest makeover remain | Some trace of the original layout stay
8 | Some professors seem happy to have a visitor | Some professors seem happy to become a pilgrim | Some prof appear happy to've a visitor
9 | He expects interest in the Nancy Pearl doll to be strongest in Seattle, where she is best known. | He expects grateful in the Nancy Pearl mannequin to be strongest in Seattle, where she is best known. | He expect involvement in the nancy_pearl dolly to be strongest in seattle, where she's well cognize.
10 | When the sales slot came open a few months later, she applied. | When the sales position came open a few years later, she applied. | When the cut-rate_sale time_slot arrive open_up a few calendar_month she utilize.
11 | Professors often mistake her for a student | Professors often mistake her for a campus | Prof frequently err her for a pupil
12 | They may look sleepy, but many used-book stores are thriving. | They may look sleepy, although many used-book stores are mature | they may search sleepy-eyed, but many used-book stores are boom
13 | The perfumed bear she gave to me lost his scent | The perfumed bobcat she gave to me lost his odor | The perfume bear she render to me lose his aroma
14 | I suppose I would have just waited until the morning if I were her. | I reckon I will rest just waited until the afternoon if I were She. | I presuppose i'd suffer precisely wait until the morn if i were her.
Ahmed Abbasi and Hsinchun Chen. 2008. Writeprints: A stylometric approach to identity-level identification and similarity detection in cyberspace. ACM Transactions on Information Systems (TOIS), 26(2):7.
Mishari Almishari, Ekin Oguz, and Gene Tsudik. 2014. Fighting Authorship Linkability with Crowdsourcing. In ACM Conference on Online Social Networks (COSN).
Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and Summarization.
Melika Behjati, Seyed-Mohsen Moosavi-Dezfooli, Mahdieh Soleymani Baghshah, and Pascal Frossard. 2019. Universal adversarial attacks on text classifiers. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7345-7349. IEEE.
Janek Bevendorff, Martin Potthast, Matthias Hagen, and Benno Stein. 2019. Heuristic authorship obfuscation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1098-1108.
Haohan Bo, Steven HH Ding, Benjamin Fung, and Farkhund Iqbal. 2019. ER-AE: Differentially-private text generation for authorship anonymization. arXiv preprint arXiv:1907.08736.
Michael Brennan, Sadia Afroz, and Rachel Greenstadt. 2012. Adversarial stylometry: Circumventing authorship recognition to preserve privacy and anonymity. ACM Transactions on Information and System Security (TISSEC), 15:12:1-12:22.
Daniel Castro, Reynier Ortega, and Rafael Muñoz. 2017. Author masking by sentence transformation - notebook for PAN at CLEF 2017. In CLEF 2017 Evaluation Labs and Workshop - Working Notes Papers, pages 11-14.
Daniel Castro-Castro, Reynier Ortega Bueno, and Rafael Munoz. 2017. Author Masking by Sentence Transformation. In Notebook for PAN at CLEF.
Jonathan H. Clark and Charles J. Hannon. 2007. An Algorithm for Identifying Authors Using Synonyms. In Eighth Mexican International Conference on Current Trends in Computer Science (ENC 2007), pages 99-104. IEEE.
Xin Dong, Yaxin Zhu, Yupeng Zhang, Zuohui Fu, Dongkuan Xu, Sen Yang, and Gerard De Melo. 2020. Leveraging adversarial training in self-learning for cross-lingual text classification. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1541-1544.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
Tommi Gröndahl and N Asokan. 2020. Effective writing style transfer via combinatorial paraphrasing. Proc. Priv. Enhancing Technol., 2020(4):175-195.
Matthias Hagen, Martin Potthast, and Benno Stein. 2017. Overview of the author obfuscation task at PAN 2017: Safety evaluation revisited. In CLEF (Working Notes).
Xuanli He, Lingjuan Lyu, Qiongkai Xu, and Lichao Sun. 2021. Model extraction and adversarial transferability, your BERT is vulnerable! In North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).
Yulia Hlavcheva, Victoria Bobicev, Olga Kanishcheva, et al. 2021. Language-independent features for authorship attribution on Ukrainian texts. In CEUR Workshop Proceedings, volume 2833, pages 134-143.
Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL).
Gary Kacmarcik and Michael Gamon. 2006. Obfuscating document stylometry to preserve author anonymity. In Proceedings of the COLING/ACL on Main Conference Poster Sessions, pages 444-451. Association for Computational Linguistics.
Georgi Karadzhov, Tsvetomila Mihaylova, Yasen Kiprov, Georgi Georgiev, Ivan Koychev, and Preslav Nakov. 2017. The case for being average: A mediocrity approach to style masking and author obfuscation. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 173-185. Springer.
Yashwant Keswani, Harsh Trivedi, Parth Mehta, and Prasenjit Majumder. 2016. Author Masking through Translation. In Notebook for PAN at CLEF 2016, pages 890-894.
Diedrik P Kingma and Max Welling. 2018. Auto-encoding variational bayes. In Proceedings of NAACL-HLT, pages 1865-1874.
Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2018. TextBugger: Generating Adversarial Text Against Real-world Applications. In Network and Distributed Systems Security (NDSS) Symposium.
Asad Mahmood, Faizan Ahmad, Zubair Shafiq, Padmini Srinivasan, and Fareed Zaffar. 2019. A girl has no name: Automated authorship obfuscation using Mutant-X. In Privacy Enhancing Technologies Symposium (PETS).
Asad Mahmood, Zubair Shafiq, and Padmini Srinivasan. 2020. A girl has a name: Detecting authorship obfuscation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2235-2245.
Muharram Mansoorizadeh, Taher Rahgooy, Mohammad Aminiyan, and Mahdy Eskandari. 2016. Author obfuscation using WordNet and language models. In Notebook for PAN at CLEF 2016.
Andrew W.E. McDonald, Sadia Afroz, Aylin Caliskan, Ariel Stolerman, and Rachel Greenstadt. 2012. Use fewer instances of the letter 'i': Toward writing style anonymization. In International Symposium on Privacy Enhancing Technologies Symposium, pages 299-318. Springer.
Andrew W.E. McDonald, Jeffrey Ulman, Marc Barrowclift, and Rachel Greenstadt. 2013. Anonymouth Revamped: Getting Closer to Stylometric Anonymity. In PETools: Workshop on Privacy Enhancing Tools, volume 20.
Fatemehsadat Mireshghallah and Taylor Berg-Kirkpatrick. 2021. Style Pooling: Automatic Text Style Obfuscation for Improved Classification Fairness. In EMNLP.
Takeru Miyato, Andrew M Dai, and Ian Goodfellow. 2016. Adversarial training methods for semi-supervised text classification. arXiv preprint arXiv:1605.07725.
Frederick Mosteller and David L Wallace. 1963. Inference in an authorship problem: A comparative study of discrimination methods applied to the authorship of the disputed federalist papers. Journal of the American Statistical Association, 58(302):275-309.
Arvind Narayanan, Hristo Paskov, Neil Zhenqiang Gong, John Bethencourt, Emil Stefanov, Eui Chul Richard Shin, and Dawn Song. 2012. On the Feasibility of Internet-Scale Author Identification. In IEEE Symposium on Security and Privacy (SP), pages 300-314. IEEE.
Rebekah Overdorf and Rachel Greenstadt. 2016. Blogs, twitter feeds, and reddit comments: Cross-domain authorship attribution. In Privacy Enhancing Technologies Symposium (PETS).
Martin Potthast, Francisco Rangel, Michael Tschuggnall, Efstathios Stamatatos, Paolo Rosso, and Benno Stein. 2017. Overview of PAN'17. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 275-290. Springer.
Martin Potthast, Felix Schremmer, Matthias Hagen, and Benno Stein. 2018. Overview of the author obfuscation task at PAN 2018: A new approach to measuring safety. In Notebook for PAN at CLEF 2018.
Josyula R Rao and Pankaj Rohatgi. 2000. Can pseudonymity really guarantee privacy? In USENIX Security Symposium, pages 85-96.
Sebastian Ruder, Parsa Ghaffari, and John G Breslin. 2016. Character-level and multi-channel convolutional neural networks for large-scale authorship attribution. arXiv:1609.06686.
Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W Pennebaker. 2006. Effects of age and gender on blogging. In AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs, volume 6, pages 199-205.
Rakshith Shetty, Bernt Schiele, and Mario Fritz. 2018. A4NT: Author Attribute Anonymity by Adversarial Training of Neural Machine Translation. In USENIX Security Symposium.
Ariel Stolerman, Rebekah Overdorf, Sadia Afroz, and Rachel Greenstadt. 2013. Classify, but verify: Breaking the closed-world assumption in stylometric authorship attribution. In IFIP Working Group, volume 11, page 64.
Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. 2017. The space of transferable adversarial examples. arXiv preprint arXiv:1704.03453.
Yusuke Uchida, Yuki Nagai, Shigeyuki Sakazawa, and Shin'ichi Satoh. 2017. Embedding watermarks into deep neural networks. In Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval, pages 269-277.
Jialong Zhang, Zhongshu Gu, Jiyong Jang, Hui Wu, Marc Ph Stoecklin, Heqing Huang, and Ian Molloy. 2018. Protecting intellectual property of deep neural networks with watermarking. In Proceedings of the 2018 on Asia Conference on Computer and Communications Security, pages 159-172.
Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, and Atul Prakash. 2020. Efficient adversarial training with transferable adversarial examples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1181-1190.
Rong Zheng, Jiexun Li, Hsinchun Chen, and Zan Huang. 2006. A framework for authorship identification of online messages: Writing-style features and classification techniques. Journal of the American Society for Information Science and Technology (JASIST).
| [] |
[
"Minimum Description Length Recurrent Neural Networks",
"Minimum Description Length Recurrent Neural Networks"
] | [
"Nur Lan nlan@ens.fr \nEcole Normale Supérieure\n\n\nTel Aviv University\n\n",
"Michal Geyer michalgeyer@mail.tau.ac.il \nTel Aviv University\n\n",
"Emmanuel Chemla chemla@ens.fr \nEcole Normale Supérieure\n\n\nEHESS\nPSL University\nCNRS\n\n",
"Roni Katzir rkatzir@tauex.tau.ac.il \nTel Aviv University\n\n"
] | [
"Ecole Normale Supérieure\n",
"Tel Aviv University\n",
"Tel Aviv University\n",
"Ecole Normale Supérieure\n",
"EHESS\nPSL University\nCNRS\n",
"Tel Aviv University\n"
] | [] | We train neural networks to optimize a Minimum Description Length score, i.e., to balance between the complexity of the network and its accuracy at a task. We show that networks optimizing this objective function master tasks involving memory challenges and go beyond context-free languages. These learners master languages such as a n b n , a n b n c n , a n b 2n , a n b m c n+m , and they perform addition. Moreover, they often do so with 100% accuracy. The networks are small, and their inner workings are transparent. We thus provide formal proofs that their perfect accuracy holds not only on a given test set, but for any input sequence. To our knowledge, no other connectionist model has been shown to capture the underlying grammars for these languages in full generality. | 10.1162/tacl_a_00489 | [
"https://arxiv.org/pdf/2111.00600v4.pdf"
] | 240,354,389 | 2111.00600 | f68197dbbcd2246028d19059d25e57fecc2b721f |
Minimum Description Length Recurrent Neural Networks
Nur Lan nlan@ens.fr
Ecole Normale Supérieure
Tel Aviv University
Michal Geyer michalgeyer@mail.tau.ac.il
Tel Aviv University
Emmanuel Chemla chemla@ens.fr
Ecole Normale Supérieure
EHESS
PSL University
CNRS
Roni Katzir rkatzir@tauex.tau.ac.il
Tel Aviv University
Minimum Description Length Recurrent Neural Networks
We train neural networks to optimize a Minimum Description Length score, i.e., to balance between the complexity of the network and its accuracy at a task. We show that networks optimizing this objective function master tasks involving memory challenges and go beyond context-free languages. These learners master languages such as a n b n , a n b n c n , a n b 2n , a n b m c n+m , and they perform addition. Moreover, they often do so with 100% accuracy. The networks are small, and their inner workings are transparent. We thus provide formal proofs that their perfect accuracy holds not only on a given test set, but for any input sequence. To our knowledge, no other connectionist model has been shown to capture the underlying grammars for these languages in full generality.
Introduction
A successful learning system is one that makes appropriate generalizations. For example, after seeing the sequence 1,0,1,0,1 we might suspect that the next element will be 0. If we then see 0, we might be even more confident that the next input element will be 1. Artificial neural networks have shown impressive results across a wide range of domains, including linguistic data, computer vision, and many more. They excel at generalizing when large training corpora and computing resources are available, but they face serious challenges that become particularly clear when generalizing from small input sequences like the one presented above.
First, they tend to overfit the learning data. To avoid this, they require external measures to control their own tendency for memorization (such as regularization) as well as very large training corpora. Moreover, standard regularization techniques fall short in many cases, as we show below.

* Both authors contributed equally to this work.
Second, even when successful, they tend to produce non-categorical results. That is, they output very high probabilities to target responses, but never 100%. Adequate, human-like generalization, on the other hand involves having both a probabilistic guess (which neural networks can do) and, at least in some cases, a clear statement of a categorical best guess (which neural networks cannot do).
Third, these networks are often very big, and it is generally very hard to inspect a given network and determine what it is that it actually knows (though see Lakretz et al., 2019 for a recent successful attempt to probe this knowledge in the context of linguistics).
Some of the challenges above arise from the reliance of common connectionist approaches on backpropagation as a training method, which keeps the neural architecture itself constant throughout the search. The chosen architecture must therefore be large enough to capture the given task, and it is natural to overshoot in terms of size. Furthermore, it must allow for differentiable operations to be applied, which prevents certain categorical patterns from even being expressible.
In this paper, we propose to investigate a training method which differs from common approaches in that its goal is to optimize a Minimum Description Length objective function (MDL; Rissanen, 1978). This amounts to minimizing error as usual, while at the same time trying to minimize the size of the network (a similar pressure to a Bayesian size prior). As a result, the objective function offers a guide to determining the size of the network (a guide that error minimization alone does not provide), which means that the architecture itself can evolve during learning and typically can decrease in size. One potential side effect is that optimization cannot be done through back-propagation alone. We here use a genetic algorithm to search through the very large search space of neural networks of varying sizes.
We find that MDL-optimized networks reach adequate generalizations from very small corpora, and they avoid overfitting. The MDL-optimized networks are all small and transparent; in fact, we provide proofs of accuracy that amount to infinite and exhaustive test sets. They can also provide deterministic outputs when relevant (expressing pure 100% confidence). We illustrate this across a range of tasks involving counters, stacks, and simple functions such as addition.
Previous work
Our primary concern in this paper is the objective function. The idea of applying a simplicity criterion to artificial neural networks dates back at least to Hinton and Van Camp (1993), who minimize the encoding length of a network's weights alongside its error, and to Zhang and Mühlenbein (1993, 1995), who use a simplicity metric that is essentially the same as the MDL metric that we use in the present work (and describe below). Schmidhuber (1997) presents an algorithm for discovering networks that optimize a simplicity metric that is closely related to MDL. Simplicity criteria have been used in a range of works on neural networks, including recent contributions (e.g., Ahmadizar et al., 2015 and Gaier and Ha, 2019). Outside of neural networks, MDL - and the closely related Bayesian approach to induction - have been used in a wide range of models of linguistic phenomena, in which one is often required to generalize from very limited data (see Horning, 1969, Berwick, 1982, Stolcke, 1994, Grünwald, 1996, and de Marcken, 1996, among others, and see Katzir, 2016 and Rasin et al., 2021 for recent proposals to learn full phonological grammars using MDL within two different representational frameworks). In the domain of program induction, Yang and Piantadosi (2022) have recently used a Bayesian learner equipped with a simplicity prior to learn formal languages similar to the ones we present below.
Turning to the optimization algorithm that we use to search for the MDL-optimal network, our work connects with the literature on using evolutionary programming to evolve neural networks. Early work that uses genetic algorithms for various aspects of neural network optimization includes Miller et al. (1989), Montana and Davis (1989), Whitley et al. (1990), and Zhang and Mühlenbein (1993, 1995). These works focus on feedforward architectures, but Angeline et al. (1994) present an evolutionary algorithm that discovers recurrent neural networks and test it on a range of sequential tasks that are very relevant to the goals of the current paper. Evolutionary programming for neural networks remains an active area of research (see Schmidhuber, 2015 and Gaier and Ha, 2019, among others, for relevant references).
Our paper connects also with the literature on using recurrent neural networks for grammar induction and on the interpretation of such networks in terms of symbolic knowledge (often formal-language theoretic objects). These challenges were already taken up by early work on recurrent neural networks (see Giles et al., 1990 and Elman, 1990, among others), and they remain the focus of recent work (see, e.g., Wang et al., 2018 and Weiss et al., 2018a). Jacobsson (2005) and Wang et al. (2018) provide discussion and further references.
Learner
Objective: Minimum Description Length
Consider a hypothesis space G of possible grammars, and a corpus of input data D. In our case, G is the set of all possible network architectures expressible using our representations, and D is a set of input sequences. For a given G ∈ G we may consider the ways in which we can encode the data D given G. The MDL principle (Rissanen, 1978), a computable approximation of Kolmogorov Complexity (Solomonoff, 1964; Kolmogorov, 1965; Chaitin, 1966), aims at the G that minimizes |G| + |D : G|, where |G| is the size of G and |D : G| is the length of the shortest encoding of D given G (with both components typically measured in bits). Minimizing |G| favors small, general grammars that often fit the data poorly. Minimizing |D : G| favors large, overly specific grammars that overfit the data. By minimizing the sum, MDL aims at an intermediate level of generalization: reasonably small grammars that fit the data reasonably well.
The term |D : G| corresponds to the surprisal of the data D according to the probability distribution defined by G (i.e., the negative log of the probability assigned to targets by the network). The term |G| depends on an encoding scheme for mapping networks onto binary strings, described below.
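Putting the two terms together, the score being minimized can be sketched as below; network_bits stands for the length in bits of the network's encoding (defined in the next subsection), and the data term sums the surprisal, in bits, of each observed target symbol under the network's predictions. The helper names are ours, not the paper's.

```python
# Sketch: MDL score = |G| + |D:G|, both measured in bits.
import math

def data_encoding_bits(target_probabilities):
    # target_probabilities: probability the network assigned to each observed target.
    return sum(-math.log2(p) for p in target_probabilities)

def mdl_score(network_bits, target_probabilities):
    return network_bits + data_encoding_bits(target_probabilities)
```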
Our networks and their encoding
The MDL learner explores a space of directed graphs, made of an arbitrary number of units and weighted connections between them. We describe the actual search space explored in the experiments below, by explaining how these networks are uniquely encoded to produce an encoding length |G|.
Example
We will consider networks such as the one represented in Fig. 1. It consists of two input units (yellow units 1 and 2) and one output unit with a sigmoid activation (blue unit 3). The network has one forward connection (from unit 1 to 3) and one recurrent connection (unit 2 to 3), represented by a dashed arrow. Recurrent connections feed a unit with the value of another unit at the previous time step, and thus allow for the development of memory across the different time steps of the sequential tasks we are interested in. Here, unit 3 is fed with input 2 from the previous step. The connection weights are w 1,3 = 0.5 and w 2,3 = 2. Unit 3 is also fed a bias term b 3 = 1 represented by a sourceless arrow. We will now explain how such a network is represented to measure its encoding size |G|.
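For concreteness, the single output unit of this example network computes the following at each time step; the sketch uses the weights, bias, and sigmoid activation given above.

```python
# Sketch: one time step of the example network (Fig. 1).
# Unit 3 applies a sigmoid to w_1,3 * x1(t) + w_2,3 * x2(t-1) + b_3.
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def unit3_output(x1_t: float, x2_prev: float) -> float:
    w13, w23, b3 = 0.5, 2.0, 1.0   # forward weight, recurrent weight, bias
    return sigmoid(w13 * x1_t + w23 * x2_prev + b3)
```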
Preliminary: encoding numbers
To ensure unique readability of a network from its string representation we use the prefix-free encoding for integers from Li and Vitányi (2008):
E(n) = 1...1 0 bin(n), where the unary prefix 1...1 has length |bin(n)| (the length of the binary representation of n), 0 serves as a separator, and bin(n) is the binary representation of n.
Encoding a network
The encoding of a network is the concatenation of (i) its total number of units, and (ii) the ordered concatenation of the encoding of each of its units.
Units
The encoding of a unit includes its activation function, the number of its outgoing connections, the encoding of each of its outgoing connections, and its bias weight, if any.
Activation functions
Possible activation functions are: the linear activation (identity), ReLU, sigmoid, square, as well as the floor function and the unit step function (returns 0 for inputs ≤ 0 and 1 otherwise). To build an intuitive measure of simplicity into the model's choice of activation functions, we add a cost to each function, encoded as a unary string: the linear activation has no cost; square costs 2 bits; ReLU, sigmoid, and floor cost 4 bits; and the unit step function costs 8 bits.
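These per-function costs amount to a simple lookup table, sketched below with the values listed in the text (in the actual encoding the cost is realized as a unary string).

```python
# Sketch: encoding cost, in bits, charged for each available activation function.
ACTIVATION_COST_BITS = {
    "linear": 0,     # identity: free
    "square": 2,
    "relu": 4,
    "sigmoid": 4,
    "floor": 4,
    "unit_step": 8,  # returns 0 for inputs <= 0 and 1 otherwise
}
```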
Connections and weights
A connection's encoding includes its target unit number (each connection is specified within the description of its source unit, hence the source needs not be encoded), its weight, and its type: forward (0) or recurrent (1).
To simplify the representation of weights compared to classical neural networks, and to make it easier to mutate them in the genetic algorithm described below, we represent weights as signed fractions ±n/d, which are serialized into bits by concatenating the codes for the sign (1 for +, 0 for −), the numerator, and the denominator. For example, the weight w_ij = +2/5 would be encoded as the sign bit followed by E(2) and E(5), as illustrated in the sketch below.
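A sketch of this serialization is given below, assuming E(n) is the Li and Vitányi-style code reconstructed above (a unary prefix of length |bin(n)|, a 0 separator, then bin(n)); the paper's actual implementation may differ in low-level details.

```python
# Sketch: prefix-free encoding of positive integers and signed fractions.
# Assumes E(n) = '1' * len(bin(n)) + '0' + bin(n), following Li and Vitanyi (2008).

def encode_int(n: int) -> str:
    b = format(n, "b")                 # binary representation of n
    return "1" * len(b) + "0" + b      # unary length prefix, separator, payload

def encode_fraction(sign: int, numerator: int, denominator: int) -> str:
    sign_bit = "1" if sign >= 0 else "0"
    return sign_bit + encode_int(numerator) + encode_int(denominator)

# e.g. the weight +2/5 from the text:
print(encode_fraction(+1, 2, 5))       # sign bit, then E(2), then E(5)
```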
Search algorithm
Our interest in this paper is the MDL objective function and not the training method. However, identifying the MDL-optimal network is hard: the space of possible networks is much too big for an exhaustive search, even in very simple cases. We therefore need to combine the objective function with a suitable search procedure. We chose to use a genetic algorithm (GA; Holland, 1975), which frees us from the constraints coming from backpropagation and is able to optimize the network structure itself rather than just the weights of a fixed architecture. For simplicity and to highlight the utility of the MDL metric as a standalone objective, we use a vanilla implementation of GA, summarized in Algorithm 1. The algorithm is initialized by creating a population of N random neural networks. A network is initialized by randomizing the following parameters: activation functions, the set of forward and recurrent connections, and the weights of each connection. Networks start with no hidden units. In order to avoid an initial population that contains mostly degenerate (specifically, disconnected) networks, output units are forced to have at least one incoming connection from an input unit.

Algorithm 1:
  function TOURNAMENTSELECTION(pop):
      T ← t random networks from pop
      winner ← argmin_MDL(T)
      loser ← argmax_MDL(T)
      return winner, loser
  end function

  population ← ∅                        ▷ Population initialization
  while |population| < N do:
      generate a random network net
      add net to population
  end while
  generation ← 0
  while generation < Gen do:            ▷ Evolution loop
      for N steps do:
          parent, loser ← TOURNAMENTSELECTION(population)
          offspring ← mutate(parent)
          eval(offspring)               ▷ MDL score
          remove loser from population
          add offspring to population
      end for
      generation ← generation + 1
  end while
  return argmin_MDL(population)
The algorithm is run for Gen generations, where each generation involves N steps of selection followed by mutation. During selection, networks compete for survival into the next generation based on their fitness, i.e., their MDL score, where lower MDL is better. A selected network is then mutated using one of the following operations: add/remove a unit; add/remove a forward or recurrent connection; add/remove a bias; mutate a weight or bias by changing its numerator or denominator, or flipping its sign; and change an activation function. These mutations make it possible to grow networks and prune them when necessary, and to potentially reach any architecture that can be expressed using our building blocks. The mutation implementations are based on Stanley and Miikkulainen (2002). 1

On top of the basic GA we use the Island Model (Gordon and Whitley, 1993; Adamidis, 1994; Cantú-Paz, 1998), which divides a larger population into 'islands' of equal size N, each running its own GA as described above, periodically exchanging a fixed number of individuals through a 'migration' step. This compartmentalization serves to mitigate against premature convergence, which often occurs in large populations. The simulation ends when all islands complete Gen generations, and the best network from all islands is taken as the solution.
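The selection-and-mutation loop of Algorithm 1 (without the Island Model) can be sketched in a few lines of Python; random_network, mutate_network, and mdl_score stand in for the initialization, mutation operators, and MDL evaluation described above, and are not the paper's actual implementations.

```python
# Sketch: vanilla GA with tournament selection, following Algorithm 1.
import random

def tournament_selection(population, mdl_score, t=2):
    tournament = random.sample(population, t)
    winner = min(tournament, key=mdl_score)   # lower MDL is better
    loser = max(tournament, key=mdl_score)
    return winner, loser

def run_ga(random_network, mutate_network, mdl_score, N=500, generations=25000):
    population = [random_network() for _ in range(N)]
    for _ in range(generations):
        for _ in range(N):
            parent, loser = tournament_selection(population, mdl_score)
            offspring = mutate_network(parent)
            population.remove(loser)          # the loser is replaced by the offspring
            population.append(offspring)
    return min(population, key=mdl_score)     # best network found
```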
Experiments
We ran tasks based on several classical formal-language learning challenges. We use both deterministic and probabilistic expectations to test the ability of a learner to work on probabilistic and symbolic-like predictions. In addition to showing that the MDL learner performs well on test sets, we provide proofs that it performs well on the whole infinite language under consideration.
General setup and comparison RNNs
All simulations reported in this paper used the following hyper-parameters: 250 islands, each with population size 500 (total 125,000 networks), 25,000 generations, tournament size 2, migration size 2, and a migration interval of 30 minutes or 1,000 generations (earliest of the two). The number of generations was chosen empirically to allow enough time for convergence. Each task was run three times with different random seeds. 2

To compare the performance of the MDL-optimized recurrent neural networks (MDLRNNs) with classical models, we trained standard RNNs on the same tasks, varying their architecture - GRU (Cho et al., 2014), LSTM (Hochreiter and Schmidhuber, 1997), and Elman networks (Elman, 1990) - as well as the size of their hidden state vectors (2, 4, 32, 128), weight regularization method (L1/L2/none), and the regularization constant in case regularization was applied (λ = 1.0/0.1/0.01), totaling 84 RNN configurations. Each configuration was run three times with different random seeds. These RNNs were trained with a cross-entropy loss, which corresponds to the |D : G| term divided by the number of characters in the data. 3 Table 1 summarizes the results for both MDLRNNs and classical RNNs for all the tasks that will be described below. For each task, the representative network for each model (out of all configurations and random seeds) was chosen based on performance on the test set, using MDL scores for MDLRNNs and cross-entropy for RNNs.
It should be noted that model selection based on test performance is at odds with the premise of MDL: by balancing generalization and data fit during training, MDL automatically delivers a model which generalizes well beyond the training data; MDL also does away with the post-hoc, trial-and-error selection of hyper-parameters and regularization techniques inherent in classical models. In other words, MDL models can just as well be selected based on training performance. This is in contrast to standard RNNs, for which the training-best model is often the one that simply overfits the data the most. We show that even when given post-hoc advantage, RNNs still underperform. 4,5

2 All experimental material and the source code for the model are available at https://github.com/taucompling/mdlrnn.

3 All RNNs were trained using the Adam optimizer (Kingma and Ba, 2015) with learning rate 0.001, β1 = 0.9, and β2 = 0.999. The networks were trained by feeding the full batch of training data for 1,000 epochs. The cross-entropy loss for RNNs is calculated using the natural logarithm, converted in Table 1 to base 2 for comparison with MDL scores.
General measures
We will report on two measures: (i) accuracy, which we explain below based on each task's unique properties; (ii) cross-entropy, averaged per character for better comparability across set sizes, and compared with a baseline value calculated from the probability distribution underlying each task. Most notably, on some occasions we report these measures on whole, infinite languages. This is possible because, by design, MDL-optimized networks are just large enough for their task, allowing us to fully understand their inner workings.
Experiment I: counters and functions
We test our model's ability to learn formal languages that require the use of one or multiple counters: a^n b^n, a^n b^n c^n, a^n b^n c^n d^n. These languages can be recognized using unbounded counting mechanisms that keep track of the number n of a's and balance the other characters accordingly. We also test our model's ability to learn languages which not only encode integers as above, but also operate over them: a^n b^m c^(n+m) (addition) and a^n b^2n (multiplication by two). We did not test general multiplication through the language a^n b^m c^(nm) for practical reasons, namely that the length of the sequences quickly explodes.
Language modeling
The learner is fed with the elements from a sequence, one input after the next, and at each time step its task is to output a probability distribution for the next character. Following Gers and Schmidhuber (2001), each string starts with the symbol #, and the same symbol serves as the target prediction for the last character in each string. If the vocabulary contains n letters, the inputs and outputs are one-hot encoded over n input units (in yellow in the figures), and the outputs are given in n units (in blue). To interpret these n outputs as a probability distribution we zero negative values and normalize the rest to sum to 1. In case of a degenerate network that outputs all 0's, the probabilities are set to the uniform value 1/n.
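The conversion from raw output activations to a probability distribution described above can be sketched as follows (illustrative only; function and variable names are ours).

def outputs_to_probs(outputs):
    # Zero out negative activations, then normalize; fall back to uniform if everything is 0.
    clipped = [max(0.0, o) for o in outputs]
    total = sum(clipped)
    if total == 0:
        return [1.0 / len(outputs)] * len(outputs)
    return [c / total for c in clipped]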
Setup
Each task was run with data set sizes of 100 and 500. The training sets were generated by sampling positive values for n (and m, when relevant) from a geometric distribution with p = .3. The maximal values K observed for n and m in our batches of size 100 and 500 were 14 and 22, respectively.
We test the resulting networks on all unseen sequences for n in [K+1, K+1001]. For a^n b^m c^(n+m) we test on n and m in [K+1, K+51], i.e., the subsequent 2,500 unseen pairs.
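A minimal sketch of how such training strings could be generated (our illustration, not the authors' code), assuming n is drawn from a geometric distribution over positive integers with p = .3 and each string is prefixed with the # symbol as described above:

import random

def sample_geometric(p=0.3):
    # number of trials until the first success; support {1, 2, ...}
    n = 1
    while random.random() > p:
        n += 1
    return n

def make_anbn():
    n = sample_geometric()
    return "#" + "a" * n + "b" * n

training_set = [make_anbn() for _ in range(500)]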
Only parts of the sequences that belong to the formal languages presented here can be predicted deterministically, e.g., for a^n b^n, the deterministic parts are the first a (assuming n > 0), all b's except the first one, and the end-of-sequence symbol. For each of the tasks in this section, then, we report a metric of deterministic accuracy, calculated as the number of matches between the output symbol predicted with maximum probability and the ground truth, relative to the number of steps in the data that can be predicted deterministically.
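In code, the metric can be sketched as below (our illustration); predictions and targets are per-step symbols, and deterministic_mask marks the steps that are predictable with certainty for the language at hand.

def deterministic_accuracy(predictions, targets, deterministic_mask):
    hits = sum(1 for p, t, d in zip(predictions, targets, deterministic_mask) if d and p == t)
    return hits / sum(deterministic_mask)

For example, for the string "#aabb" of a^2 b^2, the deterministic steps are the prediction of the first a after #, of the second b after the first b, and of the final end-of-sequence #.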
Results
The performance of the resulting networks is presented in Table 1. In Figures 3-6, we show the networks that were found and their typical behavior on language sequences. Thanks to their low number of units and connections, we are able to provide simple walkthroughs of how each network operates. We report the following measures:
Deterministic accuracy: perfect for almost all tasks, both with small and large training sets. The MDL learner achieves perfect accuracy for the tasks a^n b^n and a^n b^2n, both with small and large training sets. The learner also achieves perfect accuracy for a^n b^n c^n and a^n b^m c^(n+m) with a larger training set, and in fact the networks found there would be better optima also for the respective smaller training sets, therefore showing that the suboptimal results for the small training sets are only due to a limitation of the search, and that perfect accuracy should in principle be reachable there too with a more robust search.
The only task for which MDLRNNs did not reach 100% accuracy is a^n b^n c^n d^n. Since the other tasks show that our representations make it possible to evolve counters, we attribute this failure to the search component, assuming a larger population or more generations are needed, rather than to a lack of expressive power; networks for this task require more units for inputs and outputs, which enlarges the number of possible mutations the search can apply at each step.
Cross-entropy: near perfect. For all tasks but a^n b^n c^n d^n, the MDLRNN per-character average cross-entropy is also almost perfect with respect to the optimal cross-entropy calculated from the underlying probability distribution.
RNNs: no perfect generalization. Among the competing models, no standard RNN reached 100% deterministic accuracy on the test sets, and all RNNs reached suboptimal cross-entropy scores, indicating that they failed to induce the grammars and probability distributions underlying the tasks. In terms of architecture size, the best-performing RNNs are often those with fewer units, while L1 and L2 regularizations do not yield winning models except for one task.
Transparency supports formal proofs that results are perfect for the whole, infinite language. For all tasks but a^n b^n c^n d^n, then, deterministic accuracy and cross-entropy are perfect/excellent on training and test sets. Because the MDL networks are small and transparent, we can go beyond these results and demonstrate formally that the task is performed perfectly on the entire infinite underlying language. To our knowledge, such results have never been provided for any classic neural network in these tasks or any other.
Theorem 4.1. The a^n b^n network represented in Fig. 3 outputs the correct probability for each character, for each sequence in the a^n b^n language, with a margin of error below 10^−6.
Proof. Table 2 traces the value of each unit at each step in a legal sequence for the relevant network. When normalizing the outputs to obtain probabilities, the values obtained are the exact ground-truth values, up to the contribution of σ(−15) to that normalization (sigmoid is abbreviated as σ), which is negligible compared to all other positive values (the largest deviance is σ(−15)/(1 + σ(−15)) ≈ 3·10^−7, observed during the b's). The network not only accepts valid a^n b^n sequences, but also rejects other sequences, visible by the zero probability it assigns to irrelevant outputs at each phase in Table 2. More informally, the network uses a single hidden unit (6) as a counter, recognizable from the recurrent loop onto itself. The counter is incremented by 1 for each a (+2 from unit 1, −1 from the bias), and then decremented by 1 for each b (signaled by a lack of a, which leaves only the −1 bias as input to the counter).

Figure 3: The network found by the MDL learner for the a^n b^n task, for a training set with data set size 500. See Theorem 4.1 for a description of how this network accepts any a^n b^n sequence and why it rejects any other sequence.

Table 2: Unit values (columns) during each phase of a valid a^n b^n sequence (rows). The second value in each output-unit cell is the final normalized probability.

Phase        | Unit 6 | Unit 4 / P(a) | Unit 5 / P(b) | Unit 3 / P(#)
Initial #    | 0      | 7/3 / ∼1      | 0 / ∼0        | σ(−15) / ∼0
k-th a       | k      | 7/3 / ∼.7     | 1 / ∼.3       | σ(−15) / ∼0
k-th b, k<n  | n−k    | −2/3 / 0      | 1 / ∼1        | σ(−15) / ∼0
n-th b       | 0      | −2/3 / 0      | 0 / 0         | σ(−15) / 1
Theorem 4.2. The network represented in Fig. 4 outputs the correct probability for each character, for each sequence in the a^n b^n c^n language, with a margin of error below 10^−6.
Proof. The proof is again obtained by tracing the values each unit holds at each phase of a valid sequence in the language, see Table 3.
The network uses two hidden units that serve as counters for the number of a's (unit 8) and c's (unit 9). Each occurrence of a simultaneously feeds the output unit for a (5) and the a counter (8) connected to the b output (6), using weights to create the correct probability distribution between a's and b's. Once a's stop, P(a) flatlines, and the a counter (8) starts decreasing until n b's are seen. Another counting system has evolved in unit 9, which counts the number of a's and b's (signaled by lack of c's), and then decreases for each c, finally triggering the end-of-sequence output #. Note how the model avoids holding a third counter for the number of b's, by reusing the a counter. This makes it possible to disconnect the b input unit (2), which minimizes encoding length.

Figure 4: The network found for the a^n b^n c^n task for the larger training set. See Theorem 4.2 for a description of how this network accepts only sequences of the language a^n b^n c^n.

Table 3: Unit values (columns) during each phase of a valid a^n b^n c^n sequence (rows). The second value in each output-unit cell is the final normalized probability.

Phase        | Unit 8 | Unit 9      | Unit 4 / P(#)   | Unit 5 / P(a) | Unit 6 / P(b) | Unit 7 / P(c)
Initial #    | 1      | −1/3        | −1/3 / 0        | 1 / ∼1        | 0 / 0         | σ(−15) / ∼0
k-th a       | 1−k    | −(k+1)/3    | −(k+1)/3 / ∼0   | 7/3 / ∼.7     | 1 / ∼.3       | σ(−15) / ∼0
k-th b, k<n  | k+1−n  | −(k+n+1)/3  | −(k+n+1)/3 / 0  | 0 / 0         | 1 / ∼1        | σ(−15) / ∼0
n-th b       | 1      | −(2n+1)/3   | −(2n+1)/3 / 0   | 0 / 0         | 0 / 0         | σ(−15) / 1
k-th c, k<n  | 1+k    | (2k−2n−1)/3 | 2(k+1−n)/3 / 0  | 0 / 0         | 0 / 0         | σ(−15) / 1
n-th c       | 1+n    | −1/3        | 2/3 / ∼1        | 0 / 0         | 0 / 0         | σ(−15) / ∼0
Theorem 4.3. The a^n b^2n network represented in Fig. 5 outputs the correct probability for each character, for each sequence in the a^n b^2n language, with a margin of error below 10^−6.

Proof. The network is similar to the one found for a^n b^n (Fig. 3). The proof that this network is accurate is also similar (Theorem 4.1), the only difference being that the hidden unit is incremented by 2 instead of 1 for each a input.
Figure 5: The network found for the a^n b^2n task for the larger training set. See Theorem 4.3 for a description of how this network accepts only sequences of the language a^n b^2n.
Theorem 4.4. The network represented in Fig. 6 outputs the correct probability for each character, for each sequence in the a^n b^m c^(n+m) language, with a margin of error below .2 (and below 10^−4 for deterministic steps, i.e., probabilities 0 or 1).[6]

Proof. In Table 4 we trace the values of each unit while a valid sequence in a^n b^m c^(n+m) is fed to the network. We do not represent the internal memory unit 8; its value is the seventh of that of unit 4.
Here, a network with a single counter (unit 8) has evolved which recognizes the language with 100% accuracy. While one would think that this task requires at least two counters - for n and m - the pressure for parsimony leads the model to evolve a more compact solution: since the number of c's is always n + m, and no other symbols appear between the first and last #, the network uses the signal of lack of c's as an indication of positive occurrences of either a or b. This might raise a suspicion that the network recognizes out-of-language sequences such as balanced yet unordered strings, e.g. abbaaccccc. In practice, however, the network imposes a strict order: a receives a positive probability only after # or a; b only after a or b; and c receives a significant proportion of the probability mass only as a last resort.

Table 4: Unit values during each phase of a valid a^n b^m c^(n+m) sequence. The second value in each output-unit cell is the output probability.

Phase          | Unit 5 / P(a) | Unit 6 / P(b) | Unit 7 / P(c) | Unit 4 / P(#)
Initial #      | 31/2 / ∼.996  | 0 / 0         | σ(−4) / ∼0    | 7/2 / ∼.004
k-th a         | 11/2 / ∼.71   | 7/2 / ∼.29    | σ(−4) / ∼0    | 7/2·(1−k) / 0
k-th b         | 0 / 0         | .04 / ∼.69    | σ(−4) / ∼.31  | 7/2·(1−n−k) / 0
k-th c, k<m+n  | 0 / 0         | 0 / 0         | σ(−4) / 1     | 7/2·(1−n−m+k) / 0
(m+n)-th c     | 0 / 0         | 0 / 0         | σ(−4) / ∼0    | 7/2 / ∼1
Experiment II: Dyck-1 vs. Dyck-2
In previous tasks, we showed the capability of MDLRNNs to evolve counters. A counter is also what is needed to recognize the Dyck-1 language of well-matched parentheses sequences. In the Dyck-2 language, however, there are two pairs of opening and closing characters, such as parentheses and brackets. Counters are not sufficient then, and a stack is needed to additionally track whether the next closing element must be a parenthesis or a bracket (and similarly for any Dyck-n language for n > 1, Suzgun et al., 2019). We ask here whether MDL-optimized networks can evolve not only counters but also stacks.
Setup
The setup is that of a language modeling task, as in Experiment I. For Dyck-1, the training sequences were generated from a PCFG with probability p = .3 of opening a new bracket, with data set sizes 100 and 500. The test sets contained 50,000 sequences generated from the same grammar that were not seen during training. For Dyck-2, a fully operational stack is needed in order to recognize the language. We thus first make sure that such a network exists in the search space. We do this by manually designing a network that implements a fully operational stack.
We use this network as a baseline for comparison with the results of the MDL simulations.
The stack network and a detailed description of its mechanism are given in Fig. 8. We add two additional building blocks in order to implement this mechanism: the modulo 3 activation function used in the 'pop' implementation, and a second type of unit which applies multiplication to its inputs, in order to create gates such as the ones used in LSTM networks. Because the inclusion of these new representations enlarges the search space, and because the baseline network is larger in terms of number of units than the networks found in previous tasks (23 vs. 7-10), we double the genetic algorithm's overall population size (500 islands vs. 250), allowing more hypotheses to be explored. We also enlarge the training set to 20,000 samples, which allows networks with costlier |G| terms to evolve. Here again the training sequences were generated from a PCFG with probability p = .3 for opening a new bracket or parenthesis, and tested on 50,000 novel sequences generated from the same grammar.
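A PCFG of this kind can be sampled with a few lines of code (our sketch; details such as the exact grammar rules used by the authors are assumptions). Strings are prefixed with # as in the other tasks.

import random

def dyck1(p=0.3):
    # S -> "[" S "]" S with probability p, S -> "" otherwise
    if random.random() < p:
        return "[" + dyck1(p) + "]" + dyck1(p)
    return ""

def dyck2(p=0.3):
    if random.random() < p:
        opening, closing = random.choice([("[", "]"), ("(", ")")])
        return opening + dyck2(p) + closing + dyck2(p)
    return ""

training_set = ["#" + dyck2() for _ in range(20000)]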
Dyck sequences don't have any sub-parts which can be predicted deterministically (one can always open a new bracket), which makes the deterministic accuracy reported above irrelevant. We report instead a metric we call categorical accuracy, defined as the fraction of steps where the network predicts probability p ≥ ε for symbols that could appear at the next step, and p < ε for irrelevant symbols. For example, for Dyck-2, when the upcoming closing character is a bracket (i.e., the last seen opening character was a bracket), the network should assign probability 0 to the closing parenthesis; and similarly for the end-of-sequence symbol as long as a sequence is unbalanced. Because classical RNNs cannot assign categorical 0 probabilities to outputs due to their reliance on softmax layers, we use ε = 0.005 as a categorical margin.
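A minimal sketch of this metric (our illustration, not the authors' code): legal_sets[t] is assumed to hold the set of symbols that may legally appear at step t.

def categorical_accuracy(prob_seq, legal_sets, vocab, eps=0.005):
    hits = 0
    for probs, legal in zip(prob_seq, legal_sets):
        # every legal symbol must get p >= eps, every illegal symbol p < eps
        hits += all((probs[idx] >= eps) == (sym in legal) for sym, idx in vocab.items())
    return hits / len(prob_seq)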
Results
Full performance details are given in Table 1.
For the Dyck-1 language, the networks for the small and large training sets reach average test cross-entropy of 1.11 and 0.89 respectively, compared to an optimal 0.88. This result is in line with those of Experiment I, where we have shown that our representations are capable of evolving counters, which are sufficient for recognizing Dyck-1 as well. An Elman RNN reaches a better cross-entropy score, but worse categorical accuracy, for the smaller training set, while MDLRNN wins with the larger set, reaching a score close to the optimum and 100% categorical accuracy.
Theorem 4.5. When brackets are well balanced, the Dyck-1 network in Fig. 7 correctly predicts that no closing bracket can follow by assigning it probability 0. Conversely, when brackets are unbalanced, it assigns probability 0 to the end-of-sequence symbol.

Proof. Call o the number of open brackets in a prefix. Throughout a Dyck-1 sequence, unit 6 holds the value 1 − o: it holds the value 1 after the initial '#'; then +1 is added for each '[', and −1 for each ']'. The output probabilities in the balanced (o = 0) and unbalanced (o > 0) cases are given in Table 5. The theorem follows from the fact that P(#) = 0 if o > 0, and P(]) = 0 if o = 0. In the target language, we note that opening brackets have a constant probability of P([) = .3, while in the found network this probability decreases with o (visible in unit 4's output probability, Table 5). This makes a potential difference for high values of o, which however are very rare (o decreases with probability .7 at all time steps).
For Dyck-2, the MDL model fails to reach the architecture of the baseline manual network, or another architecture with a similar cross-entropy score, reaching a network which has a worse MDL score than the baseline (148,497 vs. 147,804). Accordingly, MDLRNN reaches a non-perfect 99.27% categorical accuracy, compared to 89.01% for RNNs, which reflects both models' failure to correctly balance certain sequences. Both models tie at 1.19 cross-entropy, close to the optimal 1.18.
Since we confirmed that the baseline architecture exists in the search space, we conclude that reaching a fully operational stack network is hindered by the non-exhaustive search procedure, rather than by the MDL metric. This may be solvable by tweaking the hyper-parameters or putting more computational resources into the search. It could be, however, that this difficulty is due to a more interesting property of the task at hand. It has been claimed that evolutionary algorithms tend to struggle with so-called 'deceptive' optimization problems -tasks for which series of intermediate good solutions don't necessarily lead to a global optimum (see overview in Lehman and Stanley, 2011). For the stack network, it could be the case that a stack is only operational in its full form, and that intermediate networks deceive and lead the search to local minima, like the one found in the current simulation.
A recent line of work has addressed the need for stacks by manually designing stack-like mechanisms using continuous representations, and integrating them manually into standard architectures (Graves et al., 2014; Joulin and Mikolov, 2015; Suzgun et al., 2019). Indeed, when they are explicitly augmented with manually-designed continuous stack emulators, neural networks seem to be able to capture nonregular grammars such as the one underlying the Dyck-2 language. Similarly, we could allow our search to add stacks in one evolution step. This could overcome the risk of a deceptive search target mentioned above. If successful, we can expect this approach to come with all the benefits of the MDL approach: the winning network would remain small and transparent, and it would eventually contain a memory stack only if this is intrinsically needed for the task.

Figure 8: A manually-designed network implementing a fully operational stack, which recognizes the Dyck-2 language. The network uses an additional type of unit, which calculates the product of its inputs instead of summing them, making it possible to create gate units similar to those of LSTM networks (gray striped units in the figure). The stack's memory is implemented as an integer, stored here in unit 13; the integer is shifted to the left or right in base 3, making it possible to store the value 2 for a parenthesis and 1 for a bracket, visible in their respective input weights. Unit 12 is the 'push' gate, which opens when a non-zero value flows from the opening bracket or parenthesis inputs. Unit 16 is the 'pop' gate, opened by a non-zero input from a closing symbol. The recurrent connection from memory unit 13 to unit 11 performs the base-3 left shift by multiplying the memory by 3. For 'pop', a right shift is applied by dividing the memory by 3. To extract the value of the topmost element, modulo 3 is applied. The bias for unit 22 handles outputting the probability p of opening a new bracket/parenthesis.
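The arithmetic behind this base-3 encoding can be illustrated in a few lines (our sketch of the mechanism described in the Figure 8 caption, not the network itself):

def push(stack, symbol):
    # symbol: 1 for an opening bracket, 2 for an opening parenthesis
    return stack * 3 + symbol      # base-3 left shift, then write the new top element

def pop(stack):
    return stack // 3              # base-3 right shift discards the top element

def top(stack):
    return stack % 3               # 0: empty, 1: bracket on top, 2: parenthesis on top

For example, pushing '[' and then '(' onto an empty stack (0) yields 1 and then 5; top(5) is 2, signaling that the next closing symbol must be a parenthesis, and pop(5) returns 1, exposing the bracket again.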
Experiment III: general addition
In the previous experiments, we saw that MDL-optimized networks are capable of representing integers and adding them in what amounts to unary representation (see the a^n b^m c^(n+m) language). Here, we show that addition can be performed when the numbers and outputs are represented in a different format. Specifically, we consider the familiar task of adding two integers in binary representation when the numbers are fed bit-by-bit in parallel, starting from the least significant bit. While this problem has good approximate solutions in terms of standard RNNs,[7] we will show that our model provides an exact solution. As far as we are aware, this has not been shown before.
Setup
In this setting, we diverge from a language modeling task. The network here is fed at each time step i with a tuple of binary digits, representing the digits n_i and m_i of two binary numbers n and m, starting from the least significant bit. The two input units are assigned the values n_i and m_i. The output is interpreted as the predicted probability that (n + m)_i = 1, that is, that 1 is the i-th digit in the sum (n + m). Output values are capped to make them probabilities: values at or below 0 are interpreted as probability 0, values at or above 1 are interpreted as probability 1.

[7] An example implementation that reportedly works up to a certain number of bits: https://github.com/mineshmathew/pyTorch_RNN_Examples
The model was trained on two corpus sizes: one that contained all pairs of integers up to K = 10 (total 100 samples), and a larger set of all pairs up to K = 20 (total 400). The resulting networks were then tested on the set of all pairs of integers n, m ∈ [K + 1, K + 251], i.e., 62,500 pairs not seen during training. Since the task is fully deterministic, we report a standard accuracy score.
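A minimal sketch (ours, not the authors' code) of how the bit-parallel input/target pairs can be produced, assuming "all pairs up to K" means the K × K pairs of non-negative integers below K and that bits are listed least-significant first:

def to_bits(x, width):
    return [(x >> i) & 1 for i in range(width)]                 # least significant bit first

def make_example(n, m):
    width = max(n, m).bit_length() + 1                           # one extra position for a final carry
    inputs = list(zip(to_bits(n, width), to_bits(m, width)))     # (n_i, m_i) tuples, one per time step
    targets = to_bits(n + m, width)                               # (n + m)_i, one per time step
    return inputs, targets

train = [make_example(n, m) for n in range(10) for m in range(10)]   # all pairs up to K = 10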
Results
MDLRNNs reached 100% accuracy on both test sets, and an optimal cross-entropy score of zero. Fig. 9 shows the MDLRNN result for the larger training set. It provably does perfect addition, with perfect confidence, for all pairs of integers:
Theorem 4.6. For the net in Fig. 9, the output unit at time step i is the i-th digit of the sum of the inputs.
Proof. Call c3_{i−1} the value of unit 3 at time step i−1; this value is the carry-over for the next time step, feeding unit 4 through their recurrent connection at time step i. This can be proven in two steps. (1) At the first time step i = 1 the carry-over going into unit 4 is 0, since recurrent inputs are 0 by default at the first time step. (2) By induction, c4_i is the sum of the relevant carry-over (c3_{i−1}) and the two input digits at time i. The combination of the 1/2 multiplication and the floor operation extracts a correct carry-over value from that sum and stores it in unit 3. From there, we see that c2_i holds the correct binary digit: the sum of current inputs and carry-over (from c4_i), minus the part to be carried over next (from −2 × c3_i). Again, the task is learned perfectly and in a readable fashion. As a side remark, the network obtained here can also naturally be extended to perform addition of more than 2 numbers, simply by adding the necessary inputs for the additional digits and connecting them to cell 4. To our knowledge no other RNN has been proven to hold a carry-over in memory for an unbounded number of digits, i.e., to perform general addition of any arbitrary pair of numbers. The best competing classical RNNs trained here were never able to reach more than 79.4% accuracy on the test sets, indicating that they learned a non-general way to do addition.
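The recurrence the proof describes can be written out directly (our transcription of the computation, with unit roles as comments; not extracted network code):

def add_step(n_i, m_i, carry):
    unit4 = n_i + m_i + carry          # carry-over plus the two input digits
    new_carry = unit4 // 2             # the 1/2 multiplication followed by floor (unit 3)
    digit = unit4 - 2 * new_carry      # the output digit (unit 2)
    return digit, new_carry

def add(bits_n, bits_m):
    # bits_n, bits_m: equal-length lists of bits, least significant first
    carry, out = 0, []
    for n_i, m_i in zip(bits_n, bits_m):
        digit, carry = add_step(n_i, m_i, carry)
        out.append(digit)
    return out

Running add on the least-significant-first encodings of any two numbers reproduces the binary sum digit by digit, which mirrors why the network generalizes to arbitrary pairs.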
Objective function probe
In order to further probe the value of the MDL objective function - and to isolate the effects of the objective function, which is our main focus, from those of the training method and the activation functions - we ran four additional simulations using variations of MDL while keeping the rest of the setting unchanged. The variants of the objective function that we tested are: (i) |G| alone, i.e., only the description length of the network is minimized; (ii) |D : G| alone, i.e., the model only optimizes training data fit, similarly to a cross-entropy loss in traditional models; (iii)-(iv) replacing |G| with traditional L1 and L2 regularization terms.
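Schematically, the compared objectives can be written as follows (our sketch; g_len and data_fit stand for the |G| and |D : G| terms, both in bits, and weights for the network's connection weights):

def mdl(g_len, data_fit):
    return g_len + data_fit                                     # full MDL objective: |G| + |D : G|

def g_only(g_len, data_fit):
    return g_len                                                # variant (i)

def fit_only(g_len, data_fit):
    return data_fit                                             # variant (ii), analogous to plain cross-entropy

def l1_objective(data_fit, weights, lam):
    return data_fit + lam * sum(abs(w) for w in weights)        # variant (iii)

def l2_objective(data_fit, weights, lam):
    return data_fit + lam * sum(w * w for w in weights)         # variant (iv)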
The different objective functions were tested on the a^n b^n task using the same hyper-parameters given in Sec. 4.1. Table 6 summarizes the performance for each resulting network. As expected, when |G| alone is minimized, the result is a degenerate network with no hidden units or connections. Conversely, |D : G|-only training results in a network growing large and picking up on accidental regularities in the training set. The overfitting leads to below-optimal cross-entropy on the training set. Test cross-entropy is infinite because the model assigns a categorical zero probability to some possible targets. Both L1 and L2 regularizations indirectly constrain the encoding length of the resulting networks and have the advantage of being compatible with backpropagation search. However, these constraints are not as effective as pure MDL in avoiding overfitting (cross-entropy is below optimal on the training set and above on the test set).

Table 6: Cross-entropy and number of units and connections on the a^n b^n task using different objective functions; MDL yields ground-truth optimal CE for both training and test.
Conclusion
Classical RNNs optimized for accuracy can partially recognize nonregular languages and generalize beyond the data up to a certain n (Gers and Schmidhuber, 2001; Weiss et al., 2018b). However large this n may be, the failure of these networks to fully generalize to arbitrary values of n reveals that they fail to lock in on the correct grammars that underlie these tasks. We found that an MDL-optimized learner arrives at networks that are reliably close to the true distribution with small training corpora, for classically challenging tasks. In several cases, the networks achieved perfect scores. Beyond the usual evaluation in terms of performance on test sets, the networks lent themselves to direct inspection and provided an explicit statement of the pattern that generated the corpus.
Figure 1: Example network, encoded in Fig. 2.

Figure 2: Binary encoding of the network in Fig. 1.

Algorithm 1: Genetic algorithm (function TournamentSelection(pop); full pseudo-code given in the original figure).

Figure 6: The network found for the a^n b^m c^(n+m) task for the larger training set. See Theorem 4.4 for a description of how this network accepts only sequences of the language a^n b^m c^(n+m).

Figure 7: The network found by the MDL learner for the Dyck-1 task for the larger training set. See Theorem 4.5 for a description of how it accepts only valid Dyck-1 sequences.

Table 5: Unit values and output probabilities during the two possible phases of a Dyck-1 sequence: (i) the number of open brackets o is positive, or (ii) all brackets are well balanced (o = 0).

Figure 9: The network found by the MDL learner for the binary addition task, trained on all 400 pairs of numbers up to 20. This network is correct for all numbers (Theorem 4.6).

Table 1: Performance of the networks found by the MDL model compared with classical RNNs for the tasks in this paper. Test accuracy indicates deterministic accuracy, the accuracy restricted to deterministic steps; Dyck-n tasks have no deterministic steps, hence here we report categorical accuracy, defined as the fraction of steps where a network assigns a probability lower than ε = 0.005 to each of the illegal symbols. When available, the last column refers to an infinite accuracy theorem for MDL networks: describing their behavior not only for a finite test set but over the relevant, infinite language.
A mutation step can potentially produce a network that contains loops in its non-recurrent connections, most commonly after a new connection is added. In the feed-forward phase, we detect loop-closing connections (using depth-first search) and ignore them. This avoids circular feeding, and at the same time creates a smoother search space, in which architectures freely evolve, even through intermediate defective networks. Stagnant loop connections which don't end up evolving into beneficial structures are eventually selected out due to the |G| term.
[4] When models are selected based on training performance (and then evaluated on the test sets), MDLRNNs outperform standard RNNs in all tasks in terms of cross-entropy and accuracy. We make the full training-based comparison available as part of the experimental material.

[5] Training-based selection yields different MDLRNN winners for three out of the seven relevant tasks when trained on the smaller data sets, and for two tasks when trained on the larger sets. However, only one of these cases, for a^n b^n c^n with the larger training set, results in a drop from 100% accuracy when selected by test to a suboptimum (97.6%), while other models remain at the same accuracy levels. MDL optimization is thus not immune to overfitting, which could occur for example due to accidental bad sampling. However, as made visible by our results, MDL training produces models that generalize well across data sets.

[6] For this task, the average test cross-entropy per character of the network trained on the larger data set goes slightly below the optimum (see Table 1); this can happen for example if the model picks up on unintentional regularities in the training set that are also present in the test set.
Acknowledgements

We wish to thank Matan Abudy, Moysh Bar-Lev, Artyom Barmazel, Marco Baroni, Adi Behar-Medrano, Maxime Cauté, Rahma Chaabouni, Emmanuel Dupoux, Nicolas Guérin, Jean-Rémy King, Yair Lakretz, Tal Linzen, Aël Quelennec, Ezer Rasin, Mathias Sablé-Meyer, Benjamin Spector; the audiences at CNRS/ENS Paris, Facebook AI Paris, NeuroSpin, Tel Aviv University, and ZAS Berlin; and Nitzan Ron for creating the figures in this paper. We also thank the action editors at TACL and three anonymous reviewers for their helpful comments. This work was granted access to the HPC/AI resources of IDRIS under the allocation 2021-A0100312378 made by GENCI.
References

Panagiotis Adamidis. 1994. Review of parallel genetic algorithms bibliography. Aristotle Univ. Thessaloniki, Thessaloniki, Greece, Tech. Rep.

Fardin Ahmadizar, Khabat Soltanian, Fardin AkhlaghianTab, and Ioannis Tsoulos. 2015. Artificial neural network development by means of a novel combination of grammatical evolution and genetic algorithm. Engineering Applications of Artificial Intelligence, 39:1-13.

P.J. Angeline, G.M. Saunders, and J.B. Pollack. 1994. An evolutionary algorithm that constructs recurrent neural networks. IEEE Transactions on Neural Networks, 5(1):54-65.

Robert C. Berwick. 1982. Locality Principles and the Acquisition of Syntactic Knowledge. Ph.D. thesis, MIT, Cambridge, MA.

Erick Cantú-Paz. 1998. A survey of parallel genetic algorithms. Calculateurs paralleles, reseaux et systems repartis, 10(2):141-171.

Gregory J. Chaitin. 1966. On the length of programs for computing finite binary sequences. Journal of the ACM, 13:547-569.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv:1406.1078 [cs, stat].

Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179-211.

Adam Gaier and David Ha. 2019. Weight agnostic neural networks. CoRR, abs/1906.04358.

Felix Gers and Jürgen Schmidhuber. 2001. LSTM recurrent networks learn simple context-free and context-sensitive languages. IEEE Transactions on Neural Networks, 12(6):1333-1340.

C. Lee Giles, Guo-Zheng Sun, Hsing-Hen Chen, Yee-Chun Lee, and Dong Chen. 1990. Higher order recurrent networks and grammatical inference. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 2, pages 380-387. Morgan-Kaufmann.

V. Scott Gordon and Darrell Whitley. 1993. Serial and parallel genetic algorithms as function optimizers. In ICGA, pages 177-183.

Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural Turing machines. arXiv:1410.5401 [cs].

Peter Grünwald. 1996. A minimum description length approach to grammar inference. In Stefan Wermter, Ellen Riloff, and Gabriele Scheler, editors, Connectionist, Statistical and Symbolic Approaches to Learning for Natural Language Processing, Springer Lecture Notes in Artificial Intelligence, pages 203-216. Springer.

Geoffrey E. Hinton and Drew Van Camp. 1993. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pages 5-13.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

John H. Holland. 1975. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Application to Biology, Control, and Artificial Intelligence. University of Michigan Press, Ann Arbor, MI, pages 439-444.

James Horning. 1969. A Study of Grammatical Inference. Ph.D. thesis, Stanford.

Henrik Jacobsson. 2005. Rule extraction from recurrent neural networks: A taxonomy and review. Neural Computation, 17(6):1223-1263.

Armand Joulin and Tomas Mikolov. 2015. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).

Andrei Nikolaevic Kolmogorov. 1965. Three approaches to the quantitative definition of information. Problems of Information Transmission (Problemy Peredachi Informatsii), 1:1-7.

Yair Lakretz, German Kruszewski, Theo Desbordes, Dieuwke Hupkes, Stanislas Dehaene, and Marco Baroni. 2019. The emergence of number and syntax units in LSTM language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 11-20, Minneapolis, Minnesota. Association for Computational Linguistics.

Joel Lehman and Kenneth O. Stanley. 2011. Abandoning objectives: Evolution through the search for novelty alone. Evolutionary Computation, 19(2):189-223.

Ming Li and Paul Vitányi. 2008. Chapter 1.4, Binary strings. In An Introduction to Kolmogorov Complexity and Its Applications, Texts in Computer Science. Springer, New York, NY.

Carl de Marcken. 1996. Unsupervised Language Acquisition. Ph.D. thesis, MIT, Cambridge, MA.

Geoffrey F. Miller, Peter M. Todd, and Shailesh U. Hegde. 1989. Designing neural networks using genetic algorithms, volume 89.

David J. Montana and Lawrence Davis. 1989. Training feedforward neural networks using genetic algorithms. In IJCAI, volume 89, pages 762-767.

Ezer Rasin, Iddo Berger, Nur Lan, Itamar Shefi, and Roni Katzir. 2021. Approaching explanatory adequacy in phonology using Minimum Description Length. Journal of Language Modelling, 9(1):17-66.

Ezer Rasin and Roni Katzir. 2016. On evaluation metrics in Optimality Theory. Linguistic Inquiry, 47(2):235-282.

Jorma Rissanen. 1978. Modeling by shortest data description. Automatica, 14:465-471.

Jürgen Schmidhuber. 1997. Discovering neural nets with low Kolmogorov complexity and high generalization capability. Neural Networks, 10(5):857-873.

Jürgen Schmidhuber. 2015. Deep learning in neural networks: An overview. Neural Networks, 61:85-117.

Ray J. Solomonoff. 1964. A formal theory of inductive inference, parts I and II. Information and Control, 7(1 & 2):1-22, 224-254.

Kenneth O. Stanley and Risto Miikkulainen. 2002. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2):99-127.

Andreas Stolcke. 1994. Bayesian Learning of Probabilistic Language Models. Ph.D. thesis, University of California at Berkeley, Berkeley, California.

Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, and Stuart M. Shieber. 2019. Memory-augmented recurrent neural networks can learn generalized Dyck languages. arXiv:1911.03329 [cs].

Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue Liu, and C. Lee Giles. 2018. An empirical evaluation of rule extraction from recurrent neural networks. Neural Computation, 30(9):2568-2591.

Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018a. Extracting automata from recurrent neural networks using queries and counterexamples. In Proceedings of the 35th International Conference on Machine Learning.

Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018b. On the practical computational power of finite precision RNNs for language recognition. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 740-745.

D. Whitley, T. Starkweather, and C. Bogart. 1990. Genetic algorithms and neural networks: optimizing connections and connectivity. Parallel Computing, 14(3):347-361.

Yuan Yang and Steven T. Piantadosi. 2022. One model for the learning of language. Proceedings of the National Academy of Sciences, 119(5).

Byoung-Tak Zhang and Heinz Mühlenbein. 1993. Evolving optimal neural networks using genetic algorithms with Occam's Razor. Complex Systems, 7(3):199-220.

Byoung-Tak Zhang and Heinz Mühlenbein. 1995. Balancing accuracy and parsimony in genetic programming. Evolutionary Computation, 3(1):17-38.
| [] |
[
"Use of Formal Ethical Reviews in NLP Literature: Historical Trends and Current Practices",
"Use of Formal Ethical Reviews in NLP Literature: Historical Trends and Current Practices"
] | [
"Sebastin Santy \nMicrosoft Research\nBangaloreIndia\n",
"♠ Anku \nPlaksha University\nMohaliIndia\n",
"Rani ♥ ",
"Monojit Choudhury monojitc@microsoft.com \nMicrosoft Research\nBangaloreIndia\n"
] | [
"Microsoft Research\nBangaloreIndia",
"Plaksha University\nMohaliIndia",
"Microsoft Research\nBangaloreIndia"
] | [] | Ethical aspects of research in language technologies have received much attention recently. It is a standard practice to get a study involving human subjects reviewed and approved by a professional ethics committee/board of the institution. How commonly do we see mention of ethical approvals in NLP research? What types of research or aspects of studies are usually subject to such reviews? With the rising concerns and discourse around the ethics of NLP, do we also observe a rise in formal ethical reviews of NLP studies? And, if so, would this imply that there is a heightened awareness of ethical issues that was previously lacking? We aim to address these questions by conducting a detailed quantitative and qualitative analysis of the ACL Anthology, as well as comparing the trends in our field to those of other related disciplines, such as cognitive science, machine learning, data mining, and systems. | 10.18653/v1/2021.findings-acl.414 | [
"https://arxiv.org/pdf/2106.01105v1.pdf"
] | 235,293,811 | 2106.01105 | f29d5cb8f405903fc8af7a5d7ab4bf7d65796e95 |
Use of Formal Ethical Reviews in NLP Literature: Historical Trends and Current Practices
Sebastin Santy
Microsoft Research
BangaloreIndia
♠ Anku
Plaksha University
MohaliIndia
Rani ♥
Monojit Choudhury monojitc@microsoft.com
Microsoft Research
BangaloreIndia
Use of Formal Ethical Reviews in NLP Literature: Historical Trends and Current Practices
Ethical aspects of research in language technologies have received much attention recently. It is a standard practice to get a study involving human subjects reviewed and approved by a professional ethics committee/board of the institution. How commonly do we see mention of ethical approvals in NLP research? What types of research or aspects of studies are usually subject to such reviews? With the rising concerns and discourse around the ethics of NLP, do we also observe a rise in formal ethical reviews of NLP studies? And, if so, would this imply that there is a heightened awareness of ethical issues that was previously lacking? We aim to address these questions by conducting a detailed quantitative and qualitative analysis of the ACL Anthology, as well as comparing the trends in our field to those of other related disciplines, such as cognitive science, machine learning, data mining, and systems.
Introduction
With the rapid advances in the field of Natural Language Processing (NLP), language technologies are getting woven into the daily fabric of our lives and society. Since "language is a portal of emotions, a proxy of human behavior, and a strong signal of individual characteristics" (Hovy and Spruit, 2016), large-scale deployment of language technology has potential risks that require early detection and mitigation. Naturally, there have been several discussions about the potential harms and ethical issues concerning NLP (Hovy and Spruit, 2016; Conway and O'Connor, 2016). They have mostly revolved around building or deploying systems in sensitive areas such as hate speech (Sap et al., 2019), social media (Benton et al., 2017), clinical NLP and mental health (Šuster et al., 2017; Mikal et al., 2016) and use of sensitive or personal information (Larson, 2017). While building NLP systems, there are also ethical risks associated with the involvement of human subjects through user studies or data collection activities (Shmueli et al., 2021).

Figure 1: Papers mentioning X terms (%), where X is IRB-related terms or *ethic* terms.
The awareness of the dangers of existing and new NLP applications has led to the curation of several ethical guidelines and frameworks. Undergirded by lessons from the past, these guidelines and frameworks help researchers consider and contextualize critical ethical concerns. Most of the ethical issues in NLP are rooted in the data being used for research. Couillault et al. (2014) is one of the first works to explore the ethics of data collection and evaluation in NLP. Several other works have proposed best practices for dealing with ethical implications of NLP research and deployment (Prabhumoye et al., 2019; Leidner and Plachouras, 2017; Bender and Friedman, 2018; Schnoebelen, 2017). There is now an increased awareness around this topic with a number of workshops and tutorials on ethics at NLP conferences (Tsvetkov et al., 2018; Hovy et al., 2017; Alfano et al., 2018). Such discussions have resulted in a number of reforms at NLP conferences. NLP conferences now have a new track called Ethics in NLP. Furthermore, several ML and NLP conferences such as NeurIPS 2020, NAACL 2021 and ACL 2021[1] now recommend the inclusion of a broader impact statement in their papers, which allows authors to introspect and be mindful of the ethical implications their research poses.
Although an NLP researcher might be individually committed to ethical research practices, they may not have the expertise in ethical and legal issues to gauge the potential risks of a technology or a dataset that they are building. The practice of getting the research/study approved by an ethical review board (aka Institutional Review Board or IRB)[2] instituted by the organization is thus critical for defusing potential harms early. The two primary functions of an IRB are (i) to protect the rights and welfare of human research subjects, and (ii) to support and facilitate the conduct of valuable research (Bankert and Amdur, 2006; Klitzman, 2012; Byerly, 2009). Traditionally, IRB has been a longstanding norm in biomedical research due to its overt exposure to human subjects. However, with computing research pervading human lives, IRBs have started covering computing research as well (Buchanan and Ess, 2009; Vitak et al., 2017). With regard to NLP, most of the data collection and annotation processes as well as user studies come under the purview of these boards. These are particularly necessary if they cover sensitive topics such as mental health or hate speech which can affect the human subjects involved in data collection or the users of the system.
How frequently do NLP researchers take IRB approvals for their studies? What aspects of NLP research or which topics of study are typically considered for IRB approvals? What are the historical and current trends, and what can we say about the awareness of the NLP research community around ethical issues? In this paper, we try to answer these questions through a quantitative and qualitative analysis of papers from the ACL anthology that seek IRB approvals. According to our findings, IRB approvals were almost non-existent in the NLP literature until 2006, but there has been a steady increase since 2016. We also study the distribution of IRB approvals by country and industry/academia affiliation, as well as compare the recent trends in NLP conferences to that of various prominent con-ferences ranging from machine learning and data mining to human-computer interaction and systems. One of the key findings of this study is that IRB permission was mostly sought for either data collection or annotation studies, but hardly ever for data re-purposing or system design/deployment -a void that we think the NLP community should be conscious about.
Method
To determine the trends of IRB approvals in NLP research, we resort to searching for IRB- and ethics-related terms in research papers. We obtain the papers (PDFs) for major NLP conferences, journals and workshops [ACL, COLING, EACL, EMNLP, LREC, NAACL, CL, TACL, and WS] from the ACL Anthology (curated by Joshi et al. (2020)). For a comparative analysis, we also collect papers from other related conferences [CogSci, InterSpeech, NeurIPS, CVPR, ICWSM, CHI, COMPASS] for the years 2019 and 2020, during which there was considerably more discussion around the ethics of computing research.
In order to retrieve papers that seek IRB approvals, we search for the following keywords which cover the phrases used for IRB in countries that are frequently represented at these conferences: review board, ethics board, ethics panel, ethics committee, consent form, and IRB.[3] To further compare and calibrate, we also search for papers that contain the wildcard string *ethic*, which brings up a broader set of papers that may discuss ethical repercussions of their work, even if any approval is not explicitly sought or mentioned. To assist with a robust search over this textual data, we use the Allen AI SPIKE interface[4] (Taub Tabib et al., 2020; Shlain et al., 2020) and use pdfgrep[5] to cross-check our results.
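As a rough illustration (ours, not the authors' pipeline), the keyword matching over extracted paper text could look as follows; paper_texts is an assumed list of full texts obtained from the PDFs.

import re

IRB_TERMS = ["review board", "ethics board", "ethics panel", "ethics committee", "consent form", "IRB"]
irb_pattern = re.compile(r"\b(" + "|".join(re.escape(t) for t in IRB_TERMS) + r")\b", re.IGNORECASE)
ethic_pattern = re.compile(r"ethic", re.IGNORECASE)     # approximates the *ethic* wildcard

def mentions(pattern, text):
    return bool(pattern.search(text))

def share_of_papers(paper_texts, pattern):
    # percentage of papers containing at least one match
    return 100.0 * sum(mentions(pattern, t) for t in paper_texts) / len(paper_texts)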
IRB-related term search yielded 210 papers from the ACL anthology (till 2019), which were then manually checked for precision and annotated for aspects (see Figure 3) and topics (e.g., hate speech, social media, mental health, etc.) of the research for which IRB permission was sought. Through our manual curation, we found that 94.17% of these papers actually took the approval for their research study thus showing that our search is precise in capturing the terms.
The remaining papers were mostly ethical frameworks and recommendations (e.g., Hovy and Spruit (2016); Bender and Friedman (2018)) which merely mentioned the need for seeking IRB approvals in NLP research.
3 Findings

3.1 How many papers seek IRB approvals?
Figure 2 shows the percentage (%) of papers in each NLP conference iteration that mention IRB-related terms. It is immediately obvious that for almost all the conferences only a minuscule number of papers mention IRB approvals. However, it is heartening to see that the number of mentions is increasing in recent years. LREC and WS particularly stand out among the other conferences for having at least some mentions of IRB approvals in every iteration. For LREC, it is understandable, since the theme of the conference revolves around data and resource generation. In the case of WS, IRB mentions are consistently increasing over the years. We observed that this is mostly due to the diverse nature of workshops, some of which are on resource generation (Popescu-Belis et al., 2019) or cover sensitive topics (Niederhoffer et al., 2019; Yannakoudakis et al., 2019). Journals such as CL and TACL have very few papers in each iteration, so even one IRB mention appears to be a lot.
Figure 3: Different aspects of research for which IRB approvals were sought in the papers that we manually analyzed.
We manually go through each of the 58 NLP papers (which excludes WS) to derive the aspects and understand the context in which IRB approvals are sought and build a taxonomy of the broad topics covered (Figure 3). We see that most of them (24 papers; 41.3%) take IRB approvals for collection of data which can often involve human subjects directly. It is followed by the annotation of data with 20 papers (34%) taking IRB approvals. A meager 7 papers (12%) in our set take IRB approvals for scraping data which is the automatic collection of data from web pages or social media posts without explicit consent from the users. We see that only one paper takes IRB approval for re-purposing and further annotation of data (Rogers et al., 2018). One of the core concerns of GDPR is the usage of personal data collected by media platforms for a purpose different than what the user consented to and hence such re-purposing of data should ideally undergo IRB approvals. 12 papers (20%) take approvals for conducting user studies of both qualitative (survey, interview) and quantitative nature (semantic edits). Interestingly, we see that only one paper has taken IRB approval for the whole system owing to its sensitive nature (Cao et al., 2018).
We also look at the nature (or topics) of the work for which the IRB approvals are taken. We observe that 48.4% of papers that mention IRB have sensitive topics (such as mental health, hate speech, clinical/medical NLP), 20.3% of the papers are for collection of eye movement, EEG and audio/video recordings of human subjects, and the rest of them are for generic data collection or user studies. To further understand the trends, we look at certain tracks of ACL 2020 which deal with sensitive topics or [...].

Table 1 shows the distribution of papers which mention IRB approvals along two dimensions: countries and types of institutions. As can be observed, most of the listed countries are WEIRD[7] societies. When it comes to the type of institution, we find that universities account for the vast majority of papers seeking IRB approvals, followed by joint collaborations. This trend can be counter-intuitive as an industry is more likely to be regulated and accountable for the ethical and legal concerns of their work. One possibility is that industries perhaps do not engage in external data collection/annotation work or conduct user studies as much as academic institutions do. Alternatively, it is possible that the data collection/annotation process is a completely independent pipeline that is not specific to the research paper in which it is used and thus is not reported.
How do the IRB trends in NLP research compare with those in related fields?
We look at the following conferences for our analysis: CogSci in cognitive science, InterSpeech in speech processing, NeurIPS in machine learning, CVPR in computer vision, ICWSM in social media mining, CHI in human-computer interaction, and COMPASS in computing systems deployment. We specifically analyze the 2019 and 2020 iterations of these conferences, as significant changes were made during this period in terms of reporting the ethical ramifications of research. Figure 4 shows the % of papers mentioning *ethic* and IRB-related terms for each conference iteration. We count *ethic* mentions to gauge how aware and concerned each field/conference is about the ethical implications of the research it conducts. It is not surprising that IRB mentions for CHI are so high (∼ 35%) given that more than 65% of CHI papers include at least one user study (Koeman, 2020). ICWSM works with datasets and systems related to web and social media analytics and hence would need to undergo IRB approvals; this is apparent in the relatively high number of IRB mentions in both 2019 and 2020. Unlike the other conferences we chose for our analysis, CogSci is a non-computing conference. Linguistics work, which often makes use of human subjects, is frequently found in CogSci; we observe that it has the most consistent representation of both *ethic* and IRB-related term mentions across the years. As previously discussed, one of our concerns is that sensitive systems seldom take IRB approvals. On the contrary, we notice that COMPASS, a conference largely focused on deploying computing systems, frequently reports IRB approvals. InterSpeech and CVPR have significantly fewer papers with IRB mentions (< 0.35% and < 2.5%, respectively) and the trends have hardly changed over the years, despite the fact that they conduct research with speech, multimodal, and vision data that may have been collected from human subjects. Among these venues, there is a ray of hope for ACL, which shows a significant positive trend in both *ethic* and IRB mentions without any external reinforcement, thanks to the increasing awareness in the field. NeurIPS, on the other hand, has seen a meteoric rise in its *ethic* mentions, which, on manual inspection, turns out to be due to its mandatory broader impact statements. There has also been a slight increase in its IRB mentions, which could be attributed to the same policy, indicating that broader impact statements might help researchers be more cautious when proposing their research to the larger community. This quantitative testimony from NeurIPS suggests that ACL and other *CL conferences are moving in the right direction with their inclusion of stringent ethics reviews for their papers.
Way Forward
In this paper, we conduct a survey of IRB approvals in NLP research. The two key observations we make are as follows. First, very few papers (< 0.8% of all papers published) since 2006 have sought IRB approval, though we do observe a rise in numbers (< 1.3% of all papers published) since 2016. This is much smaller than the numbers we observe for other conferences such as CogSci, CHI, ICWSM, or COMPASS. Second, the majority of the IRB approvals were obtained for data collection or annotation that directly involved users, with only a few studies seeking approvals for data scraping or re-purposing. Such approvals are even scarcer for sensitive systems, where we seldom see any paper taking IRB approval solely for the system. The number of papers creating new datasets is expected to be greater than 1% of all NLP papers 8, and the number of papers that re-purpose an existing dataset is expected to be even greater. Therefore, clearly not all papers creating datasets, and almost no paper re-purposing datasets, take approvals from an IRB. Re-purposing data collected from human subjects without their explicit consent on how the data will be re-used is potentially dangerous and may even have legal repercussions. Furthermore, with the exception of a couple of papers, to date there is no practice or trend of taking IRB approval for designing, developing, and deploying systems. This is in stark contrast to the practice in other related fields/conferences such as COMPASS. Much of the harm caused by a system could actually come from its design, style of training, or deployment, rather than from the underlying datasets.
We see that broader impact statements have helped conferences such as NeurIPS, which were traditionally oblivious to ethical issues (Nanayakkara et al., 2021). We believe that, in a similar way, the impact statements introduced in NAACL 9 and ACL 2021, with specific clauses for seeking IRB approval, will be highly beneficial in limiting the aforementioned potential risks by increasing researchers' awareness of the broader ethical repercussions of their research. It will be interesting to conduct a similar study a few years down the line and contrast it with the findings of the current study.
Figure 1: Percentage (%) of papers mentioning IRB-related and *ethic* terms in NLP conferences, journals, and workshops from 2006 to 2020.
Figure 2: The percentage (%) of papers in NLP conferences, journals and workshops over the past 15 years that mention IRB-related terms. The intensity of color is proportional to the % values. The boxes with a gray hatch reflect the years when that particular conference was not held.
Table 1: Distribution of % IRB-related term mentions among countries and different types of affiliations for NLP conferences (excluding LREC and WS) from 2012 to 2020. 6
1 https://2021.aclweb.org/ethics/Ethics-FAQ/
2 In this paper, we use IRB as a generic term to refer to such review boards, which are known by slightly different terms in different geographies.
3 Collectively referred to as IRB-related terms from hereon.
4 URL when accessed: https://spike.staging.apps.allenai.org/datasets/acl/search
5 Command-line tool: https://pdfgrep.org/
6 Country and affiliation data obtained from https://github.com/marekrei/ml_nlp_paper_data/
7 Western, Educated, Industrialized, Rich and Democratic
8 As a crude statistic, consider the fact that the number of accepted papers in the Resource and Evaluation track in ACL 2020 was 5.4%, whereas only 1.7% of all the papers in that year sought IRB approval.
9 Report: https://2021.naacl.org/blog/ethics-review-process-report-back/
Acknowledgements
We thank Prof. Emily M. Bender (UW) for providing useful insights and inspiration at the early stages of this work. We also thank Dr. Mary L. Gray and Ms. Anu Kannepalli (Microsoft) for their many insightful inputs, Prof. Yoav Goldberg (Allen AI) for setting up the SPIKE instance for us, and Prof. Katharina Reinecke (UW) and Dr. Prasanta Bhattacharya (A*STAR) for the broad discussions around IRBs. We also thank anonymous reviewers from ACL and the NLP4PI workshop for their thoughtful suggestions and feedback.
Mark Alfano, Dirk Hovy, Margaret Mitchell, and Michael Strube, editors. 2018. Proceedings of the Second ACL Workshop on Ethics in Natural Language Processing. Association for Computational Linguistics, New Orleans, Louisiana, USA.
Elizabeth A. Bankert and Robert J. Amdur. 2006. Institutional Review Board: Management and Function. Jones & Bartlett Learning.
Emily M. Bender and Batya Friedman. 2018. Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science. Transactions of the Association for Computational Linguistics, 6:587-604.
Adrian Benton, Glen Coppersmith, and Mark Dredze. 2017. Ethical Research Protocols for Social Media Health Research. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 94-102, Valencia, Spain. Association for Computational Linguistics.
Elizabeth A. Buchanan and Charles M. Ess. 2009. Internet Research Ethics and the Institutional Review Board: Current Practices and Issues. SIGCAS Comput. Soc., 39(3):43-49.
Wesley G. Byerly. 2009. Working with the institutional review board. American Journal of Health-System Pharmacy, 66(2):176-184.
Xuân-Nga Cao, Cyrille Dakhlia, Patricia Del Carmen, Mohamed-Amine Jaouani, Malik Ould-Arbi, and Emmanuel Dupoux. 2018. BabyCloud, a Technological Platform for Parents and Researchers. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Mike Conway and Daniel O'Connor. 2016. Social media, big data, and mental health: current advances and ethical implications. Current Opinion in Psychology, 9:77-82.
Alain Couillault, Karën Fort, Gilles Adda, and Hugues de Mazancourt. 2014. Evaluating corpora documentation with regards to the ethics and big data charter. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 4225-4229, Reykjavik, Iceland. European Language Resources Association (ELRA).
Dirk Hovy, Shannon Spruit, Margaret Mitchell, Emily M. Bender, Michael Strube, and Hanna Wallach, editors. 2017. Proceedings of the First ACL Workshop on Ethics in Natural Language Processing. Association for Computational Linguistics, Valencia, Spain.
Dirk Hovy and Shannon L. Spruit. 2016. The Social Impact of Natural Language Processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 591-598, Berlin, Germany. Association for Computational Linguistics.
Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The State and Fate of Linguistic Diversity and Inclusion in the NLP World. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computational Linguistics.
Robert Klitzman. 2012. Institutional Review Board Community Members: who are they, what do they do, and whom do they represent? Academic Medicine, 87(7):975-981.
Lisa Koeman. 2020. HCI/UX research: what methods do we use?
Brian Larson. 2017. Gender as a Variable in Natural-Language Processing: Ethical Considerations. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 1-11, Valencia, Spain. Association for Computational Linguistics.
Jochen L. Leidner and Vassilis Plachouras. 2017. Ethical by Design: Ethics Best Practices for Natural Language Processing. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 30-40, Valencia, Spain. Association for Computational Linguistics.
Jude Mikal, Samantha Hurst, and Mike Conway. 2016. Ethical issues in using twitter for population-level depression monitoring: a qualitative study. BMC Medical Ethics, 17(1).
Priyanka Nanayakkara, Jessica Hullman, and Nicholas Diakopoulos. 2021. Unpacking the Expressed Consequences of AI Research in Broader Impact Statements. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, AIES '21, New York, NY, USA. Association for Computing Machinery.
Kate Niederhoffer, Kristy Hollingshead, Philip Resnik, Rebecca Resnik, and Kate Loveys, editors. 2019. Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology. Association for Computational Linguistics, Minneapolis, Minnesota.
Andrei Popescu-Belis, Sharid Loáiciga, Christian Hardmeier, and Deyi Xiong, editors. 2019. Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019). Association for Computational Linguistics, Hong Kong, China.
Shrimai Prabhumoye, Elijah Mayfield, and Alan W Black. 2019. Principled Frameworks for Evaluating Ethics in NLP Systems. In Proceedings of the 2019 Workshop on Widening NLP, pages 118-121, Florence, Italy. Association for Computational Linguistics.
Anna Rogers, Alexey Romanov, Anna Rumshisky, Svitlana Volkova, Mikhail Gronas, and Alex Gribov. 2018. RuSentiment: An enriched sentiment analysis dataset for social media in Russian. In Proceedings of the 27th International Conference on Computational Linguistics, pages 755-763, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The Risk of Racial Bias in Hate Speech Detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668-1678, Florence, Italy. Association for Computational Linguistics.
Tyler Schnoebelen. 2017. Goal-Oriented Design for Ethical Machine Learning and NLP. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 88-93, Valencia, Spain. Association for Computational Linguistics.
Micah Shlain, Hillel Taub-Tabib, Shoval Sadde, and Yoav Goldberg. 2020. Syntactic search by example. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 17-23, Online. Association for Computational Linguistics.
Boaz Shmueli, Jan Fell, Soumya Ray, and Lun-Wei Ku. 2021. Beyond Fair Pay: Ethical Implications of NLP Crowdsourcing. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3758-3769, Online. Association for Computational Linguistics.
Simon Šuster, Stéphan Tulkens, and Walter Daelemans. 2017. A Short Review of Ethical Challenges in Clinical Natural Language Processing. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 80-87, Valencia, Spain. Association for Computational Linguistics.
Hillel Taub-Tabib, Micah Shlain, Shoval Sadde, Dan Lahav, Matan Eyal, Yaara Cohen, and Yoav Goldberg. 2020. Interactive extractive search over biomedical corpora. In Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing, pages 28-37, Online. Association for Computational Linguistics.
Yulia Tsvetkov, Vinodkumar Prabhakaran, and Rob Voigt. 2018. Socially Responsible NLP. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorial Abstracts, pages 24-26, New Orleans, Louisiana. Association for Computational Linguistics.
Jessica Vitak, Nicholas Proferes, Katie Shilton, and Zahra Ashktorab. 2017. Ethics regulation in social computing research: Examining the role of institutional review boards. Journal of Empirical Research on Human Research Ethics, 12(5):372-382.
Helen Yannakoudakis, Ekaterina Kochmar, Claudia Leacock, Nitin Madnani, Ildikó Pilán, and Torsten Zesch, editors. 2019. Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications. Association for Computational Linguistics, Florence, Italy.
| [] |
[
"PAPER A PRELIMINARY STUDY FOR BUILDING AN ARABIC CORPUS OF PAIR QUESTIONS-TEXTS FROM THE WEB: AQA-WEBCORP A Preliminary Study for Building an Arabic Corpus of Pair Questions-Texts from the Web: AQA-Webcorp",
"PAPER A PRELIMINARY STUDY FOR BUILDING AN ARABIC CORPUS OF PAIR QUESTIONS-TEXTS FROM THE WEB: AQA-WEBCORP A Preliminary Study for Building an Arabic Corpus of Pair Questions-Texts from the Web: AQA-Webcorp"
] | [
"Wided Bakari \nFaculty of Economics and Management\nSfaxTunisia\n",
"Patrice Bellot \nAix-Marseille University\nMarseilleFrance\n",
"Mahmoud Neji \nFaculty of Economics and Management\nSfaxTunisia\n"
] | [
"Faculty of Economics and Management\nSfaxTunisia",
"Aix-Marseille University\nMarseilleFrance",
"Faculty of Economics and Management\nSfaxTunisia"
] | [] | With the development of electronic media and the heterogeneity of Arabic data on the Web, the idea of building a clean corpus for certain applications of natural language processing, including machine translation, information retrieval, question answer, become more and more pressing. In this manuscript, we seek to create and develop our own corpus of pair's questions-texts. This constitution then will provide a better base for our experimentation step. Thus, we try to model this constitution by a method for Arabic insofar as it recovers texts from the web that could prove to be answers to our factual questions. To do this, we had to develop a java script that can extract from a given query a list of html pages. Then clean these pages to the extent of having a data base of texts and a corpus of pair's question-texts. In addition, we give preliminary results of our proposal method. Some investigations for the construction of Arabic corpus are also presented in this document. | 10.3991/ijes.v4i2.5345 | [
"https://arxiv.org/pdf/1709.09404v1.pdf"
] | 28,031,850 | 1709.09404 | 22c2570179d4dba37306c82d49a05fb2fe211533 |
A Preliminary Study for Building an Arabic Corpus of Pair Questions-Texts from the Web: AQA-Webcorp
Wided Bakari
Faculty of Economics and Management
SfaxTunisia
Patrice Bellot
Aix-Marseille University
MarseilleFrance
Mahmoud Neji
Faculty of Economics and Management
SfaxTunisia
DOI: 10.3991/ijes.v4i2.5345. Index Terms: Arabic, Web corpus, search engine, URL, question, corpus building, script, Google, html, txt
With the development of electronic media and the heterogeneity of Arabic data on the Web, the idea of building a clean corpus for certain applications of natural language processing, including machine translation, information retrieval, and question answering, becomes more and more pressing. In this manuscript, we seek to create and develop our own corpus of question-text pairs. This corpus will then provide a better basis for our experimentation step. Thus, we model its construction with a method for Arabic that recovers texts from the web which could prove to be answers to our factual questions. To do this, we developed a Java script that can extract, from a given query, a list of HTML pages. We then clean these pages so as to obtain a database of texts and a corpus of question-text pairs. In addition, we give preliminary results of our proposed method. Some investigations into the construction of Arabic corpora are also presented in this document.
I. INTRODUCTION
The lack, or outright absence, of corpora in Arabic has been a problem for the implementation of natural language processing applications. This is of special interest for question answering.
Corpus construction (corpus being the singular form of corpora 1) is a task that is both essential and delicate. It is complex because it depends in large part on a significant number of resources to be exploited. In addition, corpus construction is generally needed for many NLP applications, including machine translation, information retrieval, question answering, etc. Several attempts have succeeded in building such corpora. According to [Sinclair, 2005], a corpus is a collection of pieces of text in electronic form, selected according to external criteria so as to represent, as far as possible, a language, and used as a data source for linguistic research. A definition that is both specific and generic, according to [Rastier, 2005], is that a corpus is the result of choices made by the linguist. A corpus is not a simple object; it should not be a mere collection of phrases or a "bag of words". It is in fact an assembly of texts that can cover many types of text.
With the development of the internet and its services, the web has become a great source of documents in different languages and different areas. This source, combined with current storage media, allows the rapid construction of a corpus [Meftouh et al., 2007]. Using the Web as a basis for assembling textual data is a fairly recent practice, and recent years have seen a take-off of work attempting to exploit this type of data. From the perspective of machine translation, [Resnik et al., 1998] studied the possibility of using websites offering information in multiple languages to build a bilingual parallel corpus.
Arabic is also an international language, rivaling English in number of native speakers. However, little attention has been devoted to Arabic. Although a number of investigations and efforts have been invested in Arabic corpus construction, especially in Europe, progress in this area is still limited. In our research, we build our corpus of texts by querying the Google search engine. Our aim is, given a question, to analyze the retrieved texts in order to answer that question.
This paper is organized as follows: Section II discusses the use of the web as a corpus source; Section III outlines earlier work on Arabic corpus construction; Section IV summarizes the challenges of the Arabic language; Section V presents our proposed approach to build a corpus of question-text pairs; Section VI describes an empirical assessment of this approach; Section VII concludes the article with perspectives for future work.
II. USING THE WEB AS A SOURCE OF CORPUS
Although the web is full of documents, finding information for building a corpus in Arabic is still a tedious task. Nowadays, the web has a very important role in the search for information. It is an immense source, free and available: with a simple click of the mouse, a colossal quantity of text can be recovered freely [Gatto, 2011]. It contains billions of words of text that can be used for any kind of linguistic research [Kilgarriff & Grefenstette, 2001], and it is considered the greatest knowledge resource. However, current search engines cannot find extracts containing effective answers to a user question, which sometimes makes it difficult to get accurate information; sometimes they cannot even extract the correct answers. The web is also an infinite source of resources (textual, graphical, and sound). These resources allow the rapid constitution of corpora, but this constitution is not easy and raises a number of questions [Isaac, 2001]. The construction of a corpus of texts from the web is not a simple task. Such construction has contributed to developing and improving several linguistic tools such as question-answering systems, information extraction systems, machine translation systems, etc.
In fact, one of the most interesting intrinsic characteristics of the Web is its multilingualism. As for the current distribution of languages used on the Web, recent estimates of the top ten languages (30 June 2015 2) report that English and Chinese are the most used languages, followed by Spanish. Arabic is fourth in number of Internet users by language, followed by other major languages such as Portuguese, Japanese, Russian, Malay, French, and German.
Our intent is threefold. First, the empirical evaluation must be based on a relevant corpus. Second, we seek to analyze the corpus we build. Finally, the main purpose of our research concerns the search for an accurate answer to a question posed in natural language. That is why we consider the Web as a potential source for building our corpus.
III. LITERATURE REVIEW: ARABIC CORPUS CONSTRUCTION FROM THE WEB
A corpus is a resource that can be very important and useful in advancing various language applications such as information retrieval, speech recognition, machine translation, question answering, etc. This resource has gained much attention in NLP. The task of building a corpus of textual resources from the web is somewhat recent, and in Arabic the attempts to exploit this type of data are limited. Although there has been some effort in Europe, which led to the successful production of some Arabic corpora, progress in this field is still limited. According to [Atwell et al., 2004], progress has been hampered by the lack of effective corpus analysis tools, such as taggers, stemmers, machine-readable dictionaries, corpus viewing tools, etc., that are required to build and enrich a corpus as a research tool.
Many question-answering systems rely on document corpus construction; while publicly accessible corpora exist for English and other languages, very few are available for Arabic. This language has not received the attention it deserves in this area. In this regard, many researchers have emphasized the importance of corpora and the need to work on their construction. For his part, [Mansour, 2013] showed that the contribution of a corpus to linguistic research is huge in many ways. A corpus provides empirical data that enables objective rather than subjective linguistic statements, and it helps the researcher avoid linguistic generalizations based on an internalized cognitive perception of language. With a corpus, qualitative and quantitative linguistic research can be done in seconds, which saves time and effort. Finally, the analysis of empirical data can help researchers not only to pursue new linguistic research but also to test existing theories.
In this section, we present the most significant studies on English corpus construction and previous attempts to build corpora in Arabic. In addition, we also cover studies that address corpus construction for question answering. There are now numerous studies that use the Web as a source of linguistic data; here, we review a few studies for the Arabic language that, as in our case, use search engine queries to build a corpus.
Among the most recognized corpus construction projects, we cite, for example, the work of Resnik, which studies the possibility of using websites that offer information in multiple languages to build bilingual parallel corpora [Resnik, 1998].
Ghani and his associates [Ghani et al., 2001] performed a study on building corpora of minority languages from the web by automatically querying search engines.
In order to study the behavior of predicate nouns that express location and movement, Isaac and colleagues [Isaac et al., 2001] developed software for the creation of a corpus of sentences, and measured whether introducing prepositions into information retrieval queries can improve accuracy.
Furthermore, the work of [Baroni & Bernardini, 2004] introduced the BootCaT toolkit, a set of tools that allows iterative corpus construction by automatically querying Google and extracting terminology. Although it is devoted to the development of specialized corpora, this tool was used by [Ueyama & Baroni, 2005] and [Sharoff, 2006] for general corpus construction.
Similarly, the work of [Meftouh et al., 2007] describes a tool for building a corpus for Arabic. The corpus is collected automatically from a list of sites dedicated to the Arabic language; the content of these sites is then extracted and normalized. Their corpus is particularly dedicated to computing statistical language models.
In another approach, Elghamry proposed an experiment on acquiring from the web a hypernym-hyponym lexicon with a partial hierarchical structure for Arabic, using a lexico-syntactic pattern of the form "certain x such as y1, ..., yn" [Elghamry, 2008].
From the perspective of automatic summarization, [Maâloul et al., 2010] studied the possibility of extracting Arabic texts from the website "Alhayat" by selecting newspaper articles in HTML with UTF-8 encoding, in order to locate the rhetorical relations between the minimal units of the text using rhetorical rules. In addition, [Ghoul, 2014] provides a grammatically annotated corpus of Arabic textual data from the Web, called the Web Arabic corpus. The authors note that they apply TreeTagger to annotate their corpus with a set of labels. This corpus consists of 207,356 sentences and 7,653,881 words distributed over four domains: culture, economy, religion, and sports.
Finally, in [Arts et al., 2014] the authors present arTenTen, an Arabic corpus crawled from the web and a member of the TenTen corpus family [Jakubíček et al., 2013]. arTenTen consists of 5.8 billion words. Using the jusText and Onion tools, the corpus has been thoroughly cleaned, including the removal of duplicates [Pomikalek, 2011]. The authors use MADA 3.2 for the tokenization, lemmatization, and part-of-speech tagging of arTenTen ([Habash & Rambow, 2005]; [Habash et al., 2009]). This corpus is compared to two other Arabic corpora: Gigaword [Graff, 2003] and a web-crawled corpus [Sharoff, 2006].
Documents on the web have also been exploited by other approaches and other researchers ([Volk, 2001], [Volk, 2002], [Keller & Lapata, 2003], [Villasenor-Pineda et al., 2003]) to address the problem of the lack of data in statistical language modeling [Kilgarriff & Grefenstette, 2003]. In [Elghamry et al., 2007], such data were used to resolve anaphora in free Arabic texts; the authors construct a dynamic statistical algorithm that uses as few features and as little human intervention as possible to overcome the lack of sufficient resources in NLP. [Sellami et al., 2013], for their part, work on the online encyclopedia Wikipedia to retrieve comparable articles.
In a merge of search engine and language processing technologies, the web has also been used by groups at Sheffield and Microsoft, among others, as a source of answers for question-answering applications ([Mark et al., 2002], [Susan et al., 2002]). AnswerBus [Zhiping, 2002] allows answering questions in English, German, French, Spanish, Italian, and Portuguese. Although text corpus building efforts have focused on English, Arabic corpora can also be acquired from the Web, which is a very large data source. Such attempts might concern any NLP application, but we also note significant efforts specifically for question answering. In this regard, to the best of our knowledge, the number of corpora dedicated to Arabic question answering is somewhat limited. Among the studies dedicated to this field, we cite [Trigui et al., 2010], who built a corpus of definition questions dealing with the definitions of organizations. They use a series of 50 organization definition questions and experiment with their system using 2000 extracts returned by the Google search engine and the Arabic version of Wikipedia.
We conclude that web texts are liable to show up in many applications of automatic Arabic language processing. There are, at present, a number of studies on the construction of text corpora for various applications, including named entity recognition, plagiarism detection, parallel corpora, anaphora resolution, etc. The efforts to build a corpus for each application are significant, and they could extend to all NLP applications.
IV. THE CHALLENGES OF THE ARABIC LANGUAGE
Although Arabic is within the top ten languages on the internet, it lacks many tools and resources. Arabic does not have capital letters, in contrast to most Latin-script languages, which makes natural language processing tasks such as named entity recognition more difficult. Unfortunately, very little attention has been given to Arabic corpora, lexicons, and machine-readable dictionaries [Hammo et al., 2002]. In their work, [Bekhti & Al-harbi, 2011] suggest that the Arabic question-answering systems developed so far are still few compared to those developed for English or French, for instance. This is mainly due to two reasons: the lack of accessibility to linguistic resources and tools, such as corpora and basic Arabic NLP tools, and the very complex nature of the language itself (for instance, Arabic is inflectional and non-concatenative, and there is no capitalization as in the case of English). For their part, [Abdelnasser et al., 2014] illustrate some difficulties of Arabic. The language is highly inflectional and derivational, which makes its morphological analysis a complex task: derivational, in that Arabic words are built from three- or four-character roots; inflectional, in that each word consists of a root and zero or more affixes (prefix, infix, suffix). Arabic is also characterized by diacritical marks (short vowels): the same word with different diacritics can express different meanings, and diacritics are usually omitted, which causes ambiguity. The absence of capital letters in Arabic is an obstacle to accurate named entity recognition. Finally, in their survey, [Ezzeldin & Shaheen, 2012] emphasize that, as for any other language, Arabic natural language processing needs language resources such as lexicons, corpora, treebanks, and ontologies, which are essential for syntactic and semantic tasks, whether used with machine learning or for lookup and validation of processed words.
V. PRESENTATION OF THE AQA-WEBCORP CORPUS: ARABIC QUESTION ANSWERING WEB CORPUS
In the rest of this article, we present in detail our proposal to build a corpus of question-text pairs from the web, as well as our preliminary empirical study. In our case, the size of the corpus obtained depends mostly on the number of questions asked and the number of documents selected for each question.
In the context of their construction of a corpus of texts from the web, the researchers in [Issac et al., 2001] point out that there are two ways to retrieve information from the web for building a corpus. The first is to gather the data located on known sites [Resnik, 1998]; this amounts to running a web crawler, which ensures the recovery of the pages reachable from a given address. The second method queries a search engine to select addresses from one or more queries (whose complexity depends on the engine), and then recovers, manually or automatically, the corresponding pages from these addresses.
In our work we follow the second method. From a list of questions posed in natural language, we recover the list of corresponding URLs. Then, from these URLs, we recover the related web pages. Finally, we clean these pages so as to produce the lists of texts that form the foundation of our corpus. After building our corpus of question-text pairs, we do not leave it in that state: a stage of analysis and processing will be carried out later to achieve our main objective, which is the extraction of an adequate and accurate answer to each question.
A. Collection of the questions
This step consists in collecting a set of questions in natural language. These questions can be asked in different fields, including sport, history & Islam, discoveries & culture, world news, and health & medicine.
Currently, our corpus consists of 115 pairs of questions and texts. The collection of these questions is carried out from multiple sources, namely discussion forums, frequently asked questions (FAQ), and some questions translated from the two evaluation campaigns TREC and CLEF (Figure 1).
The questions and the texts collected from the web will help us build an extensible corpus for Arabic question answering. The size of our corpus is on the order of 115 factual questions: 10 questions translated from TREC, 10 questions translated from CLEF, and 95 questions gathered from forums and FAQs. To build our corpus, we used Arabic texts available on the Internet, collected on the basis of the questions posed at the outset.
From the perspective of the analysis and post-processing carried out on our corpus, taking into account the form and the content of the corresponding questions, we collected factual questions of the types What, Where, When, Who, and How (Figure 2).
B. The steps of construction
Most studies in Arabic corpus construction are designed for areas other than question answering (see Figure 3). From a web search 3, we find that the number of attempts devoted to this area is very limited, which is why we decided to build our own corpus. To reach this goal we need, as an intermediate step, to interrogate a search engine. The construction of corpora for question answering is improving, and we hope that it will continue to improve in the coming years, eventually producing corpora in this field of research that can be used by researchers in their empirical studies.
Our approach is described in [Bakari et al., 2014] and [Bakari et al., 2015]. The first stage of this approach is question analysis. In this module, we collect and analyse our questions to generate a set of features for each one; essentially, these features are the list of question keywords, the focus of the question, and the expected answer type. With a real interrogation of Google, these characteristics make it possible to recover, for each given question, the extracts that contain answers to this question. The module also rewrites the question in its declarative form in order to generate, as far as possible, a logical representation for each question. The other modules of our approach, which extract the accurate answer, are strongly linked to the question analysis module: on the one hand, the keywords and the focus are used to interrogate Google and retrieve relevant passages; on the other hand, the declarative form is used to infer the logical representation; finally, the expected answer type is used to select the accurate answer.
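To make the question analysis step more concrete, the sketch below shows one possible Java representation of the features described above (keywords, focus, expected answer type). The class name, the stop-word list, and the keyword heuristic are illustrative assumptions on our part and not the actual implementation behind AQA-WebCorp; a real system would use Arabic-aware tokenization and an Arabic stop-word list.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    // Illustrative container for the features produced by the question analysis module.
    public class AnalyzedQuestion {
        public final String question;        // the question as posed in natural language
        public final List<String> keywords;  // content words used to query the search engine
        public final String focus;           // main entity or topic of the question
        public final String expectedType;    // e.g. PERSON, DATE, LOCATION, DEFINITION

        // Hypothetical stop list of interrogative/function words; a real system would use an Arabic one.
        private static final List<String> STOP_WORDS =
                Arrays.asList("who", "what", "when", "where", "how", "is", "the", "a", "an", "of");

        public AnalyzedQuestion(String question, String focus, String expectedType) {
            this.question = question;
            this.focus = focus;
            this.expectedType = expectedType;
            this.keywords = extractKeywords(question);
        }

        // Naive keyword extraction: drop punctuation and stop words, keep the remaining tokens.
        private static List<String> extractKeywords(String q) {
            List<String> result = new ArrayList<>();
            for (String token : q.replaceAll("\\p{Punct}", " ").toLowerCase().split("\\s+")) {
                if (!token.isEmpty() && !STOP_WORDS.contains(token)) {
                    result.add(token);
                }
            }
            return result;
        }
    }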
To build our corpus for Arabic, we propose a simple and robust method implemented in Java. The method is based on four relatively dependent stages, and the constitution of our corpus of Arabic question-text pairs is done by carrying out all four of them. We describe each of these steps below:
• Documents research
• Web pages recovery
• Texts preparation
• Texts classification
This methodological framework looks for web addresses corresponding to each question. We first segment the questions into lists of keywords; our tool then seeks the list of URL addresses which match those keywords. Next, for each retained address, we recover the web page that corresponds to it. In this respect, our corpus construction tool is an interface between the user request and Google: it is a way to query the Google index to retrieve a list of documents. Finally, we transform each retained web page from ".html" to ".txt" and check whether the answer is found in the corresponding text. A text is considered valid for building our corpus if it contains this answer; otherwise, we move on to the following URL.
As illustrated in Figure 4, we introduce in this section a simple method for constructing our corpus that promotes an effective interrogation of the web. This method is composed of three modules. A first module generates the list of URL addresses: for any question posed in natural language, it returns a list of corresponding URLs. Next, a processing module behaves like a generator of the corresponding web pages. A third module provides a filtering of these pages; its result is a set of texts that can be paired with the questions to build our corpus from the web.
3 http://www.qatar.cmu.edu/~wajdiz/corpora.html
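The overall workflow can be pictured with the following high-level Java skeleton. It is a sketch only: the SearchEngine interface stands in for whatever mechanism is used to interrogate Google (the paper does not commit to a specific API), the method names are ours, and the helper classes PageFetcher and HtmlToText are sketched in the next section.

    import java.util.List;

    // High-level skeleton of the corpus construction workflow: URLs -> pages -> texts -> validation.
    public class CorpusBuilder {

        // Abstraction over the web search step; it could be backed by any search API or scripted query.
        public interface SearchEngine {
            List<String> searchUrls(List<String> keywords);
        }

        private final SearchEngine engine;

        public CorpusBuilder(SearchEngine engine) {
            this.engine = engine;
        }

        // Returns the first retrieved text that contains the known answer, or null if none does.
        public String buildTextForQuestion(AnalyzedQuestion question, String knownAnswer) throws Exception {
            List<String> urls = engine.searchUrls(question.keywords);   // 1. documents research
            for (String url : urls) {
                String html = PageFetcher.fetch(url);                   // 2. web pages recovery
                String text = HtmlToText.strip(html);                   // 3. texts preparation
                if (text.contains(knownAnswer)) {                       // 4. texts classification (validation)
                    return text;
                }
            }
            return null; // no retrieved page contained the answer; the question is left without a text
        }
    }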
VI. EMPIRICAL ASSESSMENT
The first stage consists, from a question posed in natural language, in assigning to it the list of corresponding URL addresses. The document search is based on the words of the collected questions. To carry out this step we developed a Java script for interrogating Google and obtaining these results. The result of this step is a set of URL addresses: to each question, a list of URLs is assigned. In our case, to look for an answer to a question in Arabic, we propose to use a search engine (e.g., Google) to retrieve the documents related to each question, and then to apply linguistic post-processing to those documents, which actually constitute our corpus, in order to obtain an accurate and appropriate answer. In this respect, querying a search engine accelerates the online recovery of documents but requires offline processing of these documents. We think that this is the better solution; a much more complex alternative would be the implementation of a linguistic search engine for this particular purpose.
At this stage, the document search module is implemented. When a question is asked, our tool submits it to the search engine (Google) to identify the list of URLs, based on the list of words that constitute the question.
Consider the following example: from a given Arabic question, our tool generates a list of corresponding URL addresses. The default means of web access is through a search engine such as Google. By clicking the "search URLs" button, a list of addresses is automatically displayed, and for each URL the prototype can retrieve the necessary information (host, query, protocol, etc.). This module, developed in Java, implements our desired process of passage retrieval from the Web. As input, it takes 115 factual questions of five categories (Who, What, When, Where, How). For each question, its characteristic elements are recovered: the focus of the question, the expected answer type, the list of keywords, and then its declarative form. From this declarative form, a suitable logical representation is established. With a real interrogation of Google, text passages that contain the answers to these questions are chosen; for each question, approximately 9 passages are retrieved, and those passages together constitute a text. The set of texts recovered, together with the collected questions, builds our AQA-WebCorp corpus. Currently, we are interested in analyzing these texts. To do this, we propose first to segment each text into a list of sentences, and then to transform the sentences into logical forms: in our search for a specific answer, logic is present in the text analysis just as it is in the question analysis. Once the list of URLs is generated, our tool must determine, for each address, the corresponding web page, i.e., look for the HTML page behind each given URL; the following figure illustrates this case. From the addresses retained in the first step, a set of web pages is recovered, and each of these pages is exported in the ".html" format.
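For the URL bookkeeping mentioned above (host, query, protocol), the standard java.net.URL class already exposes these components. The address used below is a hypothetical example, not one of the URLs actually retained for the corpus.

    import java.net.URL;

    // Inspecting the components (protocol, host, query) of a URL returned by the search step.
    public class UrlInfo {
        public static void main(String[] args) throws Exception {
            URL url = new URL("https://ar.wikipedia.org/wiki/Example?action=view"); // hypothetical address
            System.out.println("protocol: " + url.getProtocol()); // https
            System.out.println("host: " + url.getHost());         // ar.wikipedia.org
            System.out.println("path: " + url.getPath());         // /wiki/Example
            System.out.println("query: " + url.getQuery());       // action=view
        }
    }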
The figure below shows an actual run of our prototype, following the steps of Figure 6: the "generate HTML" button retrieves a page from its URL (Figure 7).
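One straightforward way to recover the HTML page behind each retained URL is a plain HTTP GET; the sketch below uses the java.net.http client available since Java 11. It is illustrative only and omits the retries, timeouts, and character-encoding handling a production crawler would need.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Fetches the raw HTML of a page so that it can later be converted to plain text.
    public class PageFetcher {
        private static final HttpClient CLIENT = HttpClient.newHttpClient();

        public static String fetch(String url) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                throw new IllegalStateException("HTTP status " + response.statusCode() + " for " + url);
            }
            return response.body(); // the ".html" content to be cleaned in the next step
        }
    }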
The next step is to transform every web page obtained in the previous step into the ".txt" format. The texts being in ".html" format, and given that the intended application is statistical language modeling, it seems justified to convert them to ".txt". For this, we remove all HTML tags from each retrieved page. As said before, our method then looks for the answer to each question in each generated text, and either keeps the text for our corpus construction work or discards it.
When a text is generated, we must make sure that it is suitable for answering the question posed in advance before saving it. To select this text, we choose from the retained list the one that gathers the most information related to the question. When this text is found, it is saved so that we can build our own corpus. We are currently developing a corpus dedicated to Arabic question answering; the size of the corpus is on the order of 115 pairs of questions and texts, collected using the web as a data source. The questions and texts collected from the web will help us build an extensible corpus for Arabic question answering. The question-text pairs are distributed over five domains (sport, history & Islam, discoveries & culture, world news, health & medicine), as shown in Figure 10.
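A minimal tag-stripping routine of the kind described above can be written with regular expressions; we also show the answer-containment test used to decide whether a text is kept. This is a simplified sketch: real pages would benefit from a proper HTML parser and Arabic-aware normalization, which are deliberately omitted here.

    // Converts a fetched ".html" page into plain ".txt" content and checks whether it contains the answer.
    public class HtmlToText {

        public static String strip(String html) {
            return html
                    .replaceAll("(?is)<script.*?</script>", " ")  // drop embedded scripts
                    .replaceAll("(?is)<style.*?</style>", " ")    // drop style sheets
                    .replaceAll("(?s)<[^>]+>", " ")               // drop all remaining tags
                    .replaceAll("&nbsp;", " ")                    // most common HTML entity
                    .replaceAll("\\s+", " ")                      // collapse whitespace
                    .trim();
        }

        // A text is kept for the corpus only if the expected answer string occurs in it.
        public static boolean containsAnswer(String text, String answer) {
            return text.contains(answer);
        }
    }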
Through the Google search engine, we developed a prototype that builds our corpus for Arabic question answering. This corpus is on the order of 115 pairs of questions and texts. We experimented with our tool using 2875 URLs returned by the search engine Google; this is due to the fact that the number of URLs for each particular question is approximately between 23 and 26. Currently our corpus contains 115 pairs of questions and texts. The evaluation of our corpus strongly depends on the number of URLs correctly reported by Google. In this regard, we completed a small-scale intrinsic evaluation in which we extracted, for each question, the list of URLs that could be the origin of an appropriate text.
We manually evaluated the quality of our corpus of question-text pairs by calculating the number of URLs retrieved correctly (i.e., URLs whose page contains the answer) compared to the total number of URLs used for each question. Our tool achieved a precision of 0.60. In order to improve the quality of our corpus, we carried out a filtering step which eliminates all words in other languages, any kind of transliteration, etc.
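The reported precision of 0.60 corresponds to the fraction of retrieved URLs whose page actually contains the answer. Per question, it can be computed as below; the variable names and the example values (15 correct out of 25 retrieved, consistent with the 13-16 and 23-26 ranges in Table I) are ours.

    // Precision of the URL retrieval step for one question:
    // number of URLs whose page contains the answer divided by the total number of URLs returned.
    public class RetrievalPrecision {

        public static double precision(int correctUrls, int totalUrls) {
            if (totalUrls == 0) {
                return 0.0;
            }
            return (double) correctUrls / totalUrls;
        }

        public static void main(String[] args) {
            System.out.println(precision(15, 25)); // 0.6, in line with the reported precision of 0.60
        }
    }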
VII. CONCLUSION AND PERSPECTIVES
It is undeniable that Arabic corpora are important for various applications in automatic natural language processing. This paper presents our first steps in this field towards the construction of a new corpus dedicated to Arabic question answering, and it presents the first version of AQA-WebCorp (Arabic Question-Answering Web Corpus). It incorporates pairs of questions and texts from five categories (sport, history & Islam, discoveries & culture, world news, health & medicine). The first phase focuses on searching Google for documents that can answer each question, while the later phase selects the appropriate text. In addition, to improve the quality of our current corpus, we propose to solve the problems mentioned above. Our task is still unfinished; we hope to continue to advance the construction of our corpus so that it can be effectively used for various purposes. It remains to perform the post-processing necessary to prepare the corpus for the second phase, i.e., representing these textual data as logical forms that can facilitate the extraction of the correct answer.
The proposed method is effective despite its simplicity. We managed to demonstrate that the Web can be used as a data source to build our corpus; the Web is the largest existing repository of electronic documents. As prospects for this work, we plan to label this corpus and make it public and usable in order to improve the automatic processing of Arabic.
A further perspective is to extend the current work by analyzing the texts retrieved from the web using a logical formalization of their sentences. This would allow segmenting each text into a list of sentences and generating logical predicates for each sentence. In addition, the representation as predicates should facilitate the use of a semantic database and allow assessing the semantic distance between the elements of the question and those of the candidate answer using recognizing textual entailment (RTE) techniques.
Figure 1: Source of the questions used for our corpus.
Figure 2: Some examples of questions used in our corpus.
Figure 3: List of Arabic corpora per domain.
Figure 4: Construction process of our corpus AQA-WebCorp.
Figure 5: List of URLs generated for a question.
Figure 6: A URL of a web page retrieved from a question.
Figure 8: Example of a text containing an answer to a given question.
Figure 9: Saving the text.
Figure 10: Statistics of question-text pairs used in different sources.
[Mark et al., 2002], [Susan et al., 2002]). AnswerBus [Zhiping, 2002] allows answering questions in English, German, French, Spanish, Italian and Portuguese.
TABLE I. STATISTICS OF OUR CORPUS AQA-WEBCORP
Number of questions: 115
Total URLs: 2875
URLs per question: 23-26
Total texts: 115
Correct URLs per question: 13-16
McEnery (2003) pointed out that "corpuses" is perfectly acceptable as a plural form of "corpus".
www.internetworldstats.com
Submitted 08 December 2015. Published as resubmitted by the authors 23 February 2016. iJES -Volume 4, Issue 2, 2016
ACKNOWLEDGEMENTS I give my sincere thanks to my collaborators Professor Patrice BELLOT (University of Aix-Marseille, France) and Mr Mahmoud NEJI (University of Sfax, Tunisia); I have benefited greatly from working with them.
Developing linguistic corpora: A guide to good practice. J Sinclair, M. WynneOxbow BooksOxford, UKCorpus and text -basic principlesSinclair, J. (2005). Corpus and text -basic principles. In M. Wynne (Ed.), Developing linguistic corpora: A guide to good practice (pp. 1-16). Oxford, UK: Oxbow Books.
« Enjeux épistémologiques de la linguistique de corpus. F Rastier, P.U.RRennes(dir), La linguistique de corpusRastier F. (2005). « Enjeux épistémologiques de la linguistique de corpus », in : Williams C. G. (dir), La linguistique de corpus, Rennes : P.U.R
Constitution d'un corpus de la langue arabe à partir du Web. CITALA '07. Colloque international du traitement automatique de la langue arabe. K Meftouh, K Smaïli, M T Et Laskri, Iera, Rabat, MoroccoMeftouh K., Smaïli K. et Laskri M.T. (2007). Constitution d'un corpus de la langue arabe à partir du Web. CITALA '07. Colloque international du traitement automatique de la langue arabe. Iera, Rabat, Morocco, 17-18 juin 2007.
Parallel strands: A preliminary investigation into mining the web for bilingual text, in conference of the association for machine translation in the Americas. P Resnik, P. Resnik, Parallel strands: A preliminary investigation into min- ing the web for bilingual text, in conference of the association for machine translation in the Americas, 1998.
The 'body'and the 'web': The web as corpus ten years on. Maristella Gatto, ICAME JOURNAL. 35GATTO, Maristella. The 'body'and the 'web': The web as corpus ten years on.ICAME JOURNAL, 2011, vol. 35, p. 35-58.
Web as corpus. A Kilgarriff, G Grefenstette, Proceedings of Corpus Linguistics. Corpus LinguisticsCorpus Linguistics. Readings in a Widening DisciplineKilgarriff, A., & Grefenstette, G. (2001, March). Web as corpus. In Proceedings of Corpus Linguistics 2001 (pp. 342-344). Corpus Linguistics. Readings in a Widening Discipline.
F. Issac, T. Hamon, L. Bouchard, L. Emirkanian, C. Fouqueré. Extraction informatique de données sur le web : une expérience. In Multimédia, Internet et francophonie : à la recherche d'un dialogue, Vancouver, Canada, mars 2001.
A review of Arabic corpus analysis tools. E Atwell, L Al-Sulaiti, S Al-Osaimi, B Abu Shawar, Proceedings of TALN04: XI Conference sur le Traitement Automatique des Langues Naturelles. B. Bel & I. MarlienTALN04: XI Conference sur le Traitement Automatique des Langues NaturellesATALA2Atwell, E., Al-Sulaiti, L., Al-Osaimi, S. & Abu Shawar, B. (2004). A review of Arabic corpus analysis tools. In B. Bel & I. Marlien (Eds.), Proceedings of TALN04: XI Conference sur le Traitement Automatique des Langues Naturelles (volume 2, pp. 229-234). ATALA.
The Absence of Arabic Corpus Linguistics: A Call for Creating an Arabic National Corpus. Mohamed Abdelmageed Mansour, International Journal of Humanities and Social Science. 312Mohamed Abdelmageed Mansour, The Absence of Arabic Corpus Linguistics: A Call for Creating an Arabic National Corpus, Inter- national Journal of Humanities and Social Science, Vol. 3 No. 12 [Special Issue -June 2013]
Mining the web to create minority language corpora. R Ghani, R Jones, D Mladenic, R. Ghani, R. Jones, D. Mladenic, Mining the web to create minori- ty language corpora, CIKM 2001, 279-286.
Bootcat: Bootstrapping corpora and terms from the web, proceeding of LREC. M Baroni, S Bernardini, M. Baroni, S. Bernardini, Bootcat: Bootstrapping corpora and terms from the web, proceeding of LREC 2004, 1313-1316.
Automated construction and evaluation of Japanese web-based reference corpora, proceedings of corpus linguistics. M Ueyama, M Baroni, M. Ueyama, M. Baroni, Automated construction and evaluation of Japanese web-based reference corpora, proceedings of corpus lin- guistics, 2005.
Creating general-purpose corpora using automated search engine. S Sharoff, Wacky! Working papers on the web as corpus. Bologna: GEDITS. Sharoff, Creating general-purpose corpora using automated search engine, in Wacky! Working papers on the web as corpus, Bologna: GEDIT 2006, 63-98.
Using the web in building a corpus-based hypernymy-hyponymy Lexicon with hierarchical structure for Arabic. Faculty of Computers and Information. K Elghamry, Elghamry, K. (2008). Using the web in building a corpus-based hypernymy-hyponymy Lexicon with hierarchical structure for Ar- abic. Faculty of Computers and Information, 157-165.
Automatic summarization of arabic texts based on RST technique. Mohamed Hédi Maâloul, Iskander Keskes, 12th International Conference on Enterprise Information Systems (ICEIS'2010). PortugalLamia Hadrich Belguith and Philippe Blache. 8 au 12 juin 2010Mohamed Hédi Maâloul,Iskander Keskes, Lamia Hadrich Bel- guith and Philippe Blache, "Automatic summarization of arabic texts based on RST technique",12th International Conference on Enterprise Information Systems (ICEIS'2010), 8 au 12 juin 2010, Portugal.
Web Arabic corpus: Construction d'un large corpus arabe annoté morpho-syntaxiquement à partir du Web. D Ghoul, ACTES DU COLLOQUE. 12Ghoul, D. Web Arabic corpus: Construction d'un large corpus arabe annoté morpho-syntaxiquement à partir du Web. In ACTES DU COLLOQUE , 2014 (p. 12).
arTenTen: Arabic Corpus and Word Sketches. T Arts, Y Belinkov, N Habash, A Kilgarriff, V Suchomel, 10.1016/j.jksuci.2014.06.009Journal of King Saud University-Computer and Information Sciences. 264Arts, T., Belinkov, Y., Habash, N., Kilgarriff, A., & Suchomel, V. (2014). arTenTen: Arabic Corpus and Word Sketches. Journal of King Saud University-Computer and Information Sciences, 26(4), 357-371. http://dx.doi.org/10.1016/j.jksuci.2014.06.009
Jakubíček, M., Kilgarriff, A., Kovář, V., Rychlý, P., and Suchomel, V. 2013. The TenTen Corpus Family. International Conference on Corpus Linguistics, Lancaster.
Removing Boilerplate and Duplicate Content from Web Corpora. J Pomikalek, BrnoMasaryk UniversityPhD thesisPomikalek, J. 2011. Removing Boilerplate and Duplicate Content from Web Corpora. PhD thesis, Masaryk University, Brno.
Arabic Tokenization, Part-of-Speech Tagging and Morphological Disambiguation in One Fell Swoop. N Habash, O Rambow, Proc. of the Association for Computational Linguistics (ACL'05). of the Association for Computational Linguistics (ACL'05)Ann Arbor, MichiganHabash, N. and Rambow, O. 2005. Arabic Tokenization, Part-of- Speech Tagging and Morphological Disambiguation in One Fell Swoop. In Proc. of the Association for Computational Linguistics (ACL'05), Ann Arbor, Michigan.
MADA+TOKAN: A Toolkit for Arabic Tokenization, Diacritization, Morphological Disambiguation, POS Tagging, Stemming and Lemmatization. N Habash, O Rambow, R Roth, Proc. of the International Conference on Arabic Language Resources and Tools. of the International Conference on Arabic Language Resources and ToolsCairo, EgyptHabash, N., Rambow, O. and Roth, R. 2009. MADA+TOKAN: A Toolkit for Arabic Tokenization, Diacritization, Morphological Disambiguation, POS Tagging, Stemming and Lemmatization. In Proc. of the International Conference on Arabic Language Re- sources and Tools, Cairo, Egypt.
Volk, M. (2002). Using the web as corpus for linguistic research. In Catcher of the Meaning. Pajusalu, R., Hennoste, T. (Eds.). Dept. of General Linguistics, 3.
Using the Web to Obtain Frequencies for Unseen Bigrams. F Keller, M Lapata, 10.1162/089120103322711604Computational Linguistics. 29Keller, F. and M. Lapata. 2003. Using the Web to Obtain Fre- quencies for Unseen Bigrams. Computational Linguistics 29:3, 459-484. http://dx.doi.org/10.1162/089120103322711604
A corpus balancing method for language model construction. L Villaseñor-Pineda, M Montes-Y-Gómez, M A Pérez-Coutiño, D Vaufreydaz, 10.1007/3-540-36456-0_40Computational Linguistics and Intelligent Text Processing. Berlin HeidelbergSpringerVillaseñor-Pineda, L., Montes-y-Gómez, M., Pérez-Coutiño, M. A., & Vaufreydaz, D. (2003). A corpus balancing method for lan- guage model construction. In Computational Linguistics and Intel- ligent Text Processing (pp. 393-401). Springer Berlin Heidelberg. http://dx.doi.org/10.1007/3-540-36456-0_40
Introduction to the special issue on the Web as corpus. A Kilgarrif, G Grefenstette, Association for computational linguistics. 293A. Kilgarrif, G. Grefenstette, Introduction to the special issue on the Web as corpus, in Association for computational linguistics 29(3): 333-347, 2003.
Arabic Anaphora Resolution Using Web as Corpus. K Elghamry, R Al-Sabbagh, N El-Zeiny, Proceedings of the seventh conference on language engineering. the seventh conference on language engineeringCairo, EgyptElghamry, K., Al-Sabbagh, R., El-Zeiny, N. (2007). "Arabic Anaphora Resolution Using Web as Corpus", Proceedings of the seventh conference on language engineering, Cairo, Egypt.
Traduction automatique statistique à partir de corpus comparables : application aux couples de langues arabe-français. Rahma Sellami, Fatiha Sadat, Lamia Hadrich Belguith, CORIA. Rahma Sellami, Fatiha Sadat, and Lamia Hadrich Belguith. 2013. Traduction automatique statistique à partir de corpus comparables : application aux couples de langues arabe-français. In CORIA, pages 431-440.
Greenwood, Mark, Ian Roberts, and Robert Gaizauskas. 2002. University of Sheffield TREC 2002 Q&A system. In E. M. Voorhees and Lori P. Buckland, editors, The Eleventh Text REtrieval Conference (TREC-11), Washington. U.S. Government Printing Office. NIST Special Publication 500-XXX.
Dumais, Susan, Michele Banko, Eric Brill, Jimmy Lin, and Andrew Ng. 2002. Web question answering: is more always better? In Proc. 25th ACM SIGIR, pages 291-298, Tampere, Finland. http://dx.doi.org/10.1145/564376.564428
Literature Review of Arabic Question-Answering: Modeling, Generation, Experimentation and Performance Analysis. W Bakari, P Bellot, M Neji, Flexible Query Answering Systems 2015. Springer International PublishingBakari, W., Bellot, P., and Neji, M. (2015). Literature Review of Arabic Question-Answering: Modeling, Generation, Experimenta- tion and Performance Analysis. In Flexible Query Answering Sys- tems 2015 (pp. 321-334). Springer International Publishing.
Logicbased approach for improving Arabic question answering. W Bakari, O Trigui, M Neji, 10.1109/iccic.2014.7238319Computational Intelligence and Computing Research (ICCIC). IEEE International Conference onBakari, W., Trigui, O., and Neji, M. (2014, December). Logic- based approach for improving Arabic question answering. In Computational Intelligence and Computing Research (ICCIC), 2014 IEEE International Conference on (pp. 1-6). IEEE. http://dx.doi.org/10.1109/iccic.2014.7238319
Zheng, Zhiping. 2002. AnswerBus question answering system. In E. M. Voorhees and Lori P. Buckland, editors, Proceedings of the HLT Human Language Technology Conference (HLT 2002), San Diego, CA, March 24-27. http://dx.doi.org/10.3115/1289189.1289238
DefArabicQA: Arabic Definition Question Answering System. O Trigui, L Belguith, P Rosso, Workshop on Language Resources and Human Language Technologies for Semitic Languages, 7th LREC. Valletta, MaltaO. Trigui, L.H Belguith and P. Rosso, "DefArabicQA: Arabic Definition Question Answering System", In Workshop on Lan- guage Resources and Human Language Technologies for Semitic Languages, 7th LREC. Valletta, Malta. 2010.
Exploiting the www as a corpus to resolve pp attachment ambiguities. Martin Volk, Proc. Corpus Linguistics. Corpus LinguisticsUKLancasterVolk, Martin. 2001. Exploiting the www as a corpus to resolve pp attachment ambiguities. In Proc. Corpus Linguistics 2001, Lancas- ter, UK.
McEnery, A. M. (2003). Corpus linguistics. In R. Mitkov (Ed.), The Oxford handbook of computational linguistics (pp. 448-463). Oxford: Oxford University Press.
QARAB: A Question Answering System to Support the Arabic Language. B Hammo, H Abu-Salem, S Lytinen, M Evens, Workshop on Computational Approaches to Semitic Languages. ACL. Philadelphia, PAHammo B., Abu-Salem H., Lytinen S., Evens M., "QARAB: A Question Answering System to Support the Arabic Language", Workshop on Computational Approaches to Semitic Languages. ACL 2002, Philadelphia, PA, July. p 55-65, 2002
http://dx.doi.org/10.3115/1118637.1118644
A survey of Arabic question answering: challenges, tasks, approaches, tools, and future trends. A M Ezzeldin, M Shaheen, 1812-0857the 13th International Arab Conference on Information Technology ACIT'2012. Dec. Ezzeldin A. M. and Shaheen M. (2012). "A survey of Arabic question answering: challenges, tasks, approaches, tools, and fu- ture trends", the 13th International Arab Conference on Infor- mation Technology ACIT'2012. Dec.10-13. ISSN 1812-0857.
Al-Bayan: An Arabic Question Answering System for the Holy Quran. H Abdelnasser, R Mohamed, M Ragab, A Mohamed, B Farouk, N El-Makky, M Torki, 10.3115/v1/w14-3607ANLP. 57Abdelnasser, H., Mohamed, R., Ragab, M., Mohamed, A., Farouk, B., El-Makky, N., & Torki, M. (2014). Al-Bayan: An Arabic Question Answering System for the Holy Quran. ANLP 2014, 57. http://dx.doi.org/10.3115/v1/w14-3607
AQUASYS: an arabic question-answering system based on extensive question analysis and answer relevance scoring. S Bekhti, A Rehman, M Al-Harbi, T Saba, In International Journal of Academic Research. 345Bekhti S., Rehman A., AL-Harbi M. and Saba T. (2011)."AQUASYS: an arabic question-answering system based on extensive question analysis and answer relevance scoring", In International Journal of Academic Research; Jul2011, Vol. 3 Is- sue 4, p45.
| [] |
[
"Ranking medical jargon in electronic health record notes by adapted distant supervision",
"Ranking medical jargon in electronic health record notes by adapted distant supervision"
] | [
"Jinying Chen jinying.chen@umassmed.edu \nDepartment of Quantitative Health Sciences\nUniversity of Massachusetts Medical School\nWorcesterMAUSA\n",
"Abhyuday N Jagannatha abhyuday@cs.umass.edu \nSchool of Computer Science\nUniversity of Massachusetts\nAmherstMAUSA\n",
"Samah J Jarad \nYale Center for Medical Informatics\nYale University\nNew HavenCTUSA\n",
"Hong Yu hong.yu@umassmed.edu \nDepartment of Quantitative Health Sciences\nUniversity of Massachusetts Medical School\nWorcesterMAUSA\n\nCenter for Healthcare Organization and Implementation Research\nBedford Veterans Affairs Medical Center\nBedfordMAUnited States\n"
] | [
"Department of Quantitative Health Sciences\nUniversity of Massachusetts Medical School\nWorcesterMAUSA",
"School of Computer Science\nUniversity of Massachusetts\nAmherstMAUSA",
"Yale Center for Medical Informatics\nYale University\nNew HavenCTUSA",
"Department of Quantitative Health Sciences\nUniversity of Massachusetts Medical School\nWorcesterMAUSA",
"Center for Healthcare Organization and Implementation Research\nBedford Veterans Affairs Medical Center\nBedfordMAUnited States"
] | [] | Objective: Allowing patients to access their own electronic health record (EHR) notes through online patient portals has the potential to improve patient-centered care. However, medical jargon, which abounds in EHR notes, has been shown to be a barrier for patient EHR comprehension. Existing knowledge bases that link medical jargon to lay terms or definitions play an important role in alleviating this problem but have low coverage of medical jargon in EHRs. We developed a data-driven approach that mines EHRs to identify and rank medical jargon based on its importance to patients, to support the building of EHR-centric lay language resources.Methods:We developed an innovative adapted distant supervision (ADS) model based on support vector machines to rank medical jargon from EHRs. For distant supervision, we utilized the open-access, collaborative consumer health vocabulary, a large, publicly available resource that links lay terms to medical jargon. We explored both knowledge-based features from the Unified Medical Language System and distributed word representations (word embeddings) learned from unlabeled large corpora. We evaluated the ADS model using physician-identified important medical terms.Results: Our ADS model significantly surpassed two state-of-the-art automatic term recognition methods, TF*IDF and C-Value, yielding 0.810 ROC-AUC versus 0.710 and 0.667, respectively.Our model identified over 10K important medical jargon terms after ranking over 100K candidate terms mined from over 7,500 EHR narratives.Conclusion:Our work is an important step towards enriching lexical resources that link medical jargon to lay terms/definitions to support patient EHR comprehension. The identified medical jargon terms and their rankings are available upon request. | null | [
"https://arxiv.org/pdf/1611.04491v1.pdf"
] | 18,994,751 | 1611.04491 | 3d1fbbe5eba1ba8deef5a960f9028e933ee96d55 |
Ranking medical jargon in electronic health record notes by adapted distant supervision
Jinying Chen jinying.chen@umassmed.edu
Department of Quantitative Health Sciences
University of Massachusetts Medical School
WorcesterMAUSA
Abhyuday N Jagannatha abhyuday@cs.umass.edu
School of Computer Science
University of Massachusetts
AmherstMAUSA
Samah J Jarad
Yale Center for Medical Informatics
Yale University
New HavenCTUSA
Hong Yu hong.yu@umassmed.edu
Department of Quantitative Health Sciences
University of Massachusetts Medical School
WorcesterMAUSA
Center for Healthcare Organization and Implementation Research
Bedford Veterans Affairs Medical Center
BedfordMAUnited States
Ranking medical jargon in electronic health record notes by adapted distant supervision
natural language processingelectronic health recordinformation extractiondistant supervisionranking
Objective: Allowing patients to access their own electronic health record (EHR) notes through online patient portals has the potential to improve patient-centered care. However, medical jargon, which abounds in EHR notes, has been shown to be a barrier for patient EHR comprehension. Existing knowledge bases that link medical jargon to lay terms or definitions play an important role in alleviating this problem but have low coverage of medical jargon in EHRs. We developed a data-driven approach that mines EHRs to identify and rank medical jargon based on its importance to patients, to support the building of EHR-centric lay language resources.Methods:We developed an innovative adapted distant supervision (ADS) model based on support vector machines to rank medical jargon from EHRs. For distant supervision, we utilized the open-access, collaborative consumer health vocabulary, a large, publicly available resource that links lay terms to medical jargon. We explored both knowledge-based features from the Unified Medical Language System and distributed word representations (word embeddings) learned from unlabeled large corpora. We evaluated the ADS model using physician-identified important medical terms.Results: Our ADS model significantly surpassed two state-of-the-art automatic term recognition methods, TF*IDF and C-Value, yielding 0.810 ROC-AUC versus 0.710 and 0.667, respectively.Our model identified over 10K important medical jargon terms after ranking over 100K candidate terms mined from over 7,500 EHR narratives.Conclusion:Our work is an important step towards enriching lexical resources that link medical jargon to lay terms/definitions to support patient EHR comprehension. The identified medical jargon terms and their rankings are available upon request.
INTRODUCTION
Patient portals, including My HealtheVet [1], have been embraced by many healthcare organizations for patient-clinician communication. Allowing patients to access their EHR notes helps improve their disease understanding, self-management and outcomes [1,2]. However, studies have shown that patients often have difficulty in comprehending medical jargon [3][4][5][6][7] (here "medical jargon" is defined as "technical terminology or special words that are used by medical professions and are difficult for others to understand"), limiting their ability to understand their clinical status [5,6]. Figure 1 shows a sample text found in a typical clinical note. The medical terms that may hinder patients' comprehension are italicized. In addition, those medical jargon terms judged by physicians to be important for patient understanding are also underlined. To reduce the communication gap between patients and clinicians, there have been decades of research efforts in creating medical resource for lay people [8]. Natural language processing methods have also been developed to automatically substitute medical jargon with lay terms [9][10][11] or to link them to consumer-oriented definitions [12]. These approaches require high-quality lexical resources of medical jargon and lay terms/definitions. The open-access collaborative consumer health vocabulary (CHV) is one such resource [13]. It has been incorporated into the Unified Medical Language System and has also been used in EHR simplification [9,10].
Research in CHV has been motivated by the vocabulary discrepancies between lay people and health care professionals [14][15][16][17]. CHV incorporates terms extracted from various consumer health sites, such as queries submitted to MedLinePlus, a consumer-oriented online knowledge resource maintained by the National Library of Medicine, and from postings in health-focused online discussion forums [18,19] Language System by a semi-automatic approach. As the result of this work, CHV encompasses lay terms as well as corresponding medical jargon.
From our current work, we found that CHV alone is not sufficient as a lexical resource for comprehending EHR notes, as many medical jargon terms in EHRs do not exist in CHV, and many others exist in CHV but with lay terms identical to the jargon terms themselves. For example, in CHV, the respective lay terms of 19,674 jargon terms (e.g., "neurocytoma", "lymphangiomatosis", and "laryngeal carcinoma") are themselves. Although CHV provides lay definitions to some of these terms, 18,823 (96%) terms remain to be unannotated (i.e., have neither appropriate lay terms nor lay definitions).
The goal of this study is to identify medical jargon from EHRs for the purpose of creating new lexical entries to link medical jargon to lay terms/definitions. Since the size of medical jargon is large (tens of thousands of terms), we will rank them based on how important they are to lay people, and therefore prioritize the annotation effort of lexical entries on those important terms. Specifically, we proposed and developed an adapted distant-supervision (ADS) model to rank terms in EHRs to prioritize those important medical jargon terms for lay language annotation. We made a novel use of a non-EHR-centric resource (i.e., CHV) for distant supervision and showed promising results from using this approach. Our work is different from previous work in building biomedical lexical resources. Previous work either uses unsupervised automatic term recognition methods [21][22][23] or uses supervised learning (when human annotations are available) [21] to prioritize terms.
Our contributions are twofold. Firstly, we develop and evaluate the ADS model, a new model that mines and ranks medical terms from large EHR corpora based on terms' importance to patients.
Secondly, we apply our ADS model to rank over 100K EHR terms to prioritize 10K important terms for lay language annotation.
In addition, the ranking methods we developed have a great potential to be applied to other clinical natural language processing tasks, including generating features for keyphrase extraction, information retrieval, summarization, and question answering.
MATERIALS AND METHODS
EHR Corpora and Candidate Terms
In this study, we utilized two EHR corpora: EHR-Pittsburg and EHR-UMass.
EHR-Pittsburg 1 contains 7,839 discharge summary notes with 5.4M words. We applied the linguistic filter of the automatic term recognition toolkit Jate [24] to this corpus and extracted 106,108 candidate terms (see Step 1 in Figure 2). These candidate terms were further used to identify and rank medical jargon terms. EHR-UMass contains 90 de-identified EHR notes from the UMass Memorial Hospital outpatient clinics. To maximize the representativeness, we selected notes from patients with six common primary clinical diagnoses: cancer, COPD, diabetes, heart failure, hypertension, and liver failure.
We de-identified the notes and then asked physicians to identify, for each note, terms important to patients [25]. Specifically, physicians were asked to identify EHR terms which the patients need to know to comprehend the notes for the most important aspects medically relevant to their health and treatment course. For each note, we obtained annotations from two physicians. Three physicians did the annotation and annotated 48, 68, and 64 notes respectively. We used this expert-annotated corpus to create a dataset for evaluating ADS (details in the subsection Evaluation Using the EHR-UMass Dataset).
Medical Jargon in CHV
In this study, we evaluated the coverage of CHV for medical jargon in EHR. In addition, we used medical jargon in CHV to create training data for the ADS model. We followed [9] (i.e., CHV familiarity score ≤ 0.6) to identify medical jargon terms in CHV.
Baseline Systems
We evaluated two baseline systems for ranking medical terms: Corpus-level TF*IDF [24] and C-Value [26]. Both baselines are state-of-the-art automatic term recognition methods that have proved to be effective in identifying domain-specific biomedical terms [24,26]. EHR notes are abundant in medical jargon. The two methods, by their definitions described below, are expected to be able to identify these medical terms and rank them by their importance to the EHR corpora. When applying the two methods to our task, we made an assumption that medical terms salient in a large EHR corpus are important to patients (based on the fact that EHR notes are documents that record patients' medical and treatment course) and therefore should be prioritized for annotation.
Corpus-level TF*IDF: TF*IDF [27] is a widely used metric for measuring the importance of a term to a document d in a corpus D. The more frequently the term appears in the document and the less frequently it appears in other documents, the more important it is to this document. We used Corpus-level TF*IDF (abbreviated as TF*IDF in the rest of the paper) to measure the importance of a candidate term t to a corpus D by summing up a term's TF*IDF per document over the corpus, as defined in Equation (1):
$$\mathrm{TF{*}IDF}(t, D) = \sum_{d \in D} \big( tf(t,d) \times idf(t,D) \big) \qquad (1)$$
where $d$ is any document in $D$; $tf(t,d)$ is the term frequency of $t$ in $d$; and $idf(t,D)$ is the inverse document frequency of $t$ in $D$, which indicates the importance of $t$ across the corpus $D$ and is defined as $\frac{|D|}{|\{d \in D \,:\, t \in d\}|}$.
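To make Equation (1) concrete, the following sketch computes corpus-level TF*IDF on a toy corpus; whitespace tokenization and the example terms are simplifications standing in for the Jate candidate-term pipeline, and idf is taken exactly as defined above.

```python
# Minimal sketch of corpus-level TF*IDF (Equation 1) on a toy corpus.
# Real candidate terms would come from the Jate linguistic filter; here we use
# simple whitespace tokens purely for illustration.
from collections import Counter

corpus = [
    "patient denies chest pain",
    "chest pain resolved after treatment",
    "patient discharged home",
]
docs = [doc.split() for doc in corpus]

def tf(term, doc):                      # term frequency of t in d
    return Counter(doc)[term]

def idf(term, docs):                    # |D| / |{d in D : t in d}|
    df = sum(1 for d in docs if term in d)
    return len(docs) / df if df else 0.0

def corpus_tfidf(term, docs):           # sum of tf * idf over all documents
    return sum(tf(term, d) * idf(term, docs) for d in docs)

for term in ["chest", "patient", "treatment"]:
    print(term, corpus_tfidf(term, docs))
```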
C-Value: C-Value is a widely used method for extracting terminology from text corpora. It has been used to prioritize health and biomedical terms for developing lexical resources, including CHV [22,23]. It measures the importance of a term t in a corpus D by its frequency in the corpus tf(t,D). If a term is nested in other longer terms, C-Value penalizes it, as defined in Equation (2):
$$\mathrm{C\text{-}Value}(t, D) = \begin{cases} \log_2 |t| \times tf(t,D), & \text{if } t \text{ is not nested in other candidate terms} \\ \log_2 |t| \times \Big( tf(t,D) - \frac{1}{|T_t|} \sum_{b \in T_t} tf(b,D) \Big), & \text{otherwise} \end{cases} \qquad (2)$$
where $|t|$ is the number of words contained in $t$; $T_t$ is the set of all longer candidate terms (phrases) $b$ that contain $t$; and $|T_t|$ is the number of terms in $T_t$.
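The sketch below illustrates Equation (2) on a handful of candidate terms; the frequencies and the nesting test (a simple word-sequence containment check) are toy stand-ins for real corpus statistics. Note that under this exact formula a single-word term receives weight log2(1) = 0; practical implementations often smooth this.

```python
# Minimal sketch of C-Value (Equation 2) for a few candidate terms.
# tf values and the vocabulary are hypothetical, not corpus statistics.
import math

term_freq = {                       # tf(t, D): hypothetical corpus frequencies
    "cell": 40,
    "t-cell lymphoma": 12,
    "cutaneous t-cell lymphoma": 5,
}

def nesting_terms(term, vocabulary):
    """Longer candidate terms that contain `term` as a word sequence."""
    return [b for b in vocabulary if b != term and f" {term} " in f" {b} "]

def c_value(term, term_freq):
    weight = math.log2(len(term.split()))          # 0 for single-word terms
    nested_in = nesting_terms(term, term_freq)
    if not nested_in:
        return weight * term_freq[term]
    penalty = sum(term_freq[b] for b in nested_in) / len(nested_in)
    return weight * (term_freq[term] - penalty)

for t in term_freq:
    print(t, round(c_value(t, term_freq), 2))
```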
The ADS Model
Our ADS model is a case of learning from positive and unlabeled data [28][29][30][31][32]. In particular, Elkan We used CHV to select and label positive examples to train ADS (see Step 2 in Figure 2). Our approach is based on an assumption that medical terms important to patients must be used by patients. Specifically, we assume that medical terms that occur in both EHRs and CHV are important to patients because they are medical synonyms of terms initially identified from queries and postings generated by patients in online health forums. Based on this assumption, we used medical jargon terms in both EHRs and CHV (called EHR-CHV medical jargon terms) as positive examples because our goal is to prioritize important medical jargon terms from EHRs. In addition to the positive examples labeled by CHV, some unlabeled EHR terms may be also important and therefore should also be ranked high. We achieved this goal by learning from positive and unlabeled data. Figure 2 illustrates data extraction, ADS and its evaluation.
Our ADS model has two major components: the support vector machine classifier and the features used for classification.
The Support Vector Machine Classifier
Previous work has shown that support vector machines [33] are effective in learning from positive and unlabeled data [29,32,34], our ADS is therefore built upon support vector machines. We employed the widely used and reproducible LibSVM package [35]. We chose the RBF-kernel support vector machine as we found it performed better than support vector machines using linear and polynomial kernels in our preliminary experiments. We assigned a rank score to each term using LibSVM's probabilistic outputs [36,37]. We used these probabilistic rank scores to merge the rankings from 10-fold runs to obtain the global rank of a term.
Features
We developed three types of learning features: (1) confidence scores as computed by automatic term recognition (ATR) (2) Unified Medical Language System semantic types (Sem), and (3) distributed word representation or word embedding (WE).
Confidence scores from automatic term recognition: we used the confidence scores from TF*IDF and C-value.
Unified Medical Language System semantic types: We mapped candidate terms to Unified Medical
Language System concepts and included semantic types for those concepts that had an exact match or a head-noun match as features. Each semantic type is a 0-1 binary feature. This type of feature has been used to identify domain-specific medical terms [12,22]. In this work, we made an assumption that different semantic types contribute differently to a term's importance to patients.
We relied on the support vector machine classifier to learn the weight/contribution of each semantic type.
Distributed word representation (word embedding): Word embeddings are distributed vector representations of words. Each dimension of a word vector has a real value ranged between 0 and 1. We treated each dimension as a feature.
Because word embedding vectors are learned from large text corpora and incorporate syntactic and semantic properties of words, words sharing similar semantics and context are expected to be close in their word vector space [38,39]. In this work, we made an assumption that medical terms that are important to patients share similar semantics and context. Therefore, word embeddings are likely to be useful features for learning important medical terms.
We trained a neural language model to learn word embeddings. Specifically, we used Word2Vec software, which supports efficient computations on large datasets, to create the Skip Gram word embeddings [38,39]. We trained Word2Vec using a combined text corpus (over 3G words) of English Wikipedia, articles from PubMed Open Access and 99K EHR notes from EHR-Pittsburg.
We set the training parameters based on the study of Pyysalo et al. [40]. We represented multiword terms with the mean of individual word vectors.
Training and Evaluation Datasets
We used two datasets in our study. The EHR-Pittsburg dataset was used for training, evaluation, and generating the global ranking of candidate terms, while the EHR-UMass dataset was used for evaluation. The statistics of these two datasets are summarized in Table 1
Training Using the EHR-Pittsburg Dataset
The numbers of terms used as positive and unlabeled data were 6,959 and 99,149, respectively (see
Step 2 in Figure 2). For training, we divided the data into 10 folds. We used 9 folds to train the ADS model and applied it to the remaining fold to obtain the rank scores of the candidate terms.
We produced a total of 10 ranking outputs, one for each fold. We then merged the 10 outputs to produce a global ranking, which we evaluated using a metrics that measures system performance for learning from positive and unlabeled data (details in the subsection Evaluation Metrics).
Evaluation Using the EHR-UMass Dataset
Because the EHR-UMass corpus was annotated by physicians in such a way that terms that are important to patients were labeled as "positive", we utilized their annotations to create a data set with both positive and negative examples for evaluation (see Step 4 in Figure 2). Specifically, we applied Jate to extract 6,280 candidate terms from the EHR-UMass corpus. Candidate terms exactly matching the physician-annotated important terms were labeled as positive. The remaining candidate terms were labeled as negative. In total, we obtained 1,018 positive and 5,262 negative examples, which we included in the EHR-UMass dataset for evaluating ADS. We compute the C-Value and TF*IDF scores for the terms in this dataset by using a large EHR corpus that contains 6K notes (including the 90 EHR-UMass notes) collected using the same six diagnoses used for collecting the 90 notes.
Post-processing
As we found that the performances of TF*IDF and C-Value were negatively affected by highfrequency, common terms (e.g., "patient" and "pain") in EHRs, we added a post-processing procedure that used a stopwords list to filter out common terms from the models' outputs. This list contains 100 high-frequency non-medical terms in the EHR-UMass corpus. In addition, we used regular expressions to rank low compound terms that contain "not" "no", "and", or "or". In our evaluation using the EHR-UMass dataset, we report both conditions: with and without postprocessing.
Evaluation Metrics
Receiver Operating Characteristic (ROC) and Area Under ROC Curve (ROC-AUC): ROC curve
is a metrics widely used for evaluating ranking outputs. It plots the true positive rate (y-coordinate) against the false positive rate (x-coordinate) at various threshold settings. We report both ROC and ROC-AUC by using the R library pROC [41].
Metrics for learning from Positive and Unlabeled data (PU
RESULTS
CHV Coverage of Medical Jargon in EHRs
From the EHR-Pittsburg corpus, we extracted 106,108 candidate terms, on which we applied MetaMap [43] to select medical terms. A total of 19,503 (18%) of the candidate terms were successfully mapped to Unified Medical Language System concepts by MetaMap. However, 4,680 (24%) of these medical terms do not appear in CHV. We manually examined those terms and found a majority of them were medical jargon terms, such as "Bruton agammaglobulinemia", "molecular diagnostics", "motor symptom", and "reactive lymphocytosis". The remaining 86,605 (82%) candidate terms also contained medical jargon terms, such as "anticardiolipin", "BGM", 2 "demargination", "heptoglobin", "hypoalimentation", and "hypobilirubinemia". Figure 3 plots the PU metrics as a function of rank k for the ADS model and two baseline systems on the EHR-Pittsburg dataset. The PU metrics curve of ADS rapidly reaches to the peak at k = 9,229, and then declines sharply. In contrast, the two baseline systems' PU metrics are relatively stable. Overall, ADS was consistently better than the two baselines for all k.
ADS Ranking Performance on EHR-Pittsburg Dataset
ADS Ranking Performance on EHR-UMass Dataset
DISCUSSION
Principal Results
We find that CHV has incomplete coverage of medical jargon in EHRs. We therefore developed the ADS model to rank 100K candidate terms from the EHR-Pittsburg corpus and prioritized our annotation of lay terms/definitions for top-ranked terms. ADS ranks EHR terms based on the assumption that medical terms that occur in both EHRs and CHV are important to patients. ADS achieved 0.810 ROC-AUC on the EHR-UMass dataset ( Table 2). This level of performance is adequate, especially considering that ADS does not use any human-annotated training data. This result indirectly verifies the validity of our assumption. It also suggests that we can use ADS to prioritize EHR terms for annotation.
Interpretation of the PU Metrics Curves
In Figure 3, the performance of ADS on the EHR-Pittsburg dataset reaches a peak at rank k = 9,229, with a sharp drop afterwards. At its peak point, ADS is able to identify 5,248 (75%) of the total 6,959 labeled positive terms (i.e., EHR-CHV medical jargon terms). Most of them are important medical jargon, including "premature atrial contraction", "polycythemia", "T-cell lymphoma", and "thallium stress test". The non-CHV terms that were ranked by ADS in the top 10K also contain many important medical jargon terms, such as "chronic respiratory insufficiency", "nasogastric decompression", and "preoperative chemotherapy". At lower ranks, ADS is less effective in identifying EHR-CHV jargon terms, finding more common terms such as "nose", "activate", "training", and "dust". This result suggests that we can use the rank 10K as a threshold to divide the ranked terms into high-quality and low-quality groups. Terms in the high-quality group are being used to support annotation of lay terms/definitions.
The curves of TF*IDF and C-Value are relatively flat because term ranking is based on statistical values and therefore insensitive to CHV terms, as opposed to ADS, which is supervised by CHV. Figure 4 and Table 2 show that ADS outperforms the two baselines on the EHR-UMass dataset.
ADS and the Baselines TF*IDF and C-Value
Our result analysis shows that the three models have similar performance at top ranks (top-n lists where n<30) and ADS has much better performance at lower ranks. We manually checked the top-10 erroneous terms identified from the EHR-UMass dataset (with post-processing), shown in Table 4 where the EHR-CHV medical jargon terms are bolded. As shown in Table 4, the top-10 errors identified by ADS are all EHR-CHV medical jargon terms (bolded). Because the physicians only identified few (15 per note, on average) important medical terms from each EHR note, they did not mark many CHV terms as important. However, this type of error may not be critical for our annotation task. For example, some CHV terms not marked by physicians are still worth annotating with lay definitions/terms for EHR comprehension, e.g., "Raynaud"(Raynaud's disease), "pyelonephristis", "onychomycosis", and "cholestasis".
Although post-processing boosted TF*IDF and C-Value performances, they still ranked some common terms high (e.g., "surgery", "sleep", "liver", "abdominal pain", and "blood sugar"). In addition, C-Value ranks many multi-word terms high because it favors long phrases by its definition.
Table 4. Top-10 erroneous terms identified from the EHR-UMass dataset by ADS, TF*IDF and C-Value, with post-processing
ADS: Raynaud, macrolide, polyuria, Tegretol, aminoglycoside, pyelonephritis, onychomycosis, cholestasis, coarctation, Imuran
TF*IDF: surgery, lesion, area, sleep, continue, issue, liver, breast, Allscripts, state
C-Value: abdominal pain, past medical history, blood sugar, p.r.n., normal limit, CT scan, vital sign, family history, low back pain, soft nontender nondistended
The false-negative terms predicted by the three systems (i.e., important medical jargon terms that were ranked low) are also different. ADS often missed medical terms that contain easy words, such as "pancreatic enzyme replacement" and "chronic lower extremity edema". One reason for this error is that these terms have similar word embedding and/or semantic type features as lay (negative) terms (e.g., "replacement", "chronic", "lower", and "extremity"). Using advanced phrase embedding techniques may alleviate this problem, which we may explore in future. Different from ADS, TF*IDF and C-Value often missed terms that are important but occur infrequently in the 6K EHR-UMass notes, such as "neurodermatitis", "diabetic renal disease", "autonomic neuropathy", and "PML" (progressive multifocal leukoencephalopathy).
CONCLUSION
We report an ADS model for prioritizing medical terms that are important for patient EHR comprehension. Our experiments have shown that ADS is more effective than TF*IDF and C-Value, two methods that are widely used to mine and prioritize terms from large text corpora for building domain-specific lexical resources. The EHR terms prioritized by our model are being used to enrich a comprehensive medical jargon-lay term/definition knowledge resource for EHR simplification. Our top-10K-ranked EHR terms are available upon request.
Figure 1. Illustration of medical jargon in a clinical note
Figure 2. Overview of our approach: data extraction (Steps 1 and 2), ADS (Step 3) and evaluation (Step 4)
Metrics): Evaluating systems that learn from positive and unlabeled data is challenging because the data include unlabeled examples, and thus we can't calculate recall and precision. For evaluation, Lee and Liu [42] introduced PU metrics, r 2 /Pr[system positive], where r is recall and Pr[system positive] is the probability of positive examples predicted by the system. Recall can be estimated as the total number of positive predictions divided by the total number of labeled positive examples, as explained in[42]. We plotted r 2 /Pr[system positive] as a function of the rank k.
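For illustration, the PU metric can be computed at each rank cutoff k as in the sketch below, where the ranking and the set of labeled positives are toy values.

```python
# Sketch of the PU metric r^2 / Pr[system positive] as a function of the rank
# cutoff k. `positives` marks CHV-labeled positive terms; the ranking is a toy.
def pu_metric_at_k(ranked_terms, positives, k):
    top_k = ranked_terms[:k]
    hits = sum(1 for t in top_k if t in positives)
    recall = hits / len(positives)                        # estimated recall r
    pr_system_positive = k / len(ranked_terms)            # fraction predicted positive
    return (recall ** 2) / pr_system_positive if pr_system_positive else 0.0

ranked = ["polycythemia", "nose", "thallium stress test", "dust", "t-cell lymphoma"]
positives = {"polycythemia", "thallium stress test", "t-cell lymphoma"}
for k in range(1, len(ranked) + 1):
    print(k, round(pu_metric_at_k(ranked, positives, k), 3))
```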
Figure 4 plots the ROC curves of ADS, TF*IDF and C-Value in ranking the EHR-UMass terms, without and with post-processing. ADS achieved the best performance. Table 2 shows the ROC-AUC, where ADS outperformed TF*IDF and C-Value by large margins (>15 points, absolute gains). Although post-processing improved the performance of TF*IDF and C-Value substantially, ADS still exhibited better performance.
Figure 3. Plots of the PU metrics (r²/Pr[system positive]) as a function of rank k for different methods in ranking the EHR-Pittsburg terms
Table 2. ROC-AUC values of different methods in ranking the EHR-UMass terms, without and with post-processing
Figure 4. ROC plots of different methods in ranking the EHR-UMass terms: (4a) without post-processing and (4b) with post-processing
. CHV contained 152,338 terms, most of which are consumer health terms [18-20]. Zeng et al. [18] mapped these consumer health terms to the Unified Medical
and Noto[32] have shown that a binary classifier that outputs probability confidence scores on examples can be trained using positive examples and unlabeled examples (treated as "negative") and its confidence scores on new examples can then be used to rank those new examples.
Table 1. A summary of the two datasets used in this study
EHR-Pittsburg — EHR notes: 7,839; # of terms: 106,108; Positive: 6,959; Unlabeled/negative: 99,149 (unlabeled); Purpose of use: train and evaluate ADS, generate the global ranking of candidate terms
EHR-UMass — EHR notes: 90; # of terms: 6,280; Positive: 1,018; Unlabeled/negative: 5,262 (negative); Purpose of use: evaluate ADS
Chapman W. University of Pittsburgh Natural Language Processing Repository (http://www.dbmi.pitt.edu/nlpfront). Using this data requires a license.
BGM ("blood glucose monitoring") is a frequently used acronym in clinical notes. It is only registered as a gene name in the Unified Medical Language System.
ACKNOWLEDGMENTS
Evaluating patient access to Electronic Health Records: results from a survey of veterans. K M Nazi, T P Hogan, D K Mcinnes, Med Care. 51Nazi KM, Hogan TP, McInnes DK, et al. Evaluating patient access to Electronic Health Records: results from a survey of veterans. Med Care 2013;51:S52-S56.
Inviting patients to read doctors' notes. S Strum, 10.7326/0003-4819-156-8-201204170-00016Ann Intern Med. 156608Strum S. Inviting patients to read doctors' notes. Ann Intern Med 2012;156:608. doi:10.7326/0003-4819-156-8-201204170-00016
Medical communication: do our patients understand?. E B Lerner, D V Jehle, D M Janicke, 10.1053/ajem.2000.18040Am J Emerg Med. 18Lerner EB, Jehle DV, Janicke DM, et al. Medical communication: do our patients understand? Am J Emerg Med 2000;18:764-6. doi:10.1053/ajem.2000.18040
Lay understanding of terms used in cancer consultations. K Chapman, C Abraham, V Jenkins, 10.1002/pon.673Psychooncology. 12Chapman K, Abraham C, Jenkins V, et al. Lay understanding of terms used in cancer consultations. Psychooncology 2003;12:557-66. doi:10.1002/pon.673
Patients' experiences when accessing their on-line electronic patient records in primary care. C Pyper, Amery J Watson, M , Br J Gen Pr. 54Pyper C, Amery J, Watson M, et al. Patients' experiences when accessing their on-line electronic patient records in primary care. Br J Gen Pr 2004;54:38-43.
Towards consumer-friendly PHRs: patients' experience with reviewing their health records. A Keselman, L Slaughter, C A Smith, Proceedings of AMIA Annual Symposium. AMIA Annual SymposiumKeselman A, Slaughter L, Smith CA, et al. Towards consumer-friendly PHRs: patients' experience with reviewing their health records. In: Proceedings of AMIA Annual Symposium. 2007. 399-403.
Lay understanding of common medical terminology in oncology. A H Pieterse, N A Jager, Ema Smets, 10.1002/pon.3096Psychooncology. 22Pieterse AH, Jager NA, Smets EMA, et al. Lay understanding of common medical terminology in oncology. Psychooncology 2013;22:1186-91. doi:10.1002/pon.3096
Promoting health literacy. A T Mccray, J Am Med Inform Assoc. 12McCray AT. Promoting health literacy. J Am Med Inform Assoc 2005;12:152-163.
Making texts in electronic health records comprehensible to consumers: a prototype translator. Q Zeng-Treitler, S Goryachev, H Kim, Proceedings of AMIA Annual Symposium. AMIA Annual SymposiumZeng-Treitler Q, Goryachev S, Kim H, et al. Making texts in electronic health records comprehensible to consumers: a prototype translator. In: Proceedings of AMIA Annual Symposium. 2007. 846-50.
A semantic and syntactic text simplification tool for health content. S Kandula, D Curtis, Q Zeng-Treitler, Proceedings of AMIA Annual Symposium. AMIA Annual SymposiumKandula S, Curtis D, Zeng-Treitler Q. A semantic and syntactic text simplification tool for health content. In: Proceedings of AMIA Annual Symposium. 2010. 366-70.
Medical text simplification using synonym replacement: Adapting assessment of word difficulty to a compounding language. E Abrahamsson, T Forni, M Skeppstedt, Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations at EACL. Association for Computational Linguistics. the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations at EACL. Association for Computational LinguisticsAbrahamsson E, Forni T, Skeppstedt M, et al. Medical text simplification using synonym replacement: Adapting assessment of word difficulty to a compounding language. In: Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations at EACL. Association for Computational Linguistics 2014. 57-65.
Improving Patients' Electronic Health Record Comprehension with NoteAid. Polepalli Ramesh, B Houston, T Brandt, C , Stud Health Technol Inform. 192Polepalli Ramesh B, Houston T, Brandt C, et al. Improving Patients' Electronic Health Record Comprehension with NoteAid. Stud Health Technol Inform 2013;192:714-8.
Exploring and developing consumer health vocabularies. Q T Zeng, T Tse, J Am Med Inform Assoc. 1324Zeng QT, Tse T. Exploring and developing consumer health vocabularies. J Am Med Inform Assoc 2006;13:24.
Terminology issues in user access to Web-based medical information. A T Mccray, R F Loane, A C Browne, Proc AMIA Symp. McCray AT, Loane RF, Browne AC, et al. Terminology issues in user access to Web-based medical information. Proc AMIA Symp 1999;:107-11.
Patient and clinician vocabulary: how different are they? Stud. Q Zeng, S Kogan, N Ash, Health Technol Inform. 84Zeng Q, Kogan S, Ash N, et al. Patient and clinician vocabulary: how different are they? Stud Health Technol Inform 2001;84:399-403.
Evaluation of Controlled Vocabulary Resources for Development of a Consumer Entry Vocabulary for Diabetes. T B Patrick, H K Monga, M C Sievert, doi:10J Med Internet Res. 324Patrick TB, Monga HK, Sievert MC, et al. Evaluation of Controlled Vocabulary Resources for Development of a Consumer Entry Vocabulary for Diabetes. J Med Internet Res 2001;3:e24. doi:10.
Characteristics of consumer terminology for health information retrieval. Q Zeng, S Kogan, N Ash, Methods Inf Med. 41Zeng Q, Kogan S, Ash N, et al. Characteristics of consumer terminology for health information retrieval. Methods Inf Med 2002;41:289-298.
Identifying consumer-friendly display (CFD) names for health concepts. Q T Zeng, T Tse, J Crowell, Proceedings of AMIA Annual Symposium. AMIA Annual SymposiumZeng QT, Tse T, Crowell J, et al. Identifying consumer-friendly display (CFD) names for health concepts. In: Proceedings of AMIA Annual Symposium. 2005. 859- 63.
Consumer health concepts that do not map to the UMLS: where do they fit?. A Keselman, C A Smith, G Divita, 10.1197/jamia.M2599J Am Med Inform Assoc JAMIA. 15Keselman A, Smith CA, Divita G, et al. Consumer health concepts that do not map to the UMLS: where do they fit? J Am Med Inform Assoc JAMIA 2008;15:496- 505. doi:10.1197/jamia.M2599
Exploring Medical Expressions Used by Consumers and the Media: An Emerging View of Consumer Health Vocabularies. T Tse, D Soergel, AMIA Annu Symp Proc. Tse T, Soergel D. Exploring Medical Expressions Used by Consumers and the Media: An Emerging View of Consumer Health Vocabularies. AMIA Annu Symp Proc 2003;2003:674-8.
Term identification methods for consumer health vocabulary development. Q Zeng, T Tse, G Divita, J Med Internet Res. 94Zeng Q, Tse T, Divita G, et al. Term identification methods for consumer health vocabulary development. J Med Internet Res 2007;9:e4.
Facilitating the development of controlled vocabularies for metabolomics technologies with text mining. I Spasić, D Schober, S-A Sansone, BMC Bioinformatics. 95Spasić I, Schober D, Sansone S-A, et al. Facilitating the development of controlled vocabularies for metabolomics technologies with text mining. BMC Bioinformatics 2008;9:S5.
Computer-assisted update of a consumer health vocabulary through mining of social network data. K M Doing-Harris, Q Zeng-Treitler, 10.2196/jmir.1636J Med Internet Res. 1337Doing-Harris KM, Zeng-Treitler Q. Computer-assisted update of a consumer health vocabulary through mining of social network data. J Med Internet Res 2011;13:e37. doi:10.2196/jmir.1636
A Comparative Evaluation of Term Recognition Algorithms. Z Zhang, J Iria, C Brewster, Proceedings of the sixth international conference of Language Resources and Evaluation (LREC). the sixth international conference of Language Resources and Evaluation (LREC)Zhang Z, Iria J, Brewster C, et al. A Comparative Evaluation of Term Recognition Algorithms. In: Proceedings of the sixth international conference of Language Resources and Evaluation (LREC). 2008. 2108-13.
Finding Important Terms for Patients in Their Electronic Health Records: A Learning-to-Rank Approach Using Expert Annotations. J Chen, J Zheng, H Yu, JMIR Med Inform. 26acceptChen J, Zheng J, Yu H. Finding Important Terms for Patients in Their Electronic Health Records: A Learning-to-Rank Approach Using Expert Annotations. JMIR Med Inform (accept) 26
| [] |
[
"Ensemble Fine-tuned mBERT for Translation Quality Estimation",
"Ensemble Fine-tuned mBERT for Translation Quality Estimation"
] | [
"Shaika Chowdhury ",
"Naouel Baili naouel.baili@iqvia.com ",
"Brian Vannah brian.vannah@iqvia.com ",
"\nUniversity of Illinois at Chicago\nIQVIA\nUSUS\n",
"\nIQVIA\nUS\n"
] | [
"University of Illinois at Chicago\nIQVIA\nUSUS",
"IQVIA\nUS"
] | [
"Proceedings of the Sixth Conference on Machine Translation (WMT)"
] | Quality Estimation (QE) is an important component of the machine translation workflow as it assesses the quality of the translated output without consulting reference translations. In this paper, we discuss our submission to the WMT 2021 QE Shared Task. We participate in Task 2 sentence-level sub-task that challenge participants to predict the HTER score for sentence-level post-editing effort. Our proposed system is an ensemble of multilingual BERT (mBERT)-based regression models, which are generated by fine-tuning on different input settings. It demonstrates comparable performance with respect to the Pearson's correlation and beats the baseline system in MAE/ RMSE for several language pairs. In addition, we adapt our system for the zero-shot setting by exploiting target language-relevant language pairs and pseudo-reference translations. | null | [
"https://www.aclanthology.org/2021.wmt-1.93.pdf"
] | 237,453,579 | 2109.03914 | 4b493196a4ab2bee642dc9811d54ba19fa11d8a8 |
Ensemble Fine-tuned mBERT for Translation Quality Estimation
November 10-11, 2021
Shaika Chowdhury
Naouel Baili naouel.baili@iqvia.com
Brian Vannah brian.vannah@iqvia.com
University of Illinois at Chicago
IQVIA
USUS
IQVIA
US
Ensemble Fine-tuned mBERT for Translation Quality Estimation
Proceedings of the Sixth Conference on Machine Translation (WMT)
the Sixth Conference on Machine Translation (WMT)November 10-11, 2021897
Quality Estimation (QE) is an important component of the machine translation workflow as it assesses the quality of the translated output without consulting reference translations. In this paper, we discuss our submission to the WMT 2021 QE Shared Task. We participate in Task 2 sentence-level sub-task that challenge participants to predict the HTER score for sentence-level post-editing effort. Our proposed system is an ensemble of multilingual BERT (mBERT)-based regression models, which are generated by fine-tuning on different input settings. It demonstrates comparable performance with respect to the Pearson's correlation and beats the baseline system in MAE/ RMSE for several language pairs. In addition, we adapt our system for the zero-shot setting by exploiting target language-relevant language pairs and pseudo-reference translations.
Introduction
Progress in machine translation (MT) has accelerated with the introduction of deep learning based approaches, dubbed neural machine translation (NMT) (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2014). Several metrics (e.g., BLEU (Papineni et al., 2002) and METEOR (Agarwal and Lavie, 2008)) are used to automatically evaluate the quality of the translations produced by NMT systems. However, these evaluation metrics require comparing the NMT outputs against human-prepared reference translations, which cannot always be readily obtained. To tackle this predicament, quality estimation (QE) (Blatz et al., 2004; Specia et al., 2018) has recently emerged as an alternative evaluation approach for NMT systems. QE obviates the need for human judgements and hence can be efficiently integrated into a dynamic translation pipeline in an industry setting.*

QE is performed at different granularities (e.g., word, sentence, document) (Kepler et al., 2019a); in this work we focus on the sentence-level post-editing effort task, which predicts the quality of the translated sentence as a whole in terms of the number of edit operations needed to yield a post-edited translation, termed HTER (Snover et al., 2006).

* Work done during an internship at IQVIA.
Sentence-level QE using neural approaches is generally treated as a supervised regression problem involving two main steps. In the first step, an encoder is used to learn vector representations of the source and translation sentences; in the second step, the learned representations are passed through a sigmoid output layer to estimate the HTER score. These two steps can be performed either with a single model in an end-to-end fashion (e.g., Bi-RNN (Ive et al., 2018)) or using two separate models (e.g., POSTECH (Kim et al., 2017)). The different QE systems vary in their choice of the encoder, which ranges from RNN-based to Transformer-based models.
In this work, we leverage the fine-tuning capability of a Transformer-based encoder, namely the pre-trained mBERT model (Devlin et al., 2018). Alongside the standard practice of feeding both the source and target (i.e., translation) sentences as the input sequence (Kepler et al., 2019a; Kim et al., 2019), we also explore input settings based only on the target-side sentences (i.e., monolingual context). To this end, our final QE system is an ensemble of several mBERT models, 1 each generated by fine-tuning on a different input combination comprising the source and/or target sentences. We experiment with the following three input settings: (1) both source and target, (2) just the target, and (3) the target paired with a randomly sampled target-side sentence from the data. Empirical analysis on 6 language pairs shows that the ensemble model performs better than the individual fine-tuned models. Moreover, we provide experimental results for zero-shot QE, where training data for the test language pair is not available. We tackle this by leveraging the available training/dev data that match the target language of the test language pair and by generating pseudo-reference translations in that language.
Data
We use the WMT21 QE Shared Task 2 sentence-level data (Specia et al., 2021; Fomicheva et al., 2020a,b) for the following 7 language pairs: English-German (En-De), Romanian-English (Ro-En), Estonian-English (Et-En), Nepalese-English (Ne-En), Sinhala-English (Si-En), Russian-English (Ru-En) and Khmer-English (Km-En). Source-side data for each language pair consists of sentences from Wikipedia articles, with part of the Ru-En data gathered from Reddit articles. The translations were produced by state-of-the-art MT models (Vaswani et al., 2017) built with the fairseq toolkit (Ott et al., 2019). The label for this task is the HTER score for the source-translation pair. Annotation was performed first at the word level with the help of the TER tool. 2 The word-level tags were then aggregated deterministically to obtain the sentence-level HTER score. The training, development, test and blind test data sizes for each language pair (except Km-En) are 7K, 1K, 1K and 1K instances respectively. As Km-En was introduced for zero-shot prediction, only its test data containing 990 source and translation sentences was provided.
Our Approach
A key innovation in recent neural models lies in learning contextualized representations by pre-training on a language modeling task. One such model, multilingual BERT (mBERT), 3 is a Transformer-based masked language model that is pre-trained on monolingual Wikipedia corpora of 104 languages with a shared word-piece vocabulary. Training the pre-trained mBERT model on a supervised downstream task, known as fine-tuning, has led to strong performance across a wide spectrum of NLP tasks (Devlin et al., 2018). Our proposed approach leverages this fine-tuning capability of mBERT to form the component models of the ensemble QE system (Section 3.3). That is, each component model is a re-purposed mBERT that is fine-tuned for the sentence-level HTER score prediction task on one of the three input settings discussed in Section 3.2.
3.1 Fine-tuning mBERT for Regression

mBERT's model architecture is similar to BERT 4 and uses the following parameter settings: 12 layers, 12 attention heads and a 768-dimensional hidden representation per token. The only difference is that mBERT is trained on corpora of multiple languages instead of just English. This enables mBERT to share representations across the different languages, and hence it can be conveniently used for all language pairs in the WMT21 data.
We first load the pre-trained mBERT model 5 and use its weights as the starting point of fine-tuning. The pre-trained mBERT is then trained on QE-specific input sequences (Section 3.2) for a few epochs, such that the constructed sequence X is consumed by mBERT to output the contextualized representation h = (h_[CLS], h_x1, h_x2, ..., h_xT, h_[SEP]).
Here, [CLS] is a special symbol that denotes downstream classification and [SEP] separates non-consecutive token sequences. Taking the final hidden vector of the [CLS] token as the aggregate representation, we pass it into an output layer with sigmoid activation to predict the HTER score:
y = sigmoid(W · h_[CLS] + b)    (1)
W is a weight matrix (with bias b) for sentence-level QE fine-tuning; it is trained along with all the parameters of mBERT end-to-end.
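The following is a minimal PyTorch/HuggingFace sketch of this regression set-up (our illustration, not the authors' released code): the pre-trained mBERT encoder is topped with a single sigmoid output layer over the [CLS] representation, as in Eq. (1). The class name and the use of a mean-squared-error loss against the gold HTER scores are assumptions not specified in the paper.

```python
import torch
import torch.nn as nn
from transformers import AutoModel


class MBertHTERRegressor(nn.Module):
    """mBERT encoder with a sigmoid regression head over the [CLS] token."""

    def __init__(self, model_name="bert-base-multilingual-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)      # pre-trained mBERT
        self.out = nn.Linear(self.encoder.config.hidden_size, 1)  # W and b from Eq. (1)

    def forward(self, input_ids, attention_mask, token_type_ids=None):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask,
                              token_type_ids=token_type_ids).last_hidden_state
        h_cls = hidden[:, 0]                               # final hidden vector of [CLS]
        return torch.sigmoid(self.out(h_cls)).squeeze(-1)  # predicted HTER in [0, 1]


model = MBertHTERRegressor()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # lr reported in Section 3.2
loss_fn = nn.MSELoss()  # assumed regression loss against the gold HTER scores
```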
3.2 Input Settings
We construct the input sequence for each language pair in the following three ways:

SRC-MT: Given a source sentence s = (s_1, s_2, ..., s_N) from a source language (e.g., English) and its translation t = (t_1, t_2, ..., t_M) from a target language (e.g., German), we concatenate them as X = ([CLS], t_1, t_2, ..., t_M, [SEP], s_1, s_2, ..., s_N, [SEP]) to form the input sequence.

MT: Only the target sentence is used to form the input sequence, X = ([CLS], t_1, t_2, ..., t_M, [SEP]).

MT-MT': Given the translation t for a source sentence s, we randomly sample another translation t' = (t'_1, t'_2, ..., t'_K) from the training data whose HTER label is close to that of t. 6 Although the source sentences for t and t' are different, we assume the additional monolingual context helps mBERT learn correlating QE-specific features between t and t' for the target-side language. The resultant input sequence is X = ([CLS], t_1, t_2, ..., t_M, [SEP], t'_1, t'_2, ..., t'_K, [SEP]).
We fine-tune each of these mBERT models using the AdamW optimizer (Kingma and Ba, 2014; Loshchilov and Hutter, 2017) for two epochs with a batch size of 32 and a learning rate of 2e-5.
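As a concrete illustration (a sketch under assumptions, not the released implementation), the three input settings can be constructed with the mBERT tokenizer, which inserts the [CLS]/[SEP] markers shown above. The function names and the `train_examples` list of (translation, HTER) pairs are hypothetical.

```python
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")


def src_mt_input(target, source):
    # SRC-MT: [CLS] t_1 ... t_M [SEP] s_1 ... s_N [SEP]
    return tokenizer(target, source, truncation=True)


def mt_input(target):
    # MT: [CLS] t_1 ... t_M [SEP]
    return tokenizer(target, truncation=True)


def mt_mt_prime_input(target, hter, train_examples, tolerance=0.1):
    # MT-MT': pair the translation with another training translation t' whose
    # HTER label lies within `tolerance` (0.1 in the paper) of this sentence's label.
    candidates = [t for t, h in train_examples
                  if abs(h - hter) <= tolerance and t != target]
    t_prime = random.choice(candidates)
    return tokenizer(target, t_prime, truncation=True)
```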
3.3 Ensemble Model
To take advantage of the individual strengths of the three mBERT component models fine-tuned on the aforementioned input settings, we combine their HTER score predictions by training an ensemble model. In particular, we experiment with three different ensemble methods: Gradient Boosting (Friedman, 2001), AdaBoost (Freund and Schapire, 1997) and Average. For Gradient Boosting and AdaBoost we use the implementation in scikit-learn 7 with 10-fold cross validation. The settings for Gradient Boosting are: number of estimators 600, learning rate 0.01, minimum number of samples 3, and otherwise default settings. We use the default settings for AdaBoost. In Average ensembling, we average the HTER score predictions of the three mBERT models. Our system submission to WMT21 is based on Gradient Boosting, as it gave the best performance on the test data, as shown in Table 1.
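A sketch of the Gradient Boosting ensembling step is shown below; the prediction arrays are synthetic placeholders, and mapping the reported "minimum number of samples 3" to min_samples_leaf is an assumption (it could instead refer to min_samples_split).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical HTER predictions of the three fine-tuned mBERT models
# (SRC-MT, MT, MT-MT') on the training data, plus the gold HTER labels.
rng = np.random.default_rng(0)
preds_src_mt, preds_mt, preds_mt_prime = rng.random((3, 1000))
gold_hter = rng.random(1000)

X = np.column_stack([preds_src_mt, preds_mt, preds_mt_prime])  # one column per model

gbr = GradientBoostingRegressor(n_estimators=600, learning_rate=0.01,
                                min_samples_leaf=3)
cv_mae = -cross_val_score(gbr, X, gold_hter, cv=10,
                          scoring="neg_mean_absolute_error").mean()  # 10-fold CV
gbr.fit(X, gold_hter)

average_ensemble = X.mean(axis=1)  # the simple "Average" ensembling baseline
```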
Zero-Shot QE
Performing sentence-level QE in the zero-shot setting presents a unique challenge, as the QE system is expected to predict HTER scores for sentences in a test language pair (e.g., Km-En) without having been trained on any instances from that test language pair. We address this by training on language pairs in the WMT21 QE data that match the target-side language (i.e., En) of the test language pair. We focus on the target-side language because the component mBERT models in the proposed ensemble QE system are fine-tuned on monolingual input sequences in the target-side language, which could help the QE system generalize to the unseen test language pair. We consider the training and development data for the following language pairs in the WMT21 QE data: Ro-En, Si-En and Et-En. Additionally, we augment this data by generating pseudo-references in the target language. A pseudo-reference (Scarton and Specia, 2014) is a translation of a source sentence produced by a different NMT system than the one that produced the actual translations (e.g., the Transformer-based translation system proposed by Vaswani et al. (2017)), and has been shown to improve sentence-level QE performance (Soricut and Narsale, 2012). We use Google Translate 8 to obtain the pseudo-references in En for the Ro, Si and Et source sentences. The HTER scores for the translation and pseudo-reference pairs are then obtained using the TER tool. We train the ensemble QE system on the combined WMT21 QE data and the pseudo-reference parallel data, and test on the unseen test language pair.
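The pseudo-reference augmentation can be sketched as follows. Here translate_to_english (standing in for the Google Translate wrapper) and compute_hter (standing in for the TER tool) are hypothetical placeholder callables, not APIs from this paper's codebase.

```python
def build_pseudo_reference_data(source_sentences, mt_translations,
                                translate_to_english, compute_hter):
    """Create extra (src, mt, HTER) instances for a *-En language pair by scoring
    each MT output against a pseudo-reference produced by a second MT system."""
    augmented = []
    for src, mt in zip(source_sentences, mt_translations):
        pseudo_ref = translate_to_english(src)                    # pseudo-reference in En
        hter = compute_hter(hypothesis=mt, reference=pseudo_ref)  # label for this pair
        augmented.append({"src": src, "mt": mt, "hter": hter})
    return augmented

# The zero-shot ensemble is then trained on the Ro-En, Si-En and Et-En WMT21
# data plus these augmented instances, and applied directly to the Km-En test set.
```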
Baseline
The baseline QE system (BASELINE) set by the WMT21 organizers this year is the Transformer-based Predictor-Estimator model (Kepler et al., 2019b; Moura et al., 2020). XLM-RoBERTa is used as the Predictor for feature generation. The baseline system is fine-tuned on the HTER scores and word-level tags jointly.

Results

Table 2 presents the experimental results of mBERT fine-tuned with the SRC-MT, MT and MT-MT' input settings, as well as the performance of the ensemble of the three mBERT models, which we call ENSBRT. Comparing the three input settings, mBERT exhibits competitive results even when it has no knowledge of the source-side text (the MT and MT-MT' settings), in particular for the language pairs En-De, Si-En and Ne-En. The ensemble mBERT model, ENSBRT, outperforms its individual components for all language pairs, showing that the ensemble method can balance out the weaknesses of any component model and thereby benefits the sentence-level QE task overall. We also visualize ENSBRT's predictions against the ground truth HTER scores in Figure 1.

Figure 1: Visualization comparing HTER score predictions by ENSBRT (predicted, in red) against the gold labels (original, in blue) for 6 language pairs on the test set. The x-axis represents each data point and the y-axis is the HTER score. The closer the corresponding red line and blue dot are to each other, the better, as we expect the HTER prediction to be the same as or close to the ground truth.
Conclusion
In this work, we describe the ENSBRT system submission to the WMT21 QE Shared Task. ENSBRT is based on fine-tuning the multilingual BERT pre-trained model for sentence-level translation quality score prediction. We explore three different input settings for fine-tuning, which include either bilingual or monolingual context, and combine the predictions of the three models using ensemble methods to form our final system. Furthermore, zero-shot QE is facilitated by using labeled data for existing language pairs and pseudo-references that align with the target language of the unseen test data.
References
Abhaya Agarwal and Alon Lavie. 2008. Meteor, M-BLEU and M-TER: Evaluation metrics for high-correlation with human rankings of machine translation output. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 115-118.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Table 1: Performance of ENSBRT with different ensemble methods on the En-De test set.

             Avg     AdaBoost   GradBoost
Pearson's    0.266   0.458      0.473
Spearman's   0.249   0.436      0.443
Table 2: Performance in Pearson's correlation of mBERT fine-tuned with different input settings on the test set. ENSBRT is the proposed ensemble mBERT QE system.

           En-De   Ro-En   Ru-En   Si-En   Et-En   Ne-En
SRC-MT     0.389   0.793   0.400   0.526   0.601   0.489
MT         0.469   0.762   0.374   0.552   0.580   0.491
MT-MT'     0.431   0.761   0.350   0.492   0.556   0.454
ENSBRT     0.473   0.802   0.418   0.576   0.632   0.525
Table 3: Performance of BASELINE and ENSBRT on the WMT21 blind test set for different language pairs. Bold indicates ENSBRT beats BASELINE in that metric.

BASELINE
              En-De   Ro-En   Ru-En   Si-En   Et-En   Ne-En   Km-En
Pearson's ↑   0.529   0.831   0.448   0.607   0.714   0.626   0.576
MAE ↓         0.183   0.142   0.255   0.204   0.195   0.205   0.241
RMSE ↓        0.129   0.115   0.188   0.159   0.149   0.160   0.196

ENSBRT
              En-De   Ro-En   Ru-En   Si-En   Et-En   Ne-En   Km-En
Pearson's ↑   0.519   0.795   0.376   0.522   0.666   0.572   0.529
MAE ↓         0.171   0.171   0.251   0.206   0.171   0.176   0.262
RMSE ↓        0.129   0.141   0.189   0.162   0.132   0.139   0.197
Table 3 compares the QE performance of BASELINE and ENSBRT in terms of Pearson's correlation, RMSE and MAE on the WMT21 blind test set, for which the ground truth HTER scores were not available at the time. We submitted results for 6 language pairs (En-De, Ro-En, Ru-En, Si-En, Et-En, Ne-En) in the normal QE setting and one language pair (Km-En) for zero-shot prediction. ENSBRT demonstrates comparable performance to the BASELINE in Pearson's correlation and outperforms it in either MAE or RMSE for the following language pairs: En-De, Ru-En, Et-En and Ne-En.
1 We also experimented with XLM-RoBERTa (Conneau et al., 2019) as the component model in our preliminary run; however, the results were worse compared to mBERT.
2 http://www.cs.umd.edu/~snover/tercom/
3 https://github.com/googleresearch/bert/blob/master/multilingual.md
4 https://huggingface.co/bert-base-uncased
5 https://huggingface.co/bert-base-multilingual-uncased
6 To ensure that t' is similar to t, we check that the difference between their HTER scores is within 0.1.
7 https://scikit-learn.org
8 https://github.com/lushan88a/google_trans_new
John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto Sanchis, and Nicola Ueffing. 2004. Confidence estimation for machine translation. In Coling 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 315-321.

Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Marina Fomicheva, Shuo Sun, Erick Fonseca, Frédéric Blain, Vishrav Chaudhary, Francisco Guzmán, Nina Lopatina, Lucia Specia, and André F. T. Martins. 2020a. MLQE-PE: A multilingual quality estimation and post-editing dataset. arXiv preprint arXiv:2010.04480.

Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Francisco Guzmán, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020b. Unsupervised quality estimation for neural machine translation. Transactions of the Association for Computational Linguistics, 8:539-555.

Yoav Freund and Robert E. Schapire. 1997. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139.

Jerome H. Friedman. 2001. Greedy function approximation: a gradient boosting machine. Annals of Statistics, pages 1189-1232.

Julia Ive, Frédéric Blain, and Lucia Specia. 2018. deepQuest: a framework for neural-based quality estimation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3146-3157. Association for Computational Linguistics.

Fabio Kepler, Jonay Trénous, Marcos Treviso, Miguel Vera, António Góis, M. Amin Farajian, António V. Lopes, and André F. T. Martins. 2019a. Unbabel's participation in the WMT19 translation quality estimation shared task. arXiv preprint arXiv:1907.10352.

Fabio Kepler, Jonay Trénous, Marcos Treviso, Miguel Vera, and André F. T. Martins. 2019b. OpenKiwi: An open source framework for quality estimation. arXiv preprint arXiv:1902.08646.

Hyun Kim, Hun-Young Jung, Hongseok Kwon, Jong-Hyeok Lee, and Seung-Hoon Na. 2017. Predictor-estimator: Neural quality estimation based on target word prediction for machine translation. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 17(1):1-22.

Hyun Kim, Joon-Ho Lim, Hyun-Ki Kim, and Seung-Hoon Na. 2019. QE BERT: bilingual BERT using multi-task learning for neural quality estimation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 85-89.

Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.

Joao Moura, Miguel Vera, Daan van Stigt, Fabio Kepler, and André F. T. Martins. 2020. IST-Unbabel participation in the WMT20 quality estimation shared task. In Proceedings of the Fifth Conference on Machine Translation, pages 1029-1036.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. arXiv preprint arXiv:1904.01038.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.

Carolina Scarton and Lucia Specia. 2014. Document-level translation quality estimation: exploring discourse and pseudo-references. In Proceedings of the 17th Annual Conference of the European Association for Machine Translation, pages 101-108.

Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers, pages 223-231.

Radu Soricut and Sushant Narsale. 2012. Combining quality prediction and system selection for improved automatic translation output. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 163-170.

Lucia Specia, Frédéric Blain, Marina Fomicheva, Chrysoula Zerva, Zhenhao Li, Vishrav Chaudhary, and André F. T. Martins. 2021. Findings of the WMT 2021 shared task on quality estimation. In Proceedings of the Sixth Conference on Machine Translation, Online. Association for Computational Linguistics.

Lucia Specia, Carolina Scarton, and Gustavo Henrique Paetzold. 2018. Quality estimation for machine translation. Synthesis Lectures on Human Language Technologies, 11(1):1-162.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
| [
"https://github.com/googleresearch/bert/blob/master/multilingual.md",
"https://github.com/lushan88a/google_trans_new"
] |